path | concatenated_notebook
---|---
hw08/hw08.ipynb | ###Markdown
Homework No. 8 Student: Mikhail Pravilov
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
We are given the Sturm–Liouville eigenvalue problem $-\phi''(r) + l(l+1)r^{-2}\phi(r) - 2r^{-1}\phi(r) = 2E_{nl}\phi(r)$, $\phi(0) = \phi(\infty) = 0$. Let us rewrite it, making the substitution $\lambda = -2E_{nl} = -2 \cdot \frac{-1}{2(n+l+1)^2} = \frac{1}{(n+l+1)^2}$ and bringing it to the form seen in the lecture: $\phi''(r) - (l(l+1)r^{-2} - 2r^{-1})\phi(r) = \lambda\phi(r)$, that is, $\phi''(r) - p(r)\phi(r) = \lambda\rho(r)\phi(r)$, where $p(r) = l(l+1)r^{-2} - 2r^{-1}$, $\rho(r) = 1$ and $\phi(0) = \phi(R) = 0$. Task: use a second-order grid approximation to solve this Sturm–Liouville problem (the spectrum of the matrix may be found with library functions). Compute the first 5 eigenvalues for $l = 0$ to an accuracy of $\epsilon = 10^{-5}$ (start the computation with R = 10), and plot the first 5 eigenfunctions. To do this, we need to build the matrix and find its spectrum.
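For reference, the standard second-order central-difference discretization of this problem on the uniform grid $r_i = ih$, $h = R/N$ is $$\frac{\phi_{i+1} - 2\phi_i + \phi_{i-1}}{h^2} - p(r_i)\phi_i = \lambda\phi_i,\qquad i = 1,\dots,N-1,\qquad \phi_0 = \phi_N = 0,$$ i.e. a tridiagonal eigenvalue problem with off-diagonal entries $1/h^2$ and diagonal entries $-(2/h^2 + p(r_i))$; this is the matrix assembled in `get_matrix` below, multiplied by $-1/2$ so that its eigenvalues approximate the energies $E_{nl}$ directly.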
###Code
def rho(x):
return 1
def p(x, l):
return l * (l + 1) * x ** (-2) - 2 * x ** (-1)
def get_matrix(l, R, N):
h = R / N
rhos = [rho(i * h) for i in range(1, N)]
ps = [p(i * h, l) for i in range(1, N)]
A = np.zeros((N - 1, N - 1))
for i in range(0, N - 1):
if i != 0:
A[i][i - 1] = 1 / h ** 2
A[i][i] = - (2 / h ** 2 + ps[i])
if i != N - 2:
A[i][i + 1] = 1 / h ** 2
return -A / 2
###Output
_____no_output_____
###Markdown
From the lecture we know that grid methods converge as $O(h^2)$, so for the error to be of order $10^{-5}$ we need $h^2 = 10^{-5}$, i.e. $h = 10^{-2.5}$; since $h = R/N$, this gives $N = 10^{2.5} \cdot R$. If R = 10, then $N = 10^{3.5}$, so we take $N = 4000$. However, the $O$-notation hides constants that can have a strong effect: in practice a reasonably good answer is not reached with R = 10, N = 4000, whereas with R = 100, N = 5000 the required accuracy of $10^{-5}$ is achieved.
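A quick way to check that the target accuracy is actually reached is to compare the computed spectrum with the exact hydrogen-like eigenvalues $E_n = -\frac{1}{2(n+1)^2}$ for $l = 0$; a minimal sketch reusing `get_matrix` from above (the matrix is symmetric, so `eigvalsh` can be used):
exact = np.array([-1 / (2 * (n + 1) ** 2) for n in range(5)])
# eigvalsh returns the eigenvalues of the symmetric matrix in ascending order
computed = np.linalg.eigvalsh(get_matrix(0, 100, 5000))[:5]
print(np.max(np.abs(computed - exact)))  # per the discussion above, this should be below 1e-5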
###Code
R = 100
A = get_matrix(0, R, 5000)
eig = np.linalg.eig(A)
spectrum = eig[0]
# np.linalg.eig returns eigenvectors as columns, so transpose so that each row is an eigenvector
eig_vectors = eig[1].T
indexes = np.argsort(spectrum)[:5]
print("5 первых собственных значений")
print(spectrum[indexes])
print("---")
print("5 первых собственных функций")
print(eig_vectors[indexes])
def draw_eig_func(data, R):
N = (len(data) + 1)
h = R / N
data_y = np.zeros(N + 1)
data_y[1:N] += data
data_x = [i * h for i in range(N + 1)]
plt.subplot(211)
plt.plot(data_x, data_y)
plt.ylabel("eigen function(N)")
plt.xlabel("N")
plt.figure(figsize=(10, 10), dpi=180)
draw_eig_func(eig_vectors[indexes][0], R)
plt.title("eigen function for lambda1 = " + str(spectrum[indexes][0]))
plt.show()
plt.figure(figsize=(10, 10), dpi=180)
draw_eig_func(eig_vectors[indexes][1], R)
plt.title("eigen function for lambda2 = " + str(spectrum[indexes][1]))
plt.show()
plt.figure(figsize=(10, 10), dpi=180)
draw_eig_func(eig_vectors[indexes][2], R)
plt.title("eigen function for lambda3 = " + str(spectrum[indexes][2]))
plt.show()
plt.figure(figsize=(10, 10), dpi=180)
draw_eig_func(eig_vectors[indexes][3], R)
plt.title("eigen function for lambda4 = " + str(spectrum[indexes][3]))
plt.show()
plt.figure(figsize=(10, 10), dpi=180)
draw_eig_func(eig_vectors[indexes][4], R)
plt.title("eigen function for lambda5 = " + str(spectrum[indexes][4]))
plt.show()
###Output
_____no_output_____
###Markdown
Apparently, the smaller the eigenvalue, the shorter the period of this "eye" pattern, which is why starting from the 4th eigenvalue the plot turns into solid blue: the curve oscillates too rapidly.
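The next two cells switch to a Numerov-type discretization. For reference, the textbook Numerov scheme for $\phi'' - p(r)\phi = \lambda\rho(r)\phi$ leads to a generalized eigenvalue problem $A\phi = \lambda B\phi$ with $$A_{i,i\pm1} = \frac{1}{h^2} - \frac{p_{i\pm1}}{12},\quad A_{i,i} = -\frac{2}{h^2} - \frac{10\,p_i}{12},\quad B_{i,i\pm1} = \frac{\rho_{i\pm1}}{12},\quad B_{i,i} = \frac{10\,\rho_i}{12};$$ the code below builds matrices of this tridiagonal shape (with its own scaling of the coefficients) and then forms $-\frac{1}{2}B^{-1}A$ so that, as before, its eigenvalues approximate $E_{nl}$.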
###Code
def get_A_numerov(l, R, N):
h = R / N
rhos = [rho(i * h) for i in range(1, N)]
ps = [p(i * h, l) for i in range(1, N)]
A = np.zeros((N - 1, N - 1))
for i in range(0, N - 1):
if i != 0:
A[i][i - 1] = 1 / h ** 4 - 1 / 12 * ps[i - 1] / h ** 2
A[i][i] = - (2 / h ** 4 + ps[i] - 1 / 6 * ps[i] / h ** 2)
if i != N - 2:
A[i][i + 1] = 1 / h ** 4 - 1 / 12 * ps[i + 1] / h ** 2
return A
def get_B_numerov(l, R, N):
h = R / N
rhos = [rho(i * h) for i in range(1, N)]
B = np.zeros((N - 1, N - 1))
for i in range(0, N - 1):
if i != 0:
B[i][i - 1] = rhos[i - 1] / h ** 2 / 12
B[i][i] = rhos[i] - 1/ 6 * rhos[i] / h ** 2
if i != N - 2:
B[i][i + 1] = rhos[i + 1] / h ** 2 / 12
return B
def get_matrix_numerov(l, R, N):
return -np.matmul(np.linalg.inv(get_B_numerov(l, R, N)), get_A_numerov(l, R, N)) / 2
def get_first_5(A):
return np.sort(np.linalg.eig(A)[0])[:5]
def draw_error(method, R, N_min, N_max):
real = [-1 / (2 * (n + 1) ** 2) for n in range(5)]
data_x = [i for i in range(N_min, N_max + 1)]
data_y = []
for N in range(N_min, N_max + 1):
matrix = method(0, R, N)
data_y.append(np.log10(max(abs(get_first_5(matrix) - real))))
plt.subplot(211)
plt.plot(data_x, data_y)
plt.ylabel("log10(max error)")
plt.xlabel("N")
plt.figure(figsize=(10, 10), dpi=180)
R = 300
N_min = 6
N_max = 200
draw_error(get_matrix, R, N_min, N_max)
draw_error(get_matrix_numerov, R, N_min, N_max)
plt.title("Max error N")
plt.legend(("mesh", "numerov"))
plt.show()
###Output
/home/mikhail/anaconda3/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
warnings.warn(message, mplDeprecation, stacklevel=1)
|
TEST_Zero_Shot_Pipeline.ipynb | ###Markdown
###Code
!pip install git+https://github.com/huggingface/transformers.git
from transformers import pipeline
classifier = pipeline("zero-shot-classification")
###Output
Some weights of the model checkpoint at facebook/bart-large-mnli were not used when initializing BartForSequenceClassification: ['model.encoder.version', 'model.decoder.version']
- This IS expected if you are initializing BartForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
###Markdown
We can use this pipeline by passing in a sequence and a list of candidate labels. The pipeline assumes by default that only one of the candidate labels is true, returning a list of scores for each label which add up to 1.
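For reference, each call returns a dict per sequence with the candidate labels sorted by score, roughly of the form (the scores here are made up for illustration): {'sequence': 'Who are you voting for in 2020?', 'labels': ['politics', 'economics', 'public health'], 'scores': [0.97, 0.02, 0.01]}.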
###Code
sequence = "Who are you voting for in 2020?"
candidate_labels = ["politics", "public health", "economics"]
classifier(sequence, candidate_labels)
###Output
_____no_output_____
###Markdown
To do multi-class classification, simply pass `multi_class=True`. In this case, the scores will be independent, but each will fall between 0 and 1.
###Code
sequence = "Who are you voting for in 2020?"
candidate_labels = ["politics", "public health", "economics", "elections"]
classifier(sequence, candidate_labels, multi_class=True)
###Output
_____no_output_____
###Markdown
Here's an example of sentiment classification:
###Code
sequence = "I hated this movie. The acting sucked."
candidate_labels = ["positive", "negative"]
classifier(sequence, candidate_labels)
###Output
_____no_output_____
###Markdown
So how does this method work?The underlying model is trained on the task of Natural Language Inference (NLI), which takes in two sequences and determines whether they contradict each other, entail each other, or neither.This can be adapted to the task of zero-shot classification by treating the sequence which we want to classify as one NLI sequence (called the premise) and turning a candidate label into the other (the hypothesis). If the model predicts that the constructed premise _entails_ the hypothesis, then we can take that as a prediction that the label applies to the text. Check out [this blog post](https://joeddav.github.io/blog/2020/05/29/ZSL.html) for a more detailed explanation.By default, the pipeline turns labels into hypotheses with the template `This example is {class_name}.`. This works well in many settings, but you can also customize this for your specific setting. Let's add another review to our above sentiment classification example that's a bit more challenging:
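To make this concrete, here is a rough sketch of the NLI step done by hand with the same checkpoint (an illustration rather than the pipeline's exact implementation; it assumes the entailment logit is the last of the three NLI classes for `facebook/bart-large-mnli`):
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

nli_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-mnli")
nli_model = AutoModelForSequenceClassification.from_pretrained("facebook/bart-large-mnli")

premise = "Who are you voting for in 2020?"
hypothesis = "This example is politics."

# encode the premise/hypothesis pair and run the NLI model
inputs = nli_tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = nli_model(**inputs)[0]  # assumed class order: [contradiction, neutral, entailment]

# drop the "neutral" class and renormalize; the entailment probability is the label score
entail_vs_contradiction = logits[:, [0, 2]].softmax(dim=1)
print(entail_vs_contradiction[:, 1])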
###Code
sequences = [
"I hated this movie. The acting sucked.",
"This movie didn't quite live up to my high expectations, but overall I still really enjoyed it."
]
candidate_labels = ["positive", "negative"]
classifier(sequences, candidate_labels)
###Output
_____no_output_____
###Markdown
The second example is a bit harder. Let's see if we can improve the results by using a hypothesis template which is more specific to the setting of review sentiment analysis. Instead of the default, `This example is {}.`, we'll use, `The sentiment of this review is {}.` (where `{}` is replaced with the candidate class name)
###Code
sequences = [
"I hated this movie. The acting sucked.",
"This movie didn't quite live up to my high expectations, but overall I still really enjoyed it."
]
candidate_labels = ["positive", "negative"]
hypothesis_template = "The sentiment of this review is {}."
classifier(sequences, candidate_labels, hypothesis_template=hypothesis_template)
###Output
_____no_output_____ |
ml/lab1.ipynb | ###Markdown
Abalone Abalone vary in size from 20 mm (0.79 in) (Haliotis pulcherrima) to 200 mm (7.9 in) while Haliotis rufescens is the largest of the genus at 12 in (30 cm).The shell of abalones is convex, rounded to oval in shape, and may be highly arched or very flattened. The shell of the majority of species has a small, flat spire and two to three whorls. The last whorl, known as the body whorl, is auriform, meaning that the shell resembles an ear, giving rise to the common name "ear shell". Haliotis asinina has a somewhat different shape, as it is more elongated and distended. The shell of Haliotis cracherodii cracherodii is also unusual as it has an ovate form, is imperforate, shows an exserted spire, and has prickly ribs.A mantle cleft in the shell impresses a groove in the shell, in which are the row of holes characteristic of the genus. These holes are respiratory apertures for venting water from the gills and for releasing sperm and eggs into the water column. They make up what is known as the selenizone which forms as the shell grows. This series of eight to 38 holes is near the anterior margin. Only a small number is generally open. The older holes are gradually sealed up as the shell grows and new holes form. Each species has a typical number of open holes, between four and 10, in the selenizone. An abalone has no operculum. The aperture of the shell is very wide and nacreous.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.decomposition import PCA
data = pd.read_csv('abalone.data', names=['Sex', 'Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'])
data.head()
###Output
_____no_output_____
###Markdown
Now let's convert categorical feature 'Sex' to numerical via **one-hot encoding**
###Code
data = pd.get_dummies(data)
data.head()
###Output
_____no_output_____
###Markdown
Analysis
###Code
data.describe()
corr = data.corr()
fig, ax = plt.subplots(figsize=(18,10))
sns.heatmap(corr)
corr
fig, ((ax1, ax2), (ax3, ax4),(ax5, ax6),(ax7,ax8)) = plt.subplots(4, 2, figsize = (15,10), sharex=False)
axs = [ax1,ax2,ax3,ax4,ax5,ax6,ax7,ax8]
plt.tight_layout()
for n in range(0, 8):
axs[n].hist(data[data.columns[n]], bins=30)
axs[n].set_title(data.columns[n], fontsize=10)
plt.figure(figsize=(18, 10))
plt.hist(data['Rings'], bins=30)
plt.title("Rings", fontsize=16)
plt.show()
X_train, X_test, y_train, y_test = train_test_split(data.drop(columns=['Rings']), data['Rings'], test_size=.2, random_state=17)
sc = StandardScaler().fit(X_train)
X_train, X_test = sc.transform(X_train), sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Classification
###Code
def approx(y_pred, y_true):
# returns the fraction of predictions within 0.5, 1 and 2 rings of the true value
predictions = list(zip(y_pred, y_true))
return [len(list(filter(lambda a: abs(a[0] - a[1]) <= d, predictions))) / len(predictions) for d in [0.5, 1, 2]]
def score(model):
# fits the model and reports the tolerance-based accuracy on the train and test sets
model.fit(X_train, y_train)
print('Train score: {}'.format(approx(model.predict(X_train), y_train)))
print('Test score: {}'.format(approx(model.predict(X_test), y_test)))
def grid_search(model, params):
# hyperparameter-search helper (GridSearchCV is imported later, in the Regression section)
gs = GridSearchCV(model, params)
return gs.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
K-Neighbors
###Code
score(KNeighborsClassifier(29))
###Output
Train score: [0.3214606405267884, 0.6596827297216402, 0.7955701885662975]
Test score: [0.2619617224880383, 0.6363636363636364, 0.7990430622009569]
###Markdown
SVM + linear kernel
###Code
score(SVC(kernel='linear'))
###Output
Train score: [0.27357078718946426, 0.6381322957198443, 0.7898832684824902]
Test score: [0.25478468899521534, 0.6411483253588517, 0.7858851674641149]
###Markdown
Decision tree
###Code
import graphviz
from sklearn.tree import export_graphviz
dt = DecisionTreeClassifier(max_depth=5)
score(dt)
dot_data = export_graphviz(dt, out_file=None,
feature_names=data.drop(columns=['Rings']).columns,
class_names=[str(i + 1) for i in range(29)],
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
###Output
Train score: [0.31188266985932356, 0.6390302304699191, 0.8054474708171206]
Test score: [0.24162679425837322, 0.6267942583732058, 0.7966507177033493]
###Markdown
Random forest
###Code
score(RandomForestClassifier(max_depth=4, n_estimators=83, max_features=1))
###Output
Train score: [0.29841364860820113, 0.6438192158036516, 0.7832984136486082]
Test score: [0.27751196172248804, 0.6435406698564593, 0.7834928229665071]
###Markdown
Multi-layer perceptron
###Code
score(MLPClassifier(alpha=2))
###Output
Train score: [0.2837473810236456, 0.6569889254714157, 0.8021550434001796]
Test score: [0.26674641148325356, 0.6686602870813397, 0.8086124401913876]
###Markdown
AdaBoost
###Code
score(AdaBoostClassifier())
###Output
Train score: [0.21430709368452558, 0.5501346902125113, 0.7306195749775516]
Test score: [0.23205741626794257, 0.569377990430622, 0.7296650717703349]
###Markdown
Regression
###Code
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
###Output
_____no_output_____
###Markdown
Linear regression
###Code
score(LinearRegression())
###Output
Train score: [0.23585752768632146, 0.43998802753666566, 0.7357078718946424]
Test score: [0.23205741626794257, 0.4258373205741627, 0.7165071770334929]
###Markdown
SVM + RBF kernel
###Code
score(SVR(C=250, gamma=0.01))
###Output
Train score: [0.2927267285243939, 0.5175097276264592, 0.7803052978150254]
Test score: [0.27392344497607657, 0.49401913875598086, 0.7763157894736842]
###Markdown
SVM + polynomial kernel
###Code
score(SVR(kernel='poly', C=100, degree=4))
###Output
Train score: [0.3163723436096977, 0.5474408859622868, 0.7880873989823406]
Test score: [0.25239234449760767, 0.4880382775119617, 0.757177033492823]
###Markdown
Decision tree
###Code
score(DecisionTreeRegressor(max_depth=6, criterion="mse", min_samples_leaf=20))
###Output
Train score: [0.26578868602214906, 0.4890751272074229, 0.7692307692307693]
Test score: [0.23205741626794257, 0.45454545454545453, 0.7332535885167464]
###Markdown
Multi-layer perceptron
###Code
score(MLPRegressor(alpha=1e-2))
###Output
Train score: [0.2529182879377432, 0.4681233163723436, 0.7482789583956899]
Test score: [0.2583732057416268, 0.465311004784689, 0.7332535885167464]
###Markdown
TensorFlow
###Code
import urllib
import tempfile
import tensorflow as tf
FLAGS = None
LEARNING_RATE = 0.001
tf.logging.set_verbosity(tf.logging.INFO)
def maybe_download(train_data=None, test_data=None, predict_data=None):
"""Maybe downloads training data and returns train and test file names."""
if train_data:
train_file_name = train_data
else:
train_file = tempfile.NamedTemporaryFile(delete=False)
urllib.request.urlretrieve(
"http://download.tensorflow.org/data/abalone_train.csv",
train_file.name)
train_file_name = train_file.name
train_file.close()
print("Training data is downloaded to %s" % train_file_name)
if test_data:
test_file_name = test_data
else:
test_file = tempfile.NamedTemporaryFile(delete=False)
urllib.request.urlretrieve(
"http://download.tensorflow.org/data/abalone_test.csv", test_file.name)
test_file_name = test_file.name
test_file.close()
print("Test data is downloaded to %s" % test_file_name)
if predict_data:
predict_file_name = predict_data
else:
predict_file = tempfile.NamedTemporaryFile(delete=False)
urllib.request.urlretrieve(
"http://download.tensorflow.org/data/abalone_predict.csv",
predict_file.name)
predict_file_name = predict_file.name
predict_file.close()
print("Prediction data is downloaded to %s" % predict_file_name)
return train_file_name, test_file_name, predict_file_name
def model_fn(features, labels, mode, params):
first_hidden_layer = tf.layers.dense(features["x"], 10, activation=tf.nn.relu)
second_hidden_layer = tf.layers.dense(
first_hidden_layer, 10, activation=tf.nn.relu)
output_layer = tf.layers.dense(second_hidden_layer, 1)
predictions = tf.reshape(output_layer, [-1])
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={"ages": predictions})
loss = tf.losses.mean_squared_error(labels, predictions)
optimizer = tf.train.GradientDescentOptimizer(
learning_rate=params["learning_rate"])
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(
tf.cast(labels, tf.float64), predictions)
}
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
abalone_train, abalone_test, abalone_predict = maybe_download()
training_set = tf.contrib.learn.datasets.base.load_csv_without_header(
filename=abalone_train, target_dtype=np.int, features_dtype=np.float64)
test_set = tf.contrib.learn.datasets.base.load_csv_without_header(
filename=abalone_test, target_dtype=np.int, features_dtype=np.float64)
prediction_set = tf.contrib.learn.datasets.base.load_csv_without_header(
filename=abalone_predict, target_dtype=np.int, features_dtype=np.float64)
model_params = {"learning_rate": LEARNING_RATE}
nn = tf.estimator.Estimator(model_fn=model_fn, params=model_params)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(training_set.data)},
y=np.array(training_set.target),
num_epochs=None,
shuffle=True)
nn.train(input_fn=train_input_fn, steps=5000)
test_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": np.array(test_set.data)},
y=np.array(test_set.target),
num_epochs=1,
shuffle=False)
ev = nn.evaluate(input_fn=test_input_fn)
print("Loss: %s" % ev["loss"])
print("Root Mean Squared Error: %s" % ev["rmse"])
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": prediction_set.data},
num_epochs=1,
shuffle=False)
predictions = nn.predict(input_fn=predict_input_fn)
for i, p in enumerate(predictions):
print("Prediction %s: %s" % (i + 1, p["ages"]))
t_fn = tf.estimator.inputs.numpy_input_fn(
x={"x": test_set.data},
num_epochs=1,
shuffle=False)
t_pred = nn.predict(input_fn=t_fn)
t_pred = list(map(lambda x: x['ages'], t_pred))
approx(t_pred, test_set.target)
###Output
_____no_output_____ |
docs/mindspore/programming_guide/source_zh_cn/tokenizer.ipynb | ###Markdown
Text Processing and Augmentation [](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/programming_guide/source_zh_cn/tokenizer.ipynb) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/programming_guide/zh_cn/mindspore_tokenizer.ipynb) [](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svbW9kZWxhcnRzL3Byb2dyYW1taW5nX2d1aWRlL21pbmRzcG9yZV90b2tlbml6ZXIuaXB5bmI=&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) Overview Tokenization is the process of recombining a continuous sequence of characters into a sequence of words according to certain rules; splitting text sensibly helps with understanding its semantics. MindSpore provides tokenizers (Tokenizer) for a variety of purposes that let users process text with high performance: users can build their own vocabulary, use a suitable tokenizer to split sentences into tokens, and obtain the index of each token in the vocabulary through lookup operations. The tokenizers currently provided by MindSpore are listed in the table below; in addition, users can implement custom tokenizers as needed.| Tokenizer | Description || :-- | :-- || BasicTokenizer | Tokenizes scalar text data according to specified rules. || BertTokenizer | Tokenizer for processing BERT text data. || JiebaTokenizer | Dictionary-based tokenizer for Chinese strings. || RegexTokenizer | Tokenizes scalar text data according to a specified regular expression. || SentencePieceTokenizer | Tokenizes using the open-source SentencePiece toolkit. || UnicodeCharTokenizer | Tokenizes scalar text data into Unicode characters. || UnicodeScriptTokenizer | Tokenizes scalar text data at Unicode script boundaries. || WhitespaceTokenizer | Tokenizes scalar text data on whitespace. || WordpieceTokenizer | Tokenizes scalar text data based on a word set. |For more detailed descriptions of the tokenizers, see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.text.html). MindSpore Tokenizers The following introduces how to use several common tokenizers. BertTokenizer `BertTokenizer` tokenizes by calling `BasicTokenizer` and `WordpieceTokenizer`. The example below first builds a text dataset and a list of strings, then tokenizes the dataset with `BertTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡", "I am making small mistakes during working hours",
"😀嘿嘿😃哈哈😄大笑😁嘻嘻", "繁體字"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
vocab_list = [
"床", "前", "明", "月", "光", "疑", "是", "地", "上", "霜", "举", "头", "望", "低", "思", "故", "乡",
"繁", "體", "字", "嘿", "哈", "大", "笑", "嘻", "i", "am", "mak", "make", "small", "mistake",
"##s", "during", "work", "##ing", "hour", "😀", "😃", "😄", "😁", "+", "/", "-", "=", "12",
"28", "40", "16", " ", "I", "[CLS]", "[SEP]", "[UNK]", "[PAD]", "[MASK]", "[unused1]", "[unused10]"]
vocab = text.Vocab.from_list(vocab_list)
tokenizer_op = text.BertTokenizer(vocab=vocab)
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
床前明月光
疑是地上霜
举头望明月
低头思故乡
I am making small mistakes during working hours
😀嘿嘿😃哈哈😄大笑😁嘻嘻
繁體字
------------------------after tokenization-----------------------------
['床' '前' '明' '月' '光']
['疑' '是' '地' '上' '霜']
['举' '头' '望' '明' '月']
['低' '头' '思' '故' '乡']
['I' 'am' 'mak' '##ing' 'small' 'mistake' '##s' 'during' 'work' '##ing'
'hour' '##s']
['😀' '嘿' '嘿' '😃' '哈' '哈' '😄' '大' '笑' '😁' '嘻' '嘻']
['繁' '體' '字']
###Markdown
JiebaTokenizer `JiebaTokenizer` performs Chinese tokenization based on jieba. Download the dictionary files `hmm_model.utf8` and `jieba.dict.utf8` and place them in the specified location by running the following commands in a Jupyter Notebook.
###Code
!wget -N https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/hmm_model.utf8 --no-check-certificate
!wget -N https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/jieba.dict.utf8 --no-check-certificate
!mkdir -p ./datasets/tokenizer/
!mv hmm_model.utf8 jieba.dict.utf8 -t ./datasets/tokenizer/
!tree ./datasets/tokenizer/
###Output
./datasets/tokenizer/
├── hmm_model.utf8
└── jieba.dict.utf8
0 directories, 2 files
###Markdown
The example below first builds a text dataset, then creates a `JiebaTokenizer` object from the HMM and MP dictionary files, tokenizes the dataset with it, and finally shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["今天天气太好了我们一起去外面玩吧"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# files from open source repository https://github.com/yanyiwu/cppjieba/tree/master/dict
HMM_FILE = "./datasets/tokenizer/hmm_model.utf8"
MP_FILE = "./datasets/tokenizer/jieba.dict.utf8"
jieba_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)
dataset = dataset.map(operations=jieba_op, input_columns=["text"], num_parallel_workers=1)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
今天天气太好了我们一起去外面玩吧
------------------------after tokenization-----------------------------
['今天天气' '太好了' '我们' '一起' '去' '外面' '玩吧']
###Markdown
SentencePieceTokenizer `SentencePieceTokenizer` is based on [SentencePiece](https://github.com/google/sentencepiece), an open-source natural language processing toolkit. Download the text dataset file `botchan.txt` and place it in the specified location by running the following commands in a Jupyter Notebook.
###Code
!wget -N https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/botchan.txt --no-check-certificate
!mkdir -p ./datasets/tokenizer/
!mv botchan.txt ./datasets/tokenizer/
!tree ./datasets/tokenizer/
###Output
./datasets/tokenizer/
└── botchan.txt
0 directories, 1 files
###Markdown
The example below first builds a text dataset, then builds a `vocab` object from the `vocab_file` file, tokenizes the dataset with `SentencePieceTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
from mindspore.dataset.text import SentencePieceModel, SPieceTokenizerOutType
input_list = ["I saw a girl with a telescope."]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/botchan.txt
vocab_file = "./datasets/tokenizer/botchan.txt"
vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {})
tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
I saw a girl with a telescope.
------------------------after tokenization-----------------------------
['▁I' '▁sa' 'w' '▁a' '▁girl' '▁with' '▁a' '▁te' 'les' 'co' 'pe' '.']
###Markdown
UnicodeCharTokenizer `UnicodeCharTokenizer` tokenizes according to the Unicode character set. The example below first builds a text dataset, then tokenizes it with `UnicodeCharTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
tokenizer_op = text.UnicodeCharTokenizer()
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']).tolist())
###Output
------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
------------------------after tokenization-----------------------------
['W', 'e', 'l', 'c', 'o', 'm', 'e', ' ', 't', 'o', ' ', 'B', 'e', 'i', 'j', 'i', 'n', 'g', '!']
['北', '京', '欢', '迎', '您', '!']
['我', '喜', '欢', 'E', 'n', 'g', 'l', 'i', 's', 'h', '!']
###Markdown
WhitespaceTokenizer `WhitespaceTokenizer` tokenizes on whitespace. The example below first builds a text dataset, then tokenizes it with `WhitespaceTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
tokenizer_op = text.WhitespaceTokenizer()
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']).tolist())
###Output
------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
------------------------after tokenization-----------------------------
['Welcome', 'to', 'Beijing!']
['北京欢迎您!']
['我喜欢English!']
###Markdown
WordpieceTokenizer `WordpieceTokenizer` splits text based on a word set; the split units can be single words from the word set or combinations of several of them. The example below first builds a text dataset, builds a `vocab` object from the word lists, tokenizes the dataset with `WordpieceTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["my", "favorite", "book", "is", "love", "during", "the", "cholera", "era", "what",
"我", "最", "喜", "欢", "的", "书", "是", "霍", "乱", "时", "期", "的", "爱", "情", "您"]
vocab_english = ["book", "cholera", "era", "favor", "##ite", "my", "is", "love", "dur", "##ing", "the"]
vocab_chinese = ["我", '最', '喜', '欢', '的', '书', '是', '霍', '乱', '时', '期', '爱', '情']
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
vocab = text.Vocab.from_list(vocab_english+vocab_chinese)
tokenizer_op = text.WordpieceTokenizer(vocab=vocab)
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
my
favorite
book
is
love
during
the
cholera
era
what
我
最
喜
欢
的
书
是
霍
乱
时
期
的
爱
情
您
------------------------after tokenization-----------------------------
['my']
['favor' '##ite']
['book']
['is']
['love']
['dur' '##ing']
['the']
['cholera']
['era']
['[UNK]']
['我']
['最']
['喜']
['欢']
['的']
['书']
['是']
['霍']
['乱']
['时']
['期']
['的']
['爱']
['情']
['[UNK]']
###Markdown
Text Processing and Augmentation `Ascend` `GPU` `CPU` `Data Preparation`[](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tb2RlbGFydHMvcHJvZ3JhbW1pbmdfZ3VpZGUvbWluZHNwb3JlX3Rva2VuaXplci5pcHluYg==&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/programming_guide/zh_cn/mindspore_tokenizer.ipynb) [](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/master/programming_guide/zh_cn/mindspore_tokenizer.py) [](https://gitee.com/mindspore/docs/blob/master/docs/mindspore/programming_guide/source_zh_cn/tokenizer.ipynb) Overview As the amount of available text data keeps growing, the need to preprocess it in order to obtain the clean data required for network training becomes more pressing. Text dataset preprocessing usually consists of two parts: loading the text dataset and data augmentation. Text data is usually loaded in one of the following ways:- through a text-reading Dataset interface such as [ClueDataset](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset/mindspore.dataset.CLUEDataset.htmlmindspore.dataset.CLUEDataset) or [TextFileDataset](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset/mindspore.dataset.TextFileDataset.htmlmindspore.dataset.TextFileDataset);- by converting the dataset into a standard format (such as the MindRecord format) and then reading it through the corresponding interface (such as [MindDataset](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset/mindspore.dataset.MindDataset.htmlmindspore.dataset.MindDataset));- through the GeneratorDataset interface, which accepts a user-defined dataset-loading function; see the [Loading a Custom Dataset](https://www.mindspore.cn/docs/programming_guide/zh-CN/master/dataset_loading.html%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E9%9B%86%E5%8A%A0%E8%BD%BD) section for usage. For text data augmentation, common operations include tokenization and vocabulary lookup:- after the text dataset is loaded, tokenization is usually required, i.e. splitting an original long sentence into basic tokens;- further, a vocabulary is built to look up the id of each token after splitting, and the ids contained in a sentence are assembled into word vectors that are passed to the network for training. The following mainly introduces the tokenization and vocabulary-lookup functionality used during data augmentation; for usage instructions of the text processing APIs, see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.text.html). Vocabulary Construction and Use A vocabulary provides the mapping between words and ids: through the vocabulary, an input word can be mapped to its id, and conversely a word id can be mapped back to its word. MindSpore provides several ways to construct a vocabulary (Vocab); the raw data can come from a dict, a file, a list, or a Dataset object, with the corresponding interfaces [from_dict](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.htmlmindspore.dataset.text.Vocab.from_dict), [from_file](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.htmlmindspore.dataset.text.Vocab.from_file), [from_list](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.htmlmindspore.dataset.text.Vocab.from_list), [from_dataset](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.htmlmindspore.dataset.text.Vocab.from_dataset). Taking from_dict as an example, a Vocab is constructed as follows; the dict passed in contains several word-id pairs.
###Code
from mindspore.dataset import text
vocab = text.Vocab.from_dict({"home": 3, "behind": 2, "the": 4, "world": 5, "<unk>": 6})
###Output
_____no_output_____
###Markdown
Vocab provides methods for converting between words and ids in both directions, namely the [tokens_to_ids](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.htmlmindspore.dataset.text.Vocab.tokens_to_ids) and [ids_to_tokens](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/dataset_text/mindspore.dataset.text.Vocab.htmlmindspore.dataset.text.Vocab.ids_to_tokens) methods, used as shown below:
###Code
from mindspore.dataset import text
vocab = text.Vocab.from_dict({"home": 3, "behind": 2, "the": 4, "world": 5, "<unk>": 6})
ids = vocab.tokens_to_ids(["home", "world"])
print("ids: ", ids)
tokens = vocab.ids_to_tokens([2, 5])
print("tokens: ", tokens)
###Output
ids: [3, 5]
tokens: ['behind', 'world']
###Markdown
In addition, a Vocab is a required argument for several tokenizers (such as WordpieceTokenizer): during tokenization, words of a sentence that exist in the vocabulary are split off as individual tokens, and the corresponding token ids can then be obtained by looking them up in the vocabulary. MindSpore Tokenizers Tokenization is the process of recombining a continuous sequence of characters into a sequence of words according to certain rules; splitting text sensibly helps with understanding its semantics. MindSpore provides tokenizers (Tokenizer) for a variety of purposes that let users process text with high performance: users can build their own vocabulary, use a suitable tokenizer to split sentences into tokens, and obtain the index of each token in the vocabulary through lookup operations. The tokenizers currently provided by MindSpore are listed in the table below; in addition, users can implement custom tokenizers as needed.| Tokenizer | Description || :-- | :-- || BasicTokenizer | Tokenizes scalar text data according to specified rules. || BertTokenizer | Tokenizer for processing BERT text data. || JiebaTokenizer | Dictionary-based tokenizer for Chinese strings. || RegexTokenizer | Tokenizes scalar text data according to a specified regular expression. || SentencePieceTokenizer | Tokenizes using the open-source SentencePiece toolkit. || UnicodeCharTokenizer | Tokenizes scalar text data into Unicode characters. || UnicodeScriptTokenizer | Tokenizes scalar text data at Unicode script boundaries. || WhitespaceTokenizer | Tokenizes scalar text data on whitespace. || WordpieceTokenizer | Tokenizes scalar text data based on a word set. |The following introduces how to use several common tokenizers. BertTokenizer `BertTokenizer` tokenizes by calling `BasicTokenizer` and `WordpieceTokenizer`. The example below first builds a text dataset and a list of strings, then tokenizes the dataset with `BertTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["床前明月光", "疑是地上霜", "举头望明月", "低头思故乡", "I am making small mistakes during working hours",
"😀嘿嘿😃哈哈😄大笑😁嘻嘻", "繁體字"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
vocab_list = [
"床", "前", "明", "月", "光", "疑", "是", "地", "上", "霜", "举", "头", "望", "低", "思", "故", "乡",
"繁", "體", "字", "嘿", "哈", "大", "笑", "嘻", "i", "am", "mak", "make", "small", "mistake",
"##s", "during", "work", "##ing", "hour", "😀", "😃", "😄", "😁", "+", "/", "-", "=", "12",
"28", "40", "16", " ", "I", "[CLS]", "[SEP]", "[UNK]", "[PAD]", "[MASK]", "[unused1]", "[unused10]"]
vocab = text.Vocab.from_list(vocab_list)
tokenizer_op = text.BertTokenizer(vocab=vocab)
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
床前明月光
疑是地上霜
举头望明月
低头思故乡
I am making small mistakes during working hours
😀嘿嘿😃哈哈😄大笑😁嘻嘻
繁體字
------------------------after tokenization-----------------------------
['床' '前' '明' '月' '光']
['疑' '是' '地' '上' '霜']
['举' '头' '望' '明' '月']
['低' '头' '思' '故' '乡']
['I' 'am' 'mak' '##ing' 'small' 'mistake' '##s' 'during' 'work' '##ing'
'hour' '##s']
['😀' '嘿' '嘿' '😃' '哈' '哈' '😄' '大' '笑' '😁' '嘻' '嘻']
['繁' '體' '字']
###Markdown
JiebaTokenizer `JiebaTokenizer` performs Chinese tokenization based on jieba. The following example code downloads the dictionary files `hmm_model.utf8` and `jieba.dict.utf8` and places them in the specified location.
###Code
import os
import requests
requests.packages.urllib3.disable_warnings()
def download_dataset(dataset_url, path):
filename = dataset_url.split("/")[-1]
save_path = os.path.join(path, filename)
if os.path.exists(save_path):
return
if not os.path.exists(path):
os.makedirs(path)
res = requests.get(dataset_url, stream=True, verify=False)
with open(save_path, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(dataset_url), path))
train_path = "datasets/MNIST_Data/train"
test_path = "datasets/MNIST_Data/test"
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/hmm_model.utf8", "./datasets/tokenizer/")
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/jieba.dict.utf8", "./datasets/tokenizer/")
###Output
_____no_output_____
###Markdown
The downloaded files are placed in the following directory structure:```text./datasets/tokenizer/├── hmm_model.utf8└── jieba.dict.utf8``` The example below first builds a text dataset, then creates a `JiebaTokenizer` object from the HMM and MP dictionary files, tokenizes the dataset with it, and finally shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["今天天气太好了我们一起去外面玩吧"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# files from open source repository https://github.com/yanyiwu/cppjieba/tree/master/dict
HMM_FILE = "./datasets/tokenizer/hmm_model.utf8"
MP_FILE = "./datasets/tokenizer/jieba.dict.utf8"
jieba_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)
dataset = dataset.map(operations=jieba_op, input_columns=["text"], num_parallel_workers=1)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
今天天气太好了我们一起去外面玩吧
------------------------after tokenization-----------------------------
['今天天气' '太好了' '我们' '一起' '去' '外面' '玩吧']
###Markdown
SentencePieceTokenizer `SentencePieceTokenizer` is based on [SentencePiece](https://github.com/google/sentencepiece), an open-source natural language processing toolkit. The following example code downloads the text dataset file `botchan.txt` and places it in the specified location.
###Code
download_dataset("https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/botchan.txt", "./datasets/tokenizer/")
###Output
_____no_output_____
###Markdown
The downloaded file is placed in the following directory structure:```text./datasets/tokenizer/└── botchan.txt``` The example below first builds a text dataset, then builds a `vocab` object from the `vocab_file` file, tokenizes the dataset with `SentencePieceTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
from mindspore.dataset.text import SentencePieceModel, SPieceTokenizerOutType
input_list = ["I saw a girl with a telescope."]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
# file from MindSpore repository https://gitee.com/mindspore/mindspore/blob/master/tests/ut/data/dataset/test_sentencepiece/botchan.txt
vocab_file = "./datasets/tokenizer/botchan.txt"
vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.UNIGRAM, {})
tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
I saw a girl with a telescope.
------------------------after tokenization-----------------------------
['▁I' '▁sa' 'w' '▁a' '▁girl' '▁with' '▁a' '▁te' 'les' 'co' 'pe' '.']
###Markdown
UnicodeCharTokenizer `UnicodeCharTokenizer` tokenizes according to the Unicode character set. The example below first builds a text dataset, then tokenizes it with `UnicodeCharTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
tokenizer_op = text.UnicodeCharTokenizer()
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']).tolist())
###Output
------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
------------------------after tokenization-----------------------------
['W', 'e', 'l', 'c', 'o', 'm', 'e', ' ', 't', 'o', ' ', 'B', 'e', 'i', 'j', 'i', 'n', 'g', '!']
['北', '京', '欢', '迎', '您', '!']
['我', '喜', '欢', 'E', 'n', 'g', 'l', 'i', 's', 'h', '!']
###Markdown
WhitespaceTokenizer `WhitespaceTokenizer` tokenizes on whitespace. The example below first builds a text dataset, then tokenizes it with `WhitespaceTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["Welcome to Beijing!", "北京欢迎您!", "我喜欢English!"]
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
tokenizer_op = text.WhitespaceTokenizer()
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']).tolist())
###Output
------------------------before tokenization----------------------------
Welcome to Beijing!
北京欢迎您!
我喜欢English!
------------------------after tokenization-----------------------------
['Welcome', 'to', 'Beijing!']
['北京欢迎您!']
['我喜欢English!']
###Markdown
WordpieceTokenizer `WordpieceTokenizer` splits text based on a word set; the split units can be single words from the word set or combinations of several of them. The example below first builds a text dataset, builds a `vocab` object from the word lists, tokenizes the dataset with `WordpieceTokenizer`, and shows the text before and after tokenization.
###Code
import mindspore.dataset as ds
import mindspore.dataset.text as text
input_list = ["my", "favorite", "book", "is", "love", "during", "the", "cholera", "era", "what",
"我", "最", "喜", "欢", "的", "书", "是", "霍", "乱", "时", "期", "的", "爱", "情", "您"]
vocab_english = ["book", "cholera", "era", "favor", "##ite", "my", "is", "love", "dur", "##ing", "the"]
vocab_chinese = ["我", '最', '喜', '欢', '的', '书', '是', '霍', '乱', '时', '期', '爱', '情']
dataset = ds.NumpySlicesDataset(input_list, column_names=["text"], shuffle=False)
print("------------------------before tokenization----------------------------")
for data in dataset.create_dict_iterator(output_numpy=True):
print(text.to_str(data['text']))
vocab = text.Vocab.from_list(vocab_english+vocab_chinese)
tokenizer_op = text.WordpieceTokenizer(vocab=vocab)
dataset = dataset.map(operations=tokenizer_op)
print("------------------------after tokenization-----------------------------")
for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
print(text.to_str(i['text']))
###Output
------------------------before tokenization----------------------------
my
favorite
book
is
love
during
the
cholera
era
what
我
最
喜
欢
的
书
是
霍
乱
时
期
的
爱
情
您
------------------------after tokenization-----------------------------
['my']
['favor' '##ite']
['book']
['is']
['love']
['dur' '##ing']
['the']
['cholera']
['era']
['[UNK]']
['我']
['最']
['喜']
['欢']
['的']
['书']
['是']
['霍']
['乱']
['时']
['期']
['的']
['爱']
['情']
['[UNK]']
|
Módulo 2/Clase12_ManejoAnalisisDatosPandas.ipynb | ###Markdown
Applying Python to price analysis: data download, handling and analysis > In this and the next two classes we will look at a case where Monte Carlo simulation is applied to decision making. To get there, in this class we will first see how to manipulate data with *pandas*, both from a local Excel/CSV file and remotely from Yahoo Finance.> Python Data Analysis Library: pandas is an open-source, easy-to-use library that provides high-performance data structures and data analysis tools for the Python programming language.**References:**- http://pandas.pydata.org/- http://www.learndatasci.com/python-finance-part-yahoo-finance-api-pandas-matplotlib/- https://www.datacamp.com/community/tutorials/python-excel-tutorial- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html 0. Motivation Less than a decade ago, financial instruments were at the peak of their popularity. Financial institutions around the world were trading billions of dollars' worth of these instruments daily, and quantitative analysts were modeling them using stochastic calculus and the mighty `C++`. However, progress in recent years has been impressive and things have changed. On the one hand, the [2008 financial crisis](https://es.wikipedia.org/wiki/Crisis_financiera_de_2008) was brought about by the financial instruments called *derivatives*. On the other hand, trading volumes have fallen and the demand for modeling in `C++` has withered with them. Moreover, a new player entered the competition... `Python`! `Python` has been gaining many followers in the financial industry in recent years, and with good reason: together with `R`, it is among the most widely used programming languages for financial analysis. 1. Downloading data from Yahoo! Finance For this we will use the *pandas_datareader* package.**Note**: Python distributions usually do not include the *pandas_datareader* package by default, so it must be installed separately:- search the Start menu for "Anaconda prompt" and run it as administrator;- the following command installs the package in Anaconda: *conda install pandas-datareader*;- once the installation finishes, run *conda list* and check that pandas-datareader was indeed installed.
###Code
# Import the data module from the pandas_datareader package; by convention it is imported as web
import pandas as pd
import pandas_datareader as web
# Standard libraries for arrays and plots
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
First we will import data from a file with the `.csv` extension.
###Code
# Import data from a csv file
name = "WMT.csv"
datos = pd.read_csv(name)
datos
###Output
_____no_output_____
###Markdown
Now we will do the same from Yahoo Finance.
###Code
web.DataReader?
datos = web.DataReader('WMT','yahoo','1972-08-25','2020-11-03')
datos["Adj Close"]
# Write a function to generalize downloading from Yahoo
def get_closes(names,start,end):
precios = web.DataReader(names,'yahoo',start,end)
closes = precios["Adj Close"]
return closes
# Instruments (tickers) to download
names = ['BIMBOA.MX','AEROMEX.MX', 'GFAMSAA.MX']
# Dates: start of 2015 to November 2020
start = '2015-01-01'
end = '2020-11-03'
# Get the adjusted close prices
datos_MX = get_closes(names,start,end)
datos_MX
###Output
_____no_output_____
###Markdown
What do these data look like?
###Code
# Plot
datos_MX.plot(figsize=(15,8))
###Output
_____no_output_____
###Markdown
Once we have the data, we can operate on it. For example, a summary of descriptive statistics can be obtained with
###Code
# describe method
datos_MX.describe()
###Output
_____no_output_____
###Markdown
2. Daily returns For a sequence of prices $\{S_t\}_{t=0}^{n}$, the simple return $R_t$ is defined as the percentage change $$R_t=\frac{S_t-S_{t-1}}{S_{t-1}}$$ for $t=1,\ldots,n$. For the example at hand, how do we compute this?
###Code
# shift method
datos_MX.shift()
# Then the returns are computed as
ret_MX = (datos_MX - datos_MX.shift())/datos_MX.shift()
ret_MX = ret_MX.dropna()
ret_MX
# pct_change method (equivalent one-liner)
datos_MX.pct_change().dropna()
###Output
_____no_output_____
###Markdown
and the plot of the returns can be obtained as...
###Code
# Plot
ret_MX.plot(figsize=(15,8))
###Output
_____no_output_____
###Markdown
Here we can see that the returns fluctuate around a constant level, so we can hypothesize that they can be modeled with a stochastic process that is stationary in the mean. Another frequently used return is the continuously compounded, or logarithmic, return. It is defined as $$r_t=\ln\left(\frac{S_t}{S_{t-1}}\right).$$**This equation is only valid for short time periods.** It is easy to see that $r_t=\ln(1+R_t)$.**Note:** check graphically that if $0\leq|x|\ll 1$, then $\ln(1+x)\approx x$ (for example, for $x = 0.01$ we get $\ln(1.01)\approx 0.00995$). For our case, the formula for the continuously compounded return translates easily into Python code (compute, plot and compare).
###Code
# Log returns
ret_log = np.log(datos_MX/datos_MX.shift())
ret_log
# Plot
ret_log.plot(figsize=(15,8))
# Absolute value of the difference
###Output
_____no_output_____
###Markdown
Again, the returns fluctuate around a constant level, so we can hypothesize that they can be modeled with a stochastic process that is stationary in the mean. We can even hypothesize that the log returns are normally distributed...
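A quick way to probe that hypothesis (a sketch, assuming `scipy` is available) is to compare a histogram of the log returns with a fitted normal density and run a normality test, for example:
from scipy import stats

# histogram of the log returns of one ticker vs. a fitted normal density
r = ret_log['BIMBOA.MX'].dropna()
r.plot(kind='hist', bins=50, density=True, figsize=(10, 5))
x = np.linspace(r.min(), r.max(), 200)
plt.plot(x, stats.norm.pdf(x, r.mean(), r.std()))
plt.show()

# D'Agostino-Pearson test: a small p-value argues against normality
print(stats.normaltest(r))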
###Code
# Mean and volatility of the returns
ret_MX.mean()['BIMBOA.MX']
ret_MX.std()
ret_MX.std()['GFAMSAA.MX']
###Output
_____no_output_____ |
20_newsgroups_automl.ipynb | ###Markdown
20 Newsgroups data import script for *Google Cloud AutoML Natural Language*This notebook downloads the [20 newsgroups dataset](https://scikit-learn.org/0.19/datasets/twenty_newsgroups.html) using scikit-learn. This dataset contains about 18000 posts from 20 newsgroups, and is useful for text classification. The script transforms the data into a pandas dataframe and finally into a CSV file readable by [Google Cloud AutoML Natural Language](https://cloud.google.com/natural-language/automl). Imports
###Code
import numpy as np
import pandas as pd
import csv
from sklearn.datasets import fetch_20newsgroups
###Output
_____no_output_____
###Markdown
Fetch data
###Code
newsgroups = fetch_20newsgroups(subset='all')
df = pd.DataFrame(newsgroups.data, columns=['text'])
df['categories'] = [newsgroups.target_names[index] for index in newsgroups.target]
df.head()
###Output
Downloading 20news dataset. This may take a few minutes.
Downloading dataset from https://ndownloader.figshare.com/files/5975967 (14 MB)
###Markdown
Clean data
###Code
# Convert multiple whitespace characters into a space
df['text'] = df['text'].str.replace(r'\s+', ' ', regex=True)
# Trim leading and tailing whitespace
df['text'] = df['text'].str.strip()
# Truncate all fields to the maximum field length of 128kB
df['text'] = df['text'].str.slice(0,131072)
# Remove any rows with empty fields
df = df.replace('', np.NaN).dropna()
# Drop duplicates
df = df.drop_duplicates(subset='text')
# Limit rows to maximum of 100,000
df = df.sample(min(100000, len(df)))
df.head()
###Output
_____no_output_____
###Markdown
Export to CSV
###Code
csv_str = df.to_csv(index=False, header=False)
with open("20-newsgroups-dataset.csv", "w") as text_file:
print(csv_str, file=text_file)
###Output
_____no_output_____ |
notebooks/RNN_models/RNN_multiclass_1234.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from sklearn.utils import class_weight
from keras.models import Sequential
from keras.layers import Dense, LSTM, Concatenate, BatchNormalization
from keras.regularizers import l2
from keras import Model
from sklearn.metrics import confusion_matrix
import seaborn as sns
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import warnings
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from keras.preprocessing import sequence
warnings.filterwarnings("ignore")
%matplotlib inline
###Output
Mounted at /content/drive
Requirement already satisfied: tables in /usr/local/lib/python3.7/dist-packages (3.4.4)
Collecting tables
Downloading tables-3.7.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.9 MB)
[K |████████████████████████████████| 5.9 MB 5.2 MB/s
[?25hRequirement already satisfied: numexpr>=2.6.2 in /usr/local/lib/python3.7/dist-packages (from tables) (2.8.1)
Requirement already satisfied: numpy>=1.19.0 in /usr/local/lib/python3.7/dist-packages (from tables) (1.19.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from tables) (21.3)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->tables) (3.0.7)
Installing collected packages: tables
Attempting uninstall: tables
Found existing installation: tables 3.4.4
Uninstalling tables-3.4.4:
Successfully uninstalled tables-3.4.4
Successfully installed tables-3.7.0
Collecting tensorflow-addons==0.8.3
Downloading tensorflow_addons-0.8.3-cp37-cp37m-manylinux2010_x86_64.whl (1.0 MB)
[K |████████████████████████████████| 1.0 MB 7.5 MB/s
[?25hRequirement already satisfied: typeguard in /usr/local/lib/python3.7/dist-packages (from tensorflow-addons==0.8.3) (2.7.1)
Installing collected packages: tensorflow-addons
Successfully installed tensorflow-addons-0.8.3
###Markdown
Loads in the Dataframes
###Code
# loads the dataframes
higgs_df = pd.read_hdf('/content/drive/MyDrive/Colab Notebooks/ttH.hd5')
semi_leptonic_df = pd.read_hdf('/content/drive/MyDrive/Colab Notebooks/ttsemileptonic.hd5')
fully_leptonic_df = pd.read_hdf('/content/drive/MyDrive/Colab Notebooks/fully_leptonic.hd5')
fully_hadronic_df = pd.read_hdf('/content/drive/MyDrive/Colab Notebooks/fully_hadronic.hd5')
# labels signal vs background
higgs_df["signal"] = 0
semi_leptonic_df["signal"] = 1
fully_hadronic_df["signal"] = 2
fully_leptonic_df["signal"] = 3
# combines the dataframes and randomly shuffles the rows
full_df = higgs_df.append(semi_leptonic_df, ignore_index=True)
full_df = full_df.append(fully_leptonic_df, ignore_index=True)
full_df = full_df.append(fully_hadronic_df, ignore_index=True)
full_df = shuffle(full_df)
event_cols = [
"BiasedDPhi",
"DiJet_mass",
"HT",
"InputMet_InputJet_mindPhi",
"InputMet_pt",
"MHT_pt",
"MinChi",
"MinOmegaHat",
"MinOmegaTilde",
"ncleanedBJet",
"ncleanedJet",
]
object_cols = [
"cleanedJet_pt",
"cleanedJet_area",
"cleanedJet_btagDeepB",
"cleanedJet_chHEF",
"cleanedJet_eta",
"cleanedJet_mass",
"cleanedJet_neHEF",
"cleanedJet_phi",
]
# removes useless columns
df = full_df[event_cols + object_cols + ["signal", "xs_weight"]]
###Output
_____no_output_____
###Markdown
Splits data into event / object dataframes and train / test dataframes
###Code
scaler = StandardScaler()
# columns that should not be transformed
untransformed_cols = ["ncleanedBJet", "ncleanedJet", "BiasedDPhi", "signal"]
transformed_cols = list(set(event_cols) - set(untransformed_cols))
# takes the log of each column to remove skewness
for col_name in event_cols:
if col_name in transformed_cols:
df[col_name] = np.log(df[col_name])
# splits data into training and validation
num_classes = 4
X, y = df.drop("signal", axis=1), df["signal"]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
# divides training data into object level and event level features
event_X_train, event_X_test = X_train[event_cols], X_test[event_cols]
object_X_train, object_X_test = X_train[object_cols], X_test[object_cols]
# scales features so they all have the same mean and variance
event_X_train[event_cols] = scaler.fit_transform(event_X_train[event_cols].values)
event_X_test[event_cols] = scaler.transform(event_X_test[event_cols].values)
max_jets = df["ncleanedJet"].max()
# pads input sequences with zeroes so they're all the same length
for col in object_cols:
object_X_train[col] = sequence.pad_sequences(
object_X_train[col].values, padding="post", dtype="float32"
).tolist()
object_X_test[col] = sequence.pad_sequences(
object_X_test[col].values, padding="post", dtype="float32"
).tolist()
# one-hot encodes the label data
y_train = tf.keras.utils.to_categorical(y_train, num_classes=num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=num_classes)
print(
"Removed Columns:",
[col for col in full_df.columns if col not in set(event_cols + object_cols)],
)
X_train.head()
###Output
Removed Columns: ['dataset', 'entry', 'InputMet_phi', 'MHT_phi', 'hashed_filename', 'weight_nominal', 'xs_weight', 'signal']
###Markdown
Loads data
###Code
# object data
object_X_train = np.load('/content/drive/MyDrive/RNN_classifier/object_X_train_multiclass.npy')
object_X_test = np.load('/content/drive/MyDrive/RNN_classifier/object_X_test_multiclass.npy')
plt.scatter(object_X_train[:, :, 7], object_X_train[:, :, 4], s=0.1) # plots (eta, phi) for all jets
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
# hyperparameters
lr = 0.001
activation = "relu"
batch_size = 32
num_classes = 4
lstm_l2 = 0 #1e-6
mlp_l2 = 0 #1e-4
optimizer = keras.optimizers.Adam(
learning_rate=lr,
)
METRICS = [
keras.metrics.CategoricalAccuracy(name="accuracy"),
keras.metrics.Precision(name="precision"),
keras.metrics.Recall(name="recall"),
keras.metrics.AUC(name='AUC'),
]
y_integers = np.argmax(y_train, axis=1)
class_weights = class_weight.compute_class_weight(
class_weight='balanced', classes=np.unique(y_integers), y=y_integers
)
class_weights = {l: c for l, c in zip(np.unique(y_integers), class_weights)}
###Output
_____no_output_____
###Markdown
Callbacks
###Code
monitor = 'val_loss'
mode = 'auto'
# stops training early if score doesn't improve
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor=monitor,
verbose=1,
patience=6,
mode=mode,
restore_best_weights=True,
)
# saves the network at regular intervals so you can pick the best version
checkpoint = tf.keras.callbacks.ModelCheckpoint(
filepath="/content/drive/MyDrive/RNN_classifier/best_model_multiclass_v2.h5",
monitor=monitor,
verbose=1,
save_best_only=True,
save_weights_only=False,
mode=mode,
save_freq="epoch",
)
# reduces the lr whenever training plateaus
reduce_lr = keras.callbacks.ReduceLROnPlateau(
monitor=monitor,
factor=0.1,
patience=3,
mode=mode,
)
###Output
_____no_output_____
###Markdown
Defines and compiles the model
###Code
DNN_model = Sequential([
Dense(40, input_shape=(event_X_train.shape[1],), activation=activation, kernel_regularizer=l2(mlp_l2)),
BatchNormalization()])
RNN_model = Sequential([
LSTM(
200,
input_shape=(object_X_train.shape[1], object_X_train.shape[2]),
activation="tanh",
unroll=False,
recurrent_dropout=0.0,
kernel_regularizer=l2(lstm_l2)),
BatchNormalization()])
merged = Concatenate()([DNN_model.output, RNN_model.output])
merged = BatchNormalization()(merged)
merged = Dense(40, activation=activation, kernel_regularizer=l2(mlp_l2))(merged)
merged = Dense(num_classes, activation="softmax")(merged)
model = Model(inputs=[DNN_model.input, RNN_model.input], outputs=merged)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=METRICS)
# plots the model as a graph
#keras.utils.plot_model(model, "RNN_multiclass_model_diagram.png", show_shapes=True, show_layer_names=False)
###Output
_____no_output_____
###Markdown
Loads pre-trained model
###Code
model = keras.models.load_model('/content/drive/MyDrive/RNN_classifier/best_model_multiclass_v2.h5')
###Output
_____no_output_____
###Markdown
Trains the model
###Code
history = model.fit(
[event_X_train, object_X_train],
y_train,
batch_size=32,
class_weight=class_weights,
epochs=6,
callbacks=[early_stopping, checkpoint],
validation_data=([event_X_test, object_X_test], y_test),
verbose=1,
)
# 0.8604
###Output
Epoch 1/6
9586/9586 [==============================] - ETA: 0s - loss: 1.1608 - accuracy: 0.4910 - precision: 0.5571 - recall: 0.2625 - AUC: 0.7541
Epoch 00001: val_loss improved from 2.76207 to 0.94436, saving model to /content/drive/MyDrive/RNN_classifier/best_model_multiclass_v2.h5
9586/9586 [==============================] - 360s 38ms/step - loss: 1.1608 - accuracy: 0.4910 - precision: 0.5571 - recall: 0.2625 - AUC: 0.7541 - val_loss: 0.9444 - val_accuracy: 0.6109 - val_precision: 0.6895 - val_recall: 0.3880 - val_AUC: 0.8433
Epoch 2/6
9585/9586 [============================>.] - ETA: 0s - loss: 1.1313 - accuracy: 0.4987 - precision: 0.5762 - recall: 0.2800 - AUC: 0.7613
Epoch 00002: val_loss improved from 0.94436 to 0.93611, saving model to /content/drive/MyDrive/RNN_classifier/best_model_multiclass_v2.h5
9586/9586 [==============================] - 361s 38ms/step - loss: 1.1313 - accuracy: 0.4987 - precision: 0.5762 - recall: 0.2800 - AUC: 0.7613 - val_loss: 0.9361 - val_accuracy: 0.6519 - val_precision: 0.7659 - val_recall: 0.4681 - val_AUC: 0.8534
Epoch 3/6
9586/9586 [==============================] - ETA: 0s - loss: 1.1390 - accuracy: 0.4890 - precision: 0.5479 - recall: 0.3076 - AUC: 0.7658
Epoch 00003: val_loss did not improve from 0.93611
9586/9586 [==============================] - 359s 37ms/step - loss: 1.1390 - accuracy: 0.4890 - precision: 0.5479 - recall: 0.3076 - AUC: 0.7658 - val_loss: 1.2560 - val_accuracy: 0.4061 - val_precision: 0.4824 - val_recall: 0.2369 - val_AUC: 0.7035
Epoch 4/6
9586/9586 [==============================] - ETA: 0s - loss: 1.2300 - accuracy: 0.4684 - precision: 0.5062 - recall: 0.2683 - AUC: 0.7467
Epoch 00004: val_loss did not improve from 0.93611
9586/9586 [==============================] - 354s 37ms/step - loss: 1.2300 - accuracy: 0.4684 - precision: 0.5062 - recall: 0.2683 - AUC: 0.7467 - val_loss: 1.1063 - val_accuracy: 0.5140 - val_precision: 0.6012 - val_recall: 0.2771 - val_AUC: 0.7743
Epoch 5/6
1384/9586 [===>..........................] - ETA: 4:37 - loss: 1.1766 - accuracy: 0.4284 - precision: 0.4627 - recall: 0.2231 - AUC: 0.7111
###Markdown
Evaluates the model
###Code
y_pred_test = model.predict([event_X_test, object_X_test])
def plot_metrics(history):
metrics = ['loss', 'accuracy', 'precision', 'recall', 'AUC']
fig = plt.figure(figsize=(14, 14))
for n, metric in enumerate(metrics):
name = metric.replace("_"," ")
plt.subplot(3,2,n+1)
plt.plot(history.epoch, history.history[metric], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
plt.legend()
plot_metrics(history)
def compare_models(history, history2):
metrics = ['loss', 'accuracy', 'precision', 'recall', 'AUC']
fig = plt.figure(figsize=(14, 14))
for n, metric in enumerate(metrics):
name = metric.replace("_"," ")
plt.subplot(3,2,n+1)
plt.plot(history.epoch, history.history[metric], label='Model 1 Train')
plt.plot(history.epoch, history.history['val_'+metric],
linestyle="--", label='Model 1 Val')
plt.plot(history2.epoch, history2.history[metric], label='Model 2 Train')
plt.plot(history2.epoch, history2.history['val_'+metric],
linestyle="--", label='Model 2 Val')
plt.xlabel('Epoch')
plt.ylabel(name)
plt.legend()
# NOTE: `history2` (the History object of a second training run) is not defined in this
# notebook, so this call only works after training a second model.
compare_models(history, history2)
def plot_cm(labels, predictions, p=0.5):
signal_types = ['ttH', 'semi leptonic', 'fully hadronic', 'fully leptonic']
cm = confusion_matrix(labels, predictions, normalize='true')
plt.figure(figsize=(7, 7))
sns.heatmap(cm, annot=True, xticklabels=signal_types, yticklabels=signal_types, vmin=0, vmax=1)
plt.title(f'Confusion matrix')
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
plot_cm(y_test.argmax(axis=1), y_pred_test.argmax(axis=1))
###Output
_____no_output_____
###Markdown
Significance as a Function of Threshold
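For reference (this summary is read off the code below, not from separate documentation): the cell scans thresholds $t$ on the predicted ttH probability and computes an approximate significance
$$\mathrm{significance}(t) = \frac{S(t)}{\sqrt{B(t) + \epsilon}},$$
where $S(t)$ and $B(t)$ are the sums of `xs_weight` over ttH and non-ttH test events with score $\ge t$, scaled by `lum / test_frac`, and $\epsilon$ is a small constant guarding against division by zero. The printed value is the threshold that maximises this quantity.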
###Code
preds = model.predict([event_X_test, object_X_test])
test_weight = X_test["xs_weight"].values
test_frac = len(y_test) / len(y_train)
thresholds = np.linspace(0, 1, 50)
significance = np.zeros(len(thresholds), dtype=float)
lum = 140e3
epsilon = 1e-5
sg = np.zeros(len(thresholds))
bg = np.zeros(len(thresholds))
labels = [y.argmax() for y in y_test]
for i, threshold in enumerate(thresholds):
sg[i] = sum([test_weight[j] for j, (pred, label) in enumerate(zip(preds, labels)) if (pred[0] >= threshold and label == 0)]) * lum / test_frac
bg[i] = sum([test_weight[j] for j, (pred, label) in enumerate(zip(preds, labels)) if (pred[0] >= threshold and label != 0)]) * lum / test_frac
significance = sg / np.sqrt(bg + epsilon)
index = significance.argmax()
print(thresholds[index])
plt.plot(thresholds, significance)
plt.show()
###Output
0.36734693877551017
###Markdown
Discriminator Plots
###Code
test_weight = X_test["xs_weight"].values
labels = [y.argmax() for y in y_test]
signals = [pred[0] for label, pred in zip(labels, preds) if label == 0]
backgrounds = [pred[0] for label, pred in zip(labels, preds) if label != 0]
n_bins = 75
alpha = 0.6
fig, (ax1, ax2) = plt.subplots(2, figsize=(8, 8))
fig.suptitle('Discriminator Plots')
ax1.hist(signals, density=True, bins=n_bins, alpha=alpha)
ax1.hist(backgrounds, density=True, bins=n_bins, alpha=alpha)
ax2.hist(signals, density=False, bins=n_bins, alpha=alpha)
ax2.hist(backgrounds, density=False, bins=n_bins, alpha=alpha)
plt.show()
###Output
_____no_output_____ |
Quandl for DataVigo.ipynb | ###Markdown
How to use Quandl with Python for Data Analysis
This notebook demonstrates how to extract data from Quandl for data analysis. The example is based on the United Kingdom Office of National Statistics: https://www.quandl.com/data/UKONS-United-Kingdom-Office-of-National-Statistics
You must first register on the Quandl website: https://www.quandl.com/ Then find your own unique API key under the account settings.
You must also install the quandl package for Python: pip install quandl (check this out for further guidelines: https://docs.quandl.com/docs/python-installation)
###Code
import quandl
import pandas as pd
quandl.ApiConfig.api_key = 'type your unique API key here'
###Output
_____no_output_____
###Markdown
This is an example of extracting a dataset:
###Code
quandl.get('UKONS/L5PA_A')
###Output
_____no_output_____
###Markdown
We can also specify the date range.
###Code
quandl.get('UKONS/L5PA_A', start_date = '2010-01-01', end_date ='2020-06-30')
###Output
_____no_output_____
###Markdown
So, as you can see, we simply need to find the code of the dataset we are trying to get. Most datasets come with a metadata CSV file, which includes all the codes associated with the dataset. For UKONS, you can download it from here and save it to your local computer: https://www.quandl.com/data/UKONS-United-Kingdom-Office-of-National-Statistics/usage/export
We can then read the metadata file using Pandas. There are 73502 datasets for UKONS.
###Code
codes = pd.read_csv('UKONS_metadata.csv', sep =',')
codes
###Output
_____no_output_____
###Markdown
Let's select only the codes that are about the Consumer Price Index. They are indicated by 'CPI wts'.
###Code
CPI = codes[codes['name'].str.contains('CPI wts')]
CPI
###Output
_____no_output_____
###Markdown
We need to add the string 'UKONS/' to each code:
###Code
CPI.code = 'UKONS/' + CPI.code
CPI
###Output
_____no_output_____
###Markdown
You can rename the column if you want, though this is optional.
###Code
CPI = CPI.rename(columns={'name': 'category'})
###Output
_____no_output_____
###Markdown
We are interested in two columns only.
###Code
CPI = CPI[['code','category']]
CPI
CPI.shape
###Output
_____no_output_____
###Markdown
Now we import two more libraries: re is for splitting the text, because we want to remove the string 'CPI wts', and pickle is for saving the data.
###Code
import re
import pickle
###Output
_____no_output_____
###Markdown
We can add the category column when extracting data from quandl. This is one example:
###Code
category ='CPI wts: Education, health and social protection SPECIAL AGGREGATES (Annual)'
df = quandl.get('UKONS/A9G7_A')
df['category'] = category
df
###Output
_____no_output_____
###Markdown
This function gets the data from Quandl, adds the category column, and splits the category text on ':' so that only the part after 'CPI wts' is kept. The extracted data is then dumped into a pickle file.
###Code
def get_data(code,category):
df =quandl.get(code)
category = re.split(':',category)[1]
df['category'] = category
return df
with open('CPI_UKNONS.p', 'wb') as f:
pickle.dump(df, f)
###Output
_____no_output_____
###Markdown
Here is one example. You can try it with other codes too. You will need to give the code and category to the function.
###Code
get_data('UKONS/A9G7_A','CPI wts: Education, health and social protection SPECIAL AGGREGATES (Annual)')
###Output
_____no_output_____
###Markdown
Now we can concatenate all datasets using pd.concat method:
###Code
df_all = pd.concat(get_data(code, category) for code, category
in CPI.itertuples(index=False))
df_all
###Output
_____no_output_____
###Markdown
Let's save the data into a pickle file:
###Code
with open('CPI_UKNONS.p', 'wb') as f:
pickle.dump(df_all, f)
with open('CPI_UKNONS.p', 'rb') as f:
CPI_Data = pickle.load(f)
CPI_Data
###Output
_____no_output_____
###Markdown
The CPI dataset is now ready for analysis. Here is one simple example. Feel free to explore it further.
###Code
df_all.groupby('category').mean()
###Output
_____no_output_____ |
ps5/pset5_lda.ipynb | ###Markdown
Linear Discriminant Analysis (LDA) [50 pts]
In this part of the exercise, you will re-visit the problem of predicting whether a student gets admitted into a university. However, in this part, you will build a linear discriminant analysis (LDA) classifier for this problem.
LDA is a generative model for classification that assumes the class covariances are equal. Given a training dataset of positive and negative examples (x, y) with y $\in$ {0, 1}, LDA models the data x as generated from class-conditional Gaussians:
$P(x, y) = P(x|y)P(y)$ where $P(y = 1) = \pi$ and $P(x|y) = N(x;\mu^y,\Sigma)$
where the means $\mu^y$ are class-dependent but the covariance matrix $\Sigma$ is class-independent (the same for all classes).
A new feature vector $x$ is classified as positive if $P(y = 1|x) > P(y = 0|x)$, which is equivalent to $a(x)\gt0$, where the linear classifier $a(x) = w^Tx+w_0$ has weights given by $w = \Sigma^{-1}(\mu^1-\mu^0)$.
In practice, and in this assignment, we use $a(x)\gt$ some threshold, or equivalently, $w^Tx>T$ for some constant $T$.
As we saw in lecture, LDA and logistic regression can be expressed in the same form
$P(y=1|x) = \frac{1}{1+e^{-\theta^Tx}}.$
However, they generally produce different solutions for the parameter $\theta$.
Implementation
In this assignment, you can assume the prior probabilities for the two classes are the same (although the numbers of positive and negative samples in the training data are not the same), and that the threshold $T$ is zero. As a bonus, you are encouraged to explore how different prior probabilities shift the decision boundary.
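Side note (not part of the graded problem): with equal priors, taking the ratio of the two class-conditional Gaussians gives the logistic form above with
$$\theta = \Sigma^{-1}(\mu^1-\mu^0), \qquad \theta_0 = -\tfrac{1}{2}\left((\mu^1)^T\Sigma^{-1}\mu^1-(\mu^0)^T\Sigma^{-1}\mu^0\right),$$
whereas logistic regression fits $\theta$ directly by maximizing the conditional likelihood, which is why the two methods generally produce different parameters.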
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
datafile = 'data/ex2data1.txt'
#!head $datafile
cols = np.loadtxt(datafile,delimiter=',',usecols=(0,1,2),unpack=True) #Read in comma separated data
##Form the usual "X" matrix and "y" vector
X = np.transpose(np.array(cols[:-1]))
y = np.transpose(np.array(cols[-1:]))
m = y.size # number of training examples
##Insert the usual column of 1's into the "X" matrix
X = np.insert(X,0,1,axis=1)
#Divide the sample into two: ones with positive classification, one with null classification
pos = np.array([X[i] for i in range(X.shape[0]) if y[i] == 1])
neg = np.array([X[i] for i in range(X.shape[0]) if y[i] == 0])
def plotData():
plt.figure(figsize=(10,6))
plt.plot(pos[:,1],pos[:,2],'k+',label='Admitted')
plt.plot(neg[:,1],neg[:,2],'yo',label='Not admitted')
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
plt.grid(True)
plotData()
###Output
_____no_output_____
###Markdown
Implement the LDA classifier by completing the code here. As an implementation detail, you should first center the positive and negative data separately, so that each has a mean equal to 0, before computing the covariance, as this tends to give a more accurate estimate. You should center the whole training data set before applying the classifier. Namely, subtract the middle value of the two classes’ means ($\frac{1}{2}$(pos mean+neg mean)), which is on the separating plane when their prior probabilities are the same and becomes the ‘center’ of the data. [5 pts]
###Code
# IMPLEMENT THIS
pos_mean = np.mean(pos,axis=0)
neg_mean = np.mean(neg,axis=0)
print(pos_mean,neg_mean)
pos_data = pos - 0.5*(pos_mean+neg_mean)
neg_data = neg - 0.5*(pos_mean+neg_mean)
plt.figure(figsize=(10,6))
plt.plot(pos_data[:,1],pos_data[:,2],'k+',label='Admitted')
plt.plot(neg_data[:,1],neg_data[:,2],'yo',label='Not admitted')
plt.xlabel('Exam 1 score')
plt.ylabel('Exam 2 score')
plt.legend()
plt.grid(True)
###Output
[ 1. 74.7189227 73.95640208] [ 1. 52.0323011 54.6203921]
###Markdown
Implement the LDA algorithm here (Compute the covariance on all data): [10 pts each for getting cov_all, w and y_lda]
###Code
# IMPLEMENT THIS
X_data = np.concatenate((neg_data,pos_data),axis=0)
label = np.sort(y,axis=None)
cov_all = np.cov(pos_data.T[1:]) + np.cov(neg_data.T[1:]) # SHAPE: (2,2)
print('cov_all shape:',cov_all.shape)
w = np.linalg.inv(cov_all)@((pos_mean-neg_mean)[1:]) # w=cov_all^(-1)(pos_mean-neg_mean)
print('w:',w)
y_lda = np.dot(X_data[:,1:],w) # SHAPE: (100,)
print('y_lda shape:',y_lda.shape)
###Output
cov_all shape: (2, 2)
w: [0.0785182 0.07571361]
y_lda shape: (100,)
###Markdown
Complete the code to compute the training set accuracy. You should get a training accuracy around 89%. [5 pts]
###Code
# IMPLEMENT THIS
y_lda[y_lda<0]=0
y_lda[y_lda>0]=1
print(y_lda)
count = 0
for i in range(m):
if label[i] == y_lda[i]:
count+=1
accuracy = count/m
print(accuracy)
###Output
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1.
1. 0. 1. 0. 1. 1. 1. 0. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 0. 1.]
0.89
###Markdown
Written Problem [10 pts]
Show that the log-odds decision function a(x) for LDA
$a = \ln \frac{p(x|C_l)p(C_l)}{p(x|C_k)p(C_k)}$
is linear in x, that is, we can express $a(x)=\theta^Tx$ for some $\theta$. Show all your steps. Hint: This is a binary problem.
ANSWER:
We notice that this is a binary problem, so there are two classes: class l and class k. By definition, the class-conditional density is a multivariate Gaussian:
$$ p(x|C_k) = \frac{1}{(2\pi)^{p/2}|\Sigma_k|^{1/2}}e^{-\frac{1}{2}(x-\mu_k)^{T}\Sigma_k^{-1}(x-\mu_k)}$$
And we assume $p(C_l)=p(C_k)=\frac{1}{2}$, $\Sigma=\Sigma_l=\Sigma_k$
\begin{align*}
a &=\log \frac{p(x|C_l)p(C_l)}{p(x|C_k)p(C_k)} \\
&= \log p(x|C_l)-\log p(x|C_k)+\log \frac{p(C_l)}{p(C_k)} \\
&= \log p(x|C_l)-\log p(x|C_k) \\
&= -\log (2\pi)^{p/2}|\Sigma_l|^{1/2}-\frac{1}{2}(x-\mu_l)^{T}\Sigma_l^{-1}(x-\mu_l)+ \log (2\pi)^{p/2}|\Sigma_k|^{1/2}+\frac{1}{2}(x-\mu_k)^{T}\Sigma_k^{-1}(x-\mu_k)\\
&= -\log (2\pi)^{p/2}|\Sigma|^{1/2}-\frac{1}{2}(x-\mu_l)^{T}\Sigma^{-1}(x-\mu_l)+ \log (2\pi)^{p/2}|\Sigma|^{1/2}+\frac{1}{2}(x-\mu_k)^{T}\Sigma^{-1}(x-\mu_k)\\
&= -\frac{1}{2}(x-\mu_l)^{T}\Sigma^{-1}(x-\mu_l)+\frac{1}{2}(x-\mu_k)^{T}\Sigma^{-1}(x-\mu_k)\\
&= -\frac{1}{2}(x^T\Sigma^{-1}x-2x^T\Sigma^{-1}\mu_l+\mu_l^T\Sigma^{-1}\mu_l)+\frac{1}{2}(x^T\Sigma^{-1}x-2x^T\Sigma^{-1}\mu_k+\mu_k^T\Sigma^{-1}\mu_k)\\
&= -\frac{1}{2}(x^T\Sigma^{-1}x-2x^T\Sigma^{-1}\mu_l+\mu_l^T\Sigma^{-1}\mu_l-x^T\Sigma^{-1}x+2x^T\Sigma^{-1}\mu_k-\mu_k^T\Sigma^{-1}\mu_k)\\
&= -\frac{1}{2}(-2x^T\Sigma^{-1}\mu_l+2x^T\Sigma^{-1}\mu_k+\mu_l^T\Sigma^{-1}\mu_l-\mu_k^T\Sigma^{-1}\mu_k) \\
&= -\frac{1}{2}(-2x^T\Sigma^{-1}(\mu_l-\mu_k)+\mu_l^T\Sigma^{-1}\mu_l-\mu_k^T\Sigma^{-1}\mu_k) \\
&= x^T\Sigma^{-1}(\mu_l-\mu_k) + const
\end{align*}
which is linear in x. Thus we can express $a(x)=\theta^T x + \theta_0$, with $\theta = \Sigma^{-1}(\mu_l-\mu_k)$ and $\theta_0$ the constant term.
CNN on MNIST using TensorFlow™ [50 pts]
**Note 1**: The following has been verified to work with the current latest version of TensorFlow (1.11)
\* Adapted from the official TensorFlow™ tour guide.
TensorFlow is a powerful library for doing large-scale numerical computation. One of the tasks at which it excels is implementing and training deep neural networks. In this assignment you will learn the basic building blocks of a TensorFlow model while constructing a deep convolutional MNIST classifier.
What you are expected to implement in this tutorial:
* Create a softmax regression function that is a model for recognizing MNIST digits, based on looking at every pixel in the image
* Use TensorFlow to train the model to recognize digits by having it "look" at thousands of examples
* Check the model's accuracy with MNIST test data
* Build, train, and test a multilayer convolutional neural network to improve the results
Here is a diagram, created with TensorBoard, of the model we will build:
Implement Utilities
Weight Initialization
To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients. Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid "dead neurons". Instead of doing this repeatedly while we build the model, let's create two handy functions to do it for us.
###Code
import tempfile
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
tf.logging.set_verbosity(tf.logging.ERROR)
def weight_variable(shape):
"""weight_variable generates a weight variable of a given shape."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
"""bias_variable generates a bias variable of a given shape."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Convolution and Pooling [5 pts]
Our convolutions use a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks.
NOTE: FOR ALL THE FOLLOWING CODE, DO NOT IMPLEMENT YOUR OWN VERSION. USE THE BUILT-IN METHODS FROM TENSORFLOW.
Take a look at the [TensorFlow API Docs](https://www.tensorflow.org/api_docs/python/).
###Code
# IMPLEMENT THIS
def conv2d(x, W):
conv2d = tf.layers.conv2d(inputs=x,
filters=W,
kernel_size=[5,5],
padding="same",
)
return conv2d
def max_pool_2x2(x):
max_pool = tf.layers.max_pooling2d(inputs=x,
pool_size=[2,2],
strides=2)
return max_pool
###Output
_____no_output_____
###Markdown
Build the CNN
First Convolutional Layer [10 pts]
We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolution will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel.
To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels.
We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.
Second Convolutional Layer [5 pts]
In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.
Fully Connected Layer [10 pts]
Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
Softmax Layer [5 pts]
Finally, we add a layer of softmax regression.
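As a quick summary of the shapes described above (nothing new is required here): 28x28x1 → conv 5x5, 32 filters → 28x28x32 → max pool 2x2 → 14x14x32 → conv 5x5, 64 filters → 14x14x64 → max pool 2x2 → 7x7x64 → flatten (3136) → fully connected (1024) → softmax (10).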
###Code
def deepnn(x):
"""
deepnn builds the graph for a deep net for classifying digits.
Args:
x: an input tensor with the dimensions (N_examples, 784), where 784 is the
number of pixels in a standard MNIST image.
Returns:
A tuple (y, keep_prob). y is a tensor of shape (N_examples, 10), with values
equal to the logits of classifying the digit into one of 10 classes (the
digits 0-9). keep_prob is a scalar placeholder for the probability of
dropout.
"""
# Reshape to use within a convolutional neural net.
# Last dimension is for "features" - there is only one here, since images are
# grayscale -- it would be 3 for an RGB image, 4 for RGBA, etc.
with tf.name_scope('reshape'):
x_image = tf.reshape(x,[-1,28,28,1])
# First convolutional layer - maps one grayscale image to 32 feature maps.
with tf.name_scope('conv1'):
h_conv1 = tf.nn.relu(conv2d(x_image,32))
# Pooling layer - downsamples by 2X.
with tf.name_scope('pool1'):
h_pool1 = max_pool_2x2(h_conv1)
# Second convolutional layer -- maps 32 feature maps to 64.
with tf.name_scope('conv2'):
h_conv2 = tf.nn.relu(conv2d(h_pool1,64))
# Second pooling layer.
with tf.name_scope('pool2'):
h_pool2 = max_pool_2x2(h_conv2)
# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image
# is down to 7x7x64 feature maps -- maps this to 1024 features.
with tf.name_scope('fc1'):
h_pool2_flat = tf.contrib.layers.flatten(h_pool2,scope='pool2flat')
h_fc1 = tf.layers.dense(inputs=h_pool2_flat,units=1024,activation=tf.nn.relu)
# Map the 1024 features to 10 classes, one for each digit
with tf.name_scope('fc2'):
y_conv = tf.layers.dense(inputs=h_fc1,units=10)
return y_conv
###Output
_____no_output_____
###Markdown
Complete the Graph [10 pts]
We start building the computation graph by creating nodes for the input images and target output classes.
###Code
# Import data
mnist = input_data.read_data_sets('/tmp/tensorflow/mnist/input_data', one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.int32, [None, 10])
# Build the graph for the deep net
y_conv = deepnn(x)
###Output
Extracting /tmp/tensorflow/mnist/input_data\train-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data\train-labels-idx1-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data\t10k-images-idx3-ubyte.gz
Extracting /tmp/tensorflow/mnist/input_data\t10k-labels-idx1-ubyte.gz
###Markdown
We can specify a loss function just as easily. Loss indicates how bad the model's prediction was on a single example; we try to minimize that while training across all the examples. Here, our loss function is the cross-entropy between the target and the softmax activation function applied to the model's prediction. As in the beginners tutorial, we use the stable formulation:
###Code
with tf.name_scope('loss'):
y_dict = dict(labels=y_,logits=y_conv)
losses = tf.nn.softmax_cross_entropy_with_logits(**y_dict)
cross_entropy = tf.reduce_mean(losses)
with tf.name_scope('adam_optimizer'):
trainer = tf.train.AdamOptimizer(learning_rate=0.001)
train_step = trainer.minimize(cross_entropy)
###Output
_____no_output_____
###Markdown
First we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. For example, tf.argmax(y,1) is the label our model thinks is most likely for each input, while tf.argmax(y\_,1) is the true label. We can use tf.equal to check if our prediction matches the truth.That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] would become [1,0,1,1] which would become 0.75.
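As a tiny standalone illustration of this bookkeeping (a NumPy sketch of the same idea, not the TensorFlow graph built below):

```python
import numpy as np

# argmax picks the predicted / true class per row, equal compares them,
# and casting the booleans to floats before averaging gives the accuracy.
probas = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.4, 0.6]])
labels = np.array([[0, 1], [1, 0], [1, 0], [0, 1]])
correct = np.equal(np.argmax(probas, 1), np.argmax(labels, 1))  # [True, True, False, True]
print(np.mean(correct.astype(np.float32)))                      # 0.75
```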
###Code
with tf.name_scope('accuracy'):
y_pred = tf.argmax(tf.nn.softmax(y_conv),axis=1)
y_true = tf.argmax(y_,axis=1)
correct_prediction = tf.equal(tf.cast(y_pred,tf.int64),y_true)
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))
# For saving the graph, DO NOT CHANGE.
graph_location = r'./graph'
print('Saving graph to: %s' % graph_location)
train_writer = tf.summary.FileWriter(graph_location)
train_writer.add_graph(tf.get_default_graph())
###Output
Saving graph to: ./graph
###Markdown
Train and Evaluate the Model [5 pts]
We will use the more sophisticated Adam optimizer instead of a gradient descent optimizer.
We will add logging to every 100th iteration in the training process. Feel free to run this code. Be aware that the original tutorial does 20,000 training iterations and may take a while (possibly up to half an hour), depending on your processor; the cell below runs 2,000 iterations.
The final test set accuracy after running the full 20,000 iterations should be approximately 99.2%.
We have learned how to quickly and easily build, train, and evaluate a fairly sophisticated deep learning model using TensorFlow.
###Code
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(2000):
batch = mnist.train.next_batch(50)
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})
print('step %d, training accuracy %g' % (i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1]})
print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
###Output
step 0, training accuracy 0.08
step 100, training accuracy 0.98
step 200, training accuracy 0.92
step 300, training accuracy 1
step 400, training accuracy 0.98
step 500, training accuracy 0.98
step 600, training accuracy 1
step 700, training accuracy 0.98
step 800, training accuracy 1
step 900, training accuracy 1
step 1000, training accuracy 1
step 1100, training accuracy 0.98
step 1200, training accuracy 1
step 1300, training accuracy 0.96
step 1400, training accuracy 0.98
step 1500, training accuracy 1
step 1600, training accuracy 1
step 1700, training accuracy 1
step 1800, training accuracy 1
step 1900, training accuracy 0.98
test accuracy 0.9905
|
code/model_zoo/pytorch_ipynb/resnet-ex-1.ipynb | ###Markdown
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
###Output
Sebastian Raschka
CPython 3.6.8
IPython 7.2.0
torch 1.0.1.post2
###Markdown
- Runs on CPU or GPU (if available)
Model Zoo -- Convolutional ResNet and Residual Blocks
Please note that this example does not implement a really deep ResNet as described in literature but rather illustrates how the residual blocks described in He et al. [1] can be implemented in PyTorch.
- [1] He, Kaiming, et al. "Deep residual learning for image recognition." *Proceedings of the IEEE conference on computer vision and pattern recognition*. 2016.
Imports
###Code
import time
import numpy as np
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Settings and Dataset
###Code
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 10
batch_size = 128
# Architecture
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
###Output
Image batch dimensions: torch.Size([128, 1, 28, 28])
Image label dimensions: torch.Size([128])
###Markdown
ResNet with identity blocks
The following code implements the residual blocks with skip connections such that the input passed via the shortcut matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is illustrated below:
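In equation form (matching the code below), each identity block computes
$$\text{out} = \mathrm{ReLU}\big(x + \mathcal{F}(x)\big), \qquad \mathcal{F}(x) = \mathrm{BN}\big(\mathrm{conv}_2(\mathrm{ReLU}(\mathrm{BN}(\mathrm{conv}_1(x))))\big),$$
so if the weights of $\mathcal{F}$ shrink towards zero the block falls back to the identity mapping.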
###Code
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
# 28x28x1 => 28x28x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_1_bn = torch.nn.BatchNorm2d(4)
# 28x28x4 => 28x28x1
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1)
self.conv_2_bn = torch.nn.BatchNorm2d(1)
#########################
### 2nd residual block
#########################
# 28x28x1 => 28x28x4
self.conv_3 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_3_bn = torch.nn.BatchNorm2d(4)
# 28x28x4 => 28x28x1
self.conv_4 = torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1)
self.conv_4_bn = torch.nn.BatchNorm2d(1)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(28*28*1, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
out = self.conv_1(x)
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out)
out = self.conv_2_bn(out)
out += shortcut
out = F.relu(out)
#########################
### 2nd residual block
#########################
shortcut = out
out = self.conv_3(out)
out = self.conv_3_bn(out)
out = F.relu(out)
out = self.conv_4(out)
out = self.conv_4_bn(out)
out += shortcut
out = F.relu(out)
#########################
### Fully connected
#########################
logits = self.linear_1(out.view(-1, 28*28*1))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
###Output
Epoch: 001/010 | Batch 000/469 | Cost: 2.5157
Epoch: 001/010 | Batch 050/469 | Cost: 0.5106
Epoch: 001/010 | Batch 100/469 | Cost: 0.2353
Epoch: 001/010 | Batch 150/469 | Cost: 0.2672
Epoch: 001/010 | Batch 200/469 | Cost: 0.3670
Epoch: 001/010 | Batch 250/469 | Cost: 0.2920
Epoch: 001/010 | Batch 300/469 | Cost: 0.3122
Epoch: 001/010 | Batch 350/469 | Cost: 0.2697
Epoch: 001/010 | Batch 400/469 | Cost: 0.4273
Epoch: 001/010 | Batch 450/469 | Cost: 0.3696
Epoch: 001/010 training accuracy: 92.21%
Time elapsed: 0.25 min
Epoch: 002/010 | Batch 000/469 | Cost: 0.2612
Epoch: 002/010 | Batch 050/469 | Cost: 0.4460
Epoch: 002/010 | Batch 100/469 | Cost: 0.2881
Epoch: 002/010 | Batch 150/469 | Cost: 0.4010
Epoch: 002/010 | Batch 200/469 | Cost: 0.2376
Epoch: 002/010 | Batch 250/469 | Cost: 0.2598
Epoch: 002/010 | Batch 300/469 | Cost: 0.1649
Epoch: 002/010 | Batch 350/469 | Cost: 0.2331
Epoch: 002/010 | Batch 400/469 | Cost: 0.2897
Epoch: 002/010 | Batch 450/469 | Cost: 0.4034
Epoch: 002/010 training accuracy: 92.73%
Time elapsed: 0.51 min
Epoch: 003/010 | Batch 000/469 | Cost: 0.2406
Epoch: 003/010 | Batch 050/469 | Cost: 0.3472
Epoch: 003/010 | Batch 100/469 | Cost: 0.2030
Epoch: 003/010 | Batch 150/469 | Cost: 0.2327
Epoch: 003/010 | Batch 200/469 | Cost: 0.2796
Epoch: 003/010 | Batch 250/469 | Cost: 0.2485
Epoch: 003/010 | Batch 300/469 | Cost: 0.1806
Epoch: 003/010 | Batch 350/469 | Cost: 0.2239
Epoch: 003/010 | Batch 400/469 | Cost: 0.4661
Epoch: 003/010 | Batch 450/469 | Cost: 0.2216
Epoch: 003/010 training accuracy: 93.16%
Time elapsed: 0.76 min
Epoch: 004/010 | Batch 000/469 | Cost: 0.4196
Epoch: 004/010 | Batch 050/469 | Cost: 0.2219
Epoch: 004/010 | Batch 100/469 | Cost: 0.1649
Epoch: 004/010 | Batch 150/469 | Cost: 0.2900
Epoch: 004/010 | Batch 200/469 | Cost: 0.2729
Epoch: 004/010 | Batch 250/469 | Cost: 0.2085
Epoch: 004/010 | Batch 300/469 | Cost: 0.3587
Epoch: 004/010 | Batch 350/469 | Cost: 0.2085
Epoch: 004/010 | Batch 400/469 | Cost: 0.2656
Epoch: 004/010 | Batch 450/469 | Cost: 0.1630
Epoch: 004/010 training accuracy: 93.64%
Time elapsed: 1.01 min
Epoch: 005/010 | Batch 000/469 | Cost: 0.2607
Epoch: 005/010 | Batch 050/469 | Cost: 0.2885
Epoch: 005/010 | Batch 100/469 | Cost: 0.4115
Epoch: 005/010 | Batch 150/469 | Cost: 0.1415
Epoch: 005/010 | Batch 200/469 | Cost: 0.1815
Epoch: 005/010 | Batch 250/469 | Cost: 0.2137
Epoch: 005/010 | Batch 300/469 | Cost: 0.0949
Epoch: 005/010 | Batch 350/469 | Cost: 0.2109
Epoch: 005/010 | Batch 400/469 | Cost: 0.2047
Epoch: 005/010 | Batch 450/469 | Cost: 0.3176
Epoch: 005/010 training accuracy: 93.86%
Time elapsed: 1.26 min
Epoch: 006/010 | Batch 000/469 | Cost: 0.2820
Epoch: 006/010 | Batch 050/469 | Cost: 0.1209
Epoch: 006/010 | Batch 100/469 | Cost: 0.2926
Epoch: 006/010 | Batch 150/469 | Cost: 0.2950
Epoch: 006/010 | Batch 200/469 | Cost: 0.1879
Epoch: 006/010 | Batch 250/469 | Cost: 0.2352
Epoch: 006/010 | Batch 300/469 | Cost: 0.2423
Epoch: 006/010 | Batch 350/469 | Cost: 0.1898
Epoch: 006/010 | Batch 400/469 | Cost: 0.3582
Epoch: 006/010 | Batch 450/469 | Cost: 0.2269
Epoch: 006/010 training accuracy: 93.86%
Time elapsed: 1.51 min
Epoch: 007/010 | Batch 000/469 | Cost: 0.2327
Epoch: 007/010 | Batch 050/469 | Cost: 0.1684
Epoch: 007/010 | Batch 100/469 | Cost: 0.1441
Epoch: 007/010 | Batch 150/469 | Cost: 0.1740
Epoch: 007/010 | Batch 200/469 | Cost: 0.1402
Epoch: 007/010 | Batch 250/469 | Cost: 0.2488
Epoch: 007/010 | Batch 300/469 | Cost: 0.2436
Epoch: 007/010 | Batch 350/469 | Cost: 0.2196
Epoch: 007/010 | Batch 400/469 | Cost: 0.1210
Epoch: 007/010 | Batch 450/469 | Cost: 0.1820
Epoch: 007/010 training accuracy: 94.19%
Time elapsed: 1.76 min
Epoch: 008/010 | Batch 000/469 | Cost: 0.1494
Epoch: 008/010 | Batch 050/469 | Cost: 0.1392
Epoch: 008/010 | Batch 100/469 | Cost: 0.2526
Epoch: 008/010 | Batch 150/469 | Cost: 0.1961
Epoch: 008/010 | Batch 200/469 | Cost: 0.2890
Epoch: 008/010 | Batch 250/469 | Cost: 0.2019
Epoch: 008/010 | Batch 300/469 | Cost: 0.3335
Epoch: 008/010 | Batch 350/469 | Cost: 0.2250
Epoch: 008/010 | Batch 400/469 | Cost: 0.1983
Epoch: 008/010 | Batch 450/469 | Cost: 0.2136
Epoch: 008/010 training accuracy: 94.40%
Time elapsed: 2.01 min
Epoch: 009/010 | Batch 000/469 | Cost: 0.3670
Epoch: 009/010 | Batch 050/469 | Cost: 0.1793
Epoch: 009/010 | Batch 100/469 | Cost: 0.3003
Epoch: 009/010 | Batch 150/469 | Cost: 0.1713
Epoch: 009/010 | Batch 200/469 | Cost: 0.2957
Epoch: 009/010 | Batch 250/469 | Cost: 0.2260
Epoch: 009/010 | Batch 300/469 | Cost: 0.1860
Epoch: 009/010 | Batch 350/469 | Cost: 0.2632
Epoch: 009/010 | Batch 400/469 | Cost: 0.2249
Epoch: 009/010 | Batch 450/469 | Cost: 0.2512
Epoch: 009/010 training accuracy: 94.61%
Time elapsed: 2.26 min
Epoch: 010/010 | Batch 000/469 | Cost: 0.1599
Epoch: 010/010 | Batch 050/469 | Cost: 0.2204
Epoch: 010/010 | Batch 100/469 | Cost: 0.1528
Epoch: 010/010 | Batch 150/469 | Cost: 0.1847
Epoch: 010/010 | Batch 200/469 | Cost: 0.1767
Epoch: 010/010 | Batch 250/469 | Cost: 0.1473
Epoch: 010/010 | Batch 300/469 | Cost: 0.1407
Epoch: 010/010 | Batch 350/469 | Cost: 0.1406
Epoch: 010/010 | Batch 400/469 | Cost: 0.3001
Epoch: 010/010 | Batch 450/469 | Cost: 0.2306
Epoch: 010/010 training accuracy: 93.22%
Time elapsed: 2.51 min
Total Training Time: 2.51 min
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 92.20%
###Markdown
ResNet with convolutional blocks for resizing
The following code implements the residual blocks with skip connections such that the input passed via the shortcut is resized to the dimensions of the main path's output. Such a residual block is illustrated below:
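In equation form (matching the code below), each block here computes
$$\text{out} = \mathrm{ReLU}\big(\mathrm{BN}(W_s x) + \mathcal{F}(x)\big),$$
where $W_s$ is a 1x1 convolution with stride 2 that resizes the shortcut to the main path's output shape, and $\mathcal{F}$ is the conv-BN-ReLU-conv-BN main path.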
###Code
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
# 28x28x1 => 14x14x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_1_bn = torch.nn.BatchNorm2d(4)
# 14x14x4 => 14x14x8
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=8,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_2_bn = torch.nn.BatchNorm2d(8)
# 28x28x1 => 14x14x8
self.conv_shortcut_1 = torch.nn.Conv2d(in_channels=1,
out_channels=8,
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_1_bn = torch.nn.BatchNorm2d(8)
#########################
### 2nd residual block
#########################
# 14x14x8 => 7x7x16
self.conv_3 = torch.nn.Conv2d(in_channels=8,
out_channels=16,
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_3_bn = torch.nn.BatchNorm2d(16)
# 7x7x16 => 7x7x32
self.conv_4 = torch.nn.Conv2d(in_channels=16,
out_channels=32,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_4_bn = torch.nn.BatchNorm2d(32)
# 14x14x8 => 7x7x32
self.conv_shortcut_2 = torch.nn.Conv2d(in_channels=8,
out_channels=32,
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_2_bn = torch.nn.BatchNorm2d(32)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
out = self.conv_1(x) # 28x28x1 => 14x14x4
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out) # 14x14x4 => 714x14x8
out = self.conv_2_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_1(shortcut)
shortcut = self.conv_shortcut_1_bn(shortcut)
out += shortcut
out = F.relu(out)
#########################
### 2nd residual block
#########################
shortcut = out
out = self.conv_3(out) # 14x14x8 => 7x7x16
out = self.conv_3_bn(out)
out = F.relu(out)
out = self.conv_4(out) # 7x7x16 => 7x7x32
out = self.conv_4_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_2(shortcut)
shortcut = self.conv_shortcut_2_bn(shortcut)
out += shortcut
out = F.relu(out)
#########################
### Fully connected
#########################
logits = self.linear_1(out.view(-1, 7*7*32))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/469 | Cost: 2.3318
Epoch: 001/010 | Batch 050/469 | Cost: 0.1491
Epoch: 001/010 | Batch 100/469 | Cost: 0.2615
Epoch: 001/010 | Batch 150/469 | Cost: 0.0847
Epoch: 001/010 | Batch 200/469 | Cost: 0.1427
Epoch: 001/010 | Batch 250/469 | Cost: 0.1739
Epoch: 001/010 | Batch 300/469 | Cost: 0.1558
Epoch: 001/010 | Batch 350/469 | Cost: 0.0684
Epoch: 001/010 | Batch 400/469 | Cost: 0.0717
Epoch: 001/010 | Batch 450/469 | Cost: 0.0785
Epoch: 001/010 training accuracy: 97.90%
Epoch: 002/010 | Batch 000/469 | Cost: 0.0582
Epoch: 002/010 | Batch 050/469 | Cost: 0.1199
Epoch: 002/010 | Batch 100/469 | Cost: 0.0918
Epoch: 002/010 | Batch 150/469 | Cost: 0.0247
Epoch: 002/010 | Batch 200/469 | Cost: 0.0314
Epoch: 002/010 | Batch 250/469 | Cost: 0.0759
Epoch: 002/010 | Batch 300/469 | Cost: 0.0280
Epoch: 002/010 | Batch 350/469 | Cost: 0.0391
Epoch: 002/010 | Batch 400/469 | Cost: 0.0431
Epoch: 002/010 | Batch 450/469 | Cost: 0.0455
Epoch: 002/010 training accuracy: 98.16%
Epoch: 003/010 | Batch 000/469 | Cost: 0.0303
Epoch: 003/010 | Batch 050/469 | Cost: 0.0433
Epoch: 003/010 | Batch 100/469 | Cost: 0.0465
Epoch: 003/010 | Batch 150/469 | Cost: 0.0243
Epoch: 003/010 | Batch 200/469 | Cost: 0.0258
Epoch: 003/010 | Batch 250/469 | Cost: 0.0403
Epoch: 003/010 | Batch 300/469 | Cost: 0.1024
Epoch: 003/010 | Batch 350/469 | Cost: 0.0241
Epoch: 003/010 | Batch 400/469 | Cost: 0.0299
Epoch: 003/010 | Batch 450/469 | Cost: 0.0354
Epoch: 003/010 training accuracy: 98.08%
Epoch: 004/010 | Batch 000/469 | Cost: 0.0471
Epoch: 004/010 | Batch 050/469 | Cost: 0.0954
Epoch: 004/010 | Batch 100/469 | Cost: 0.0073
Epoch: 004/010 | Batch 150/469 | Cost: 0.0531
Epoch: 004/010 | Batch 200/469 | Cost: 0.0493
Epoch: 004/010 | Batch 250/469 | Cost: 0.1070
Epoch: 004/010 | Batch 300/469 | Cost: 0.0205
Epoch: 004/010 | Batch 350/469 | Cost: 0.0270
Epoch: 004/010 | Batch 400/469 | Cost: 0.0817
Epoch: 004/010 | Batch 450/469 | Cost: 0.0182
Epoch: 004/010 training accuracy: 98.70%
Epoch: 005/010 | Batch 000/469 | Cost: 0.0691
Epoch: 005/010 | Batch 050/469 | Cost: 0.0326
Epoch: 005/010 | Batch 100/469 | Cost: 0.0041
Epoch: 005/010 | Batch 150/469 | Cost: 0.0774
Epoch: 005/010 | Batch 200/469 | Cost: 0.1223
Epoch: 005/010 | Batch 250/469 | Cost: 0.0329
Epoch: 005/010 | Batch 300/469 | Cost: 0.0479
Epoch: 005/010 | Batch 350/469 | Cost: 0.0696
Epoch: 005/010 | Batch 400/469 | Cost: 0.0504
Epoch: 005/010 | Batch 450/469 | Cost: 0.0736
Epoch: 005/010 training accuracy: 98.38%
Epoch: 006/010 | Batch 000/469 | Cost: 0.0318
Epoch: 006/010 | Batch 050/469 | Cost: 0.0303
Epoch: 006/010 | Batch 100/469 | Cost: 0.0267
Epoch: 006/010 | Batch 150/469 | Cost: 0.0912
Epoch: 006/010 | Batch 200/469 | Cost: 0.0131
Epoch: 006/010 | Batch 250/469 | Cost: 0.0164
Epoch: 006/010 | Batch 300/469 | Cost: 0.0109
Epoch: 006/010 | Batch 350/469 | Cost: 0.0699
Epoch: 006/010 | Batch 400/469 | Cost: 0.0030
Epoch: 006/010 | Batch 450/469 | Cost: 0.0237
Epoch: 006/010 training accuracy: 98.74%
Epoch: 007/010 | Batch 000/469 | Cost: 0.0214
Epoch: 007/010 | Batch 050/469 | Cost: 0.0097
Epoch: 007/010 | Batch 100/469 | Cost: 0.0292
Epoch: 007/010 | Batch 150/469 | Cost: 0.0648
Epoch: 007/010 | Batch 200/469 | Cost: 0.0044
Epoch: 007/010 | Batch 250/469 | Cost: 0.0557
Epoch: 007/010 | Batch 300/469 | Cost: 0.0139
Epoch: 007/010 | Batch 350/469 | Cost: 0.0809
Epoch: 007/010 | Batch 400/469 | Cost: 0.0285
Epoch: 007/010 | Batch 450/469 | Cost: 0.0050
Epoch: 007/010 training accuracy: 98.82%
Epoch: 008/010 | Batch 000/469 | Cost: 0.0890
Epoch: 008/010 | Batch 050/469 | Cost: 0.0685
Epoch: 008/010 | Batch 100/469 | Cost: 0.0274
Epoch: 008/010 | Batch 150/469 | Cost: 0.0187
Epoch: 008/010 | Batch 200/469 | Cost: 0.0268
Epoch: 008/010 | Batch 250/469 | Cost: 0.1681
Epoch: 008/010 | Batch 300/469 | Cost: 0.0167
Epoch: 008/010 | Batch 350/469 | Cost: 0.0518
Epoch: 008/010 | Batch 400/469 | Cost: 0.0138
Epoch: 008/010 | Batch 450/469 | Cost: 0.0270
Epoch: 008/010 training accuracy: 99.08%
Epoch: 009/010 | Batch 000/469 | Cost: 0.0458
Epoch: 009/010 | Batch 050/469 | Cost: 0.0039
Epoch: 009/010 | Batch 100/469 | Cost: 0.0597
Epoch: 009/010 | Batch 150/469 | Cost: 0.0120
Epoch: 009/010 | Batch 200/469 | Cost: 0.0580
Epoch: 009/010 | Batch 250/469 | Cost: 0.0280
Epoch: 009/010 | Batch 300/469 | Cost: 0.0570
Epoch: 009/010 | Batch 350/469 | Cost: 0.0831
Epoch: 009/010 | Batch 400/469 | Cost: 0.0732
Epoch: 009/010 | Batch 450/469 | Cost: 0.0327
Epoch: 009/010 training accuracy: 99.05%
Epoch: 010/010 | Batch 000/469 | Cost: 0.0312
Epoch: 010/010 | Batch 050/469 | Cost: 0.0130
Epoch: 010/010 | Batch 100/469 | Cost: 0.0052
Epoch: 010/010 | Batch 150/469 | Cost: 0.0188
Epoch: 010/010 | Batch 200/469 | Cost: 0.0362
Epoch: 010/010 | Batch 250/469 | Cost: 0.1085
Epoch: 010/010 | Batch 300/469 | Cost: 0.0004
Epoch: 010/010 | Batch 350/469 | Cost: 0.0299
Epoch: 010/010 | Batch 400/469 | Cost: 0.0769
Epoch: 010/010 | Batch 450/469 | Cost: 0.0247
Epoch: 010/010 training accuracy: 98.87%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 97.91%
###Markdown
ResNet with convolutional blocks for resizing (using a helper class)
This is the same network as above but uses a `ResidualBlock` helper class.
###Code
class ResidualBlock(torch.nn.Module):
def __init__(self, channels):
super(ResidualBlock, self).__init__()
self.conv_1 = torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[1],
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_1_bn = torch.nn.BatchNorm2d(channels[1])
self.conv_2 = torch.nn.Conv2d(in_channels=channels[1],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_2_bn = torch.nn.BatchNorm2d(channels[2])
self.conv_shortcut_1 = torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_1_bn = torch.nn.BatchNorm2d(channels[2])
def forward(self, x):
shortcut = x
out = self.conv_1(x)
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out)
out = self.conv_2_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_1(shortcut)
shortcut = self.conv_shortcut_1_bn(shortcut)
out += shortcut
out = F.relu(out)
return out
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
self.residual_block_1 = ResidualBlock(channels=[1, 4, 8])
self.residual_block_2 = ResidualBlock(channels=[8, 16, 32])
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.residual_block_1.forward(x)
out = self.residual_block_2.forward(out)
logits = self.linear_1(out.view(-1, 7*7*32))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/468 | Cost: 2.3318
Epoch: 001/010 | Batch 050/468 | Cost: 0.1491
Epoch: 001/010 | Batch 100/468 | Cost: 0.2615
Epoch: 001/010 | Batch 150/468 | Cost: 0.0847
Epoch: 001/010 | Batch 200/468 | Cost: 0.1427
Epoch: 001/010 | Batch 250/468 | Cost: 0.1739
Epoch: 001/010 | Batch 300/468 | Cost: 0.1558
Epoch: 001/010 | Batch 350/468 | Cost: 0.0684
Epoch: 001/010 | Batch 400/468 | Cost: 0.0717
Epoch: 001/010 | Batch 450/468 | Cost: 0.0785
Epoch: 001/010 training accuracy: 97.90%
Epoch: 002/010 | Batch 000/468 | Cost: 0.0582
Epoch: 002/010 | Batch 050/468 | Cost: 0.1199
Epoch: 002/010 | Batch 100/468 | Cost: 0.0918
Epoch: 002/010 | Batch 150/468 | Cost: 0.0247
Epoch: 002/010 | Batch 200/468 | Cost: 0.0314
Epoch: 002/010 | Batch 250/468 | Cost: 0.0759
Epoch: 002/010 | Batch 300/468 | Cost: 0.0280
Epoch: 002/010 | Batch 350/468 | Cost: 0.0391
Epoch: 002/010 | Batch 400/468 | Cost: 0.0431
Epoch: 002/010 | Batch 450/468 | Cost: 0.0455
Epoch: 002/010 training accuracy: 98.16%
Epoch: 003/010 | Batch 000/468 | Cost: 0.0303
Epoch: 003/010 | Batch 050/468 | Cost: 0.0433
Epoch: 003/010 | Batch 100/468 | Cost: 0.0465
Epoch: 003/010 | Batch 150/468 | Cost: 0.0243
Epoch: 003/010 | Batch 200/468 | Cost: 0.0258
Epoch: 003/010 | Batch 250/468 | Cost: 0.0403
Epoch: 003/010 | Batch 300/468 | Cost: 0.1024
Epoch: 003/010 | Batch 350/468 | Cost: 0.0241
Epoch: 003/010 | Batch 400/468 | Cost: 0.0299
Epoch: 003/010 | Batch 450/468 | Cost: 0.0354
Epoch: 003/010 training accuracy: 98.08%
Epoch: 004/010 | Batch 000/468 | Cost: 0.0471
Epoch: 004/010 | Batch 050/468 | Cost: 0.0954
Epoch: 004/010 | Batch 100/468 | Cost: 0.0073
Epoch: 004/010 | Batch 150/468 | Cost: 0.0531
Epoch: 004/010 | Batch 200/468 | Cost: 0.0493
Epoch: 004/010 | Batch 250/468 | Cost: 0.1070
Epoch: 004/010 | Batch 300/468 | Cost: 0.0205
Epoch: 004/010 | Batch 350/468 | Cost: 0.0270
Epoch: 004/010 | Batch 400/468 | Cost: 0.0817
Epoch: 004/010 | Batch 450/468 | Cost: 0.0182
Epoch: 004/010 training accuracy: 98.70%
Epoch: 005/010 | Batch 000/468 | Cost: 0.0691
Epoch: 005/010 | Batch 050/468 | Cost: 0.0326
Epoch: 005/010 | Batch 100/468 | Cost: 0.0041
Epoch: 005/010 | Batch 150/468 | Cost: 0.0774
Epoch: 005/010 | Batch 200/468 | Cost: 0.1223
Epoch: 005/010 | Batch 250/468 | Cost: 0.0329
Epoch: 005/010 | Batch 300/468 | Cost: 0.0479
Epoch: 005/010 | Batch 350/468 | Cost: 0.0696
Epoch: 005/010 | Batch 400/468 | Cost: 0.0504
Epoch: 005/010 | Batch 450/468 | Cost: 0.0736
Epoch: 005/010 training accuracy: 98.38%
Epoch: 006/010 | Batch 000/468 | Cost: 0.0318
Epoch: 006/010 | Batch 050/468 | Cost: 0.0303
Epoch: 006/010 | Batch 100/468 | Cost: 0.0267
Epoch: 006/010 | Batch 150/468 | Cost: 0.0912
Epoch: 006/010 | Batch 200/468 | Cost: 0.0131
Epoch: 006/010 | Batch 250/468 | Cost: 0.0164
Epoch: 006/010 | Batch 300/468 | Cost: 0.0109
Epoch: 006/010 | Batch 350/468 | Cost: 0.0699
Epoch: 006/010 | Batch 400/468 | Cost: 0.0030
Epoch: 006/010 | Batch 450/468 | Cost: 0.0237
Epoch: 006/010 training accuracy: 98.74%
Epoch: 007/010 | Batch 000/468 | Cost: 0.0214
Epoch: 007/010 | Batch 050/468 | Cost: 0.0097
Epoch: 007/010 | Batch 100/468 | Cost: 0.0292
Epoch: 007/010 | Batch 150/468 | Cost: 0.0648
Epoch: 007/010 | Batch 200/468 | Cost: 0.0044
Epoch: 007/010 | Batch 250/468 | Cost: 0.0557
Epoch: 007/010 | Batch 300/468 | Cost: 0.0139
Epoch: 007/010 | Batch 350/468 | Cost: 0.0809
Epoch: 007/010 | Batch 400/468 | Cost: 0.0285
Epoch: 007/010 | Batch 450/468 | Cost: 0.0050
Epoch: 007/010 training accuracy: 98.82%
Epoch: 008/010 | Batch 000/468 | Cost: 0.0890
Epoch: 008/010 | Batch 050/468 | Cost: 0.0685
Epoch: 008/010 | Batch 100/468 | Cost: 0.0274
Epoch: 008/010 | Batch 150/468 | Cost: 0.0187
Epoch: 008/010 | Batch 200/468 | Cost: 0.0268
Epoch: 008/010 | Batch 250/468 | Cost: 0.1681
Epoch: 008/010 | Batch 300/468 | Cost: 0.0167
Epoch: 008/010 | Batch 350/468 | Cost: 0.0518
Epoch: 008/010 | Batch 400/468 | Cost: 0.0138
Epoch: 008/010 | Batch 450/468 | Cost: 0.0270
Epoch: 008/010 training accuracy: 99.08%
Epoch: 009/010 | Batch 000/468 | Cost: 0.0458
Epoch: 009/010 | Batch 050/468 | Cost: 0.0039
Epoch: 009/010 | Batch 100/468 | Cost: 0.0597
Epoch: 009/010 | Batch 150/468 | Cost: 0.0120
Epoch: 009/010 | Batch 200/468 | Cost: 0.0580
Epoch: 009/010 | Batch 250/468 | Cost: 0.0280
Epoch: 009/010 | Batch 300/468 | Cost: 0.0570
Epoch: 009/010 | Batch 350/468 | Cost: 0.0831
Epoch: 009/010 | Batch 400/468 | Cost: 0.0732
Epoch: 009/010 | Batch 450/468 | Cost: 0.0327
Epoch: 009/010 training accuracy: 99.05%
Epoch: 010/010 | Batch 000/468 | Cost: 0.0312
Epoch: 010/010 | Batch 050/468 | Cost: 0.0130
Epoch: 010/010 | Batch 100/468 | Cost: 0.0052
Epoch: 010/010 | Batch 150/468 | Cost: 0.0188
Epoch: 010/010 | Batch 200/468 | Cost: 0.0362
Epoch: 010/010 | Batch 250/468 | Cost: 0.1085
Epoch: 010/010 | Batch 300/468 | Cost: 0.0004
Epoch: 010/010 | Batch 350/468 | Cost: 0.0299
Epoch: 010/010 | Batch 400/468 | Cost: 0.0769
Epoch: 010/010 | Batch 450/468 | Cost: 0.0247
Epoch: 010/010 training accuracy: 98.87%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
%watermark -iv
###Output
numpy 1.15.4
torch 1.0.1.post2
###Markdown
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
###Output
Sebastian Raschka
CPython 3.6.4
IPython 6.2.1
torch 0.4.0
###Markdown
- Runs on CPU or GPU (if available)
Model Zoo -- Convolutional ResNet and Residual Blocks
Please note that this example does not implement a really deep ResNet as described in the literature but rather illustrates how the residual blocks described in He et al. [1] can be implemented in PyTorch.
- [1] He, Kaiming, et al. "Deep residual learning for image recognition." *Proceedings of the IEEE conference on computer vision and pattern recognition*. 2016.
Imports
###Code
import numpy as np
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms
###Output
_____no_output_____
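###Markdown
For reference, the residual mapping from He et al. [1] that the blocks below implement can be written as $\mathbf{y} = \mathcal{F}(\mathbf{x}) + \mathbf{x}$, where $\mathcal{F}$ is the main path (convolution, batch norm, ReLU) and $\mathbf{x}$ enters through the shortcut, either unchanged (identity block) or resized by a strided 1x1 convolution when the dimensions change.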
###Markdown
Settings and Dataset
###Code
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 10
batch_size = 128
# Architecture
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
###Output
Image batch dimensions: torch.Size([128, 1, 28, 28])
Image label dimensions: torch.Size([128])
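###Markdown
As a quick check of the 0-1 scaling mentioned in the comment above (a small added snippet, not part of the original run; it reuses the `images` batch from the loop above):
###Code
# Added check: transforms.ToTensor() maps the raw 0-255 pixel values to [0, 1]
print('Pixel value range: %.1f to %.1f' % (images.min().item(), images.max().item()))
###Output
_____no_output_____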
###Markdown
ResNet with identity blocks
The following code implements the residual blocks with skip connections such that the input passed via the shortcut matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is implemented in the code below:
###Code
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
# 28x28x1 => 28x28x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_1_bn = torch.nn.BatchNorm2d(4)
# 28x28x4 => 28x28x1
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1)
self.conv_2_bn = torch.nn.BatchNorm2d(1)
#########################
### 2nd residual block
#########################
# 28x28x1 => 28x28x4
self.conv_3 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_3_bn = torch.nn.BatchNorm2d(4)
# 28x28x4 => 28x28x1
self.conv_4 = torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1)
self.conv_4_bn = torch.nn.BatchNorm2d(1)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(28*28*1, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
out = self.conv_1(x)
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out)
out = self.conv_2_bn(out)
out += shortcut
out = F.relu(out)
#########################
### 2nd residual block
#########################
shortcut = x
out = self.conv_3(x)
out = self.conv_3_bn(out)
out = F.relu(out)
out = self.conv_4(out)
out = self.conv_4_bn(out)
out += shortcut
out = F.relu(out)
#########################
### Fully connected
#########################
logits = self.linear_1(out.view(-1, 28*28*1))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
##########################
### COST AND OPTIMIZER
##########################
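# Note: CrossEntropyLoss applies log-softmax internally, so the training loop below
# passes it the raw logits; the separately returned probas are only used for predictions.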
cost_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
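###Markdown
A quick sanity check of the identity property described above (a small added sketch, assuming the `model`, `device`, and `torch` objects from the cells above): the identity-style residual blocks preserve the 28x28x1 input shape, so the flattened features match the `28*28*1` linear layer and the logits come out as `(batch_size, num_classes)`.
###Code
# Added sketch (assumes `model`, `device`, and `torch` from the cells above)
dummy = torch.randn(8, 1, 28, 28).to(device)
with torch.no_grad():
    logits, probas = model(dummy)
print(logits.shape)        # expected: torch.Size([8, 10])
print(probas.sum(dim=1))   # softmax rows sum to ~1
###Output
_____no_output_____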
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = cost_fn(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/468 | Cost: 2.3832
Epoch: 001/010 | Batch 050/468 | Cost: 0.2933
Epoch: 001/010 | Batch 100/468 | Cost: 0.3032
Epoch: 001/010 | Batch 150/468 | Cost: 0.3298
Epoch: 001/010 | Batch 200/468 | Cost: 0.2957
Epoch: 001/010 | Batch 250/468 | Cost: 0.3192
Epoch: 001/010 | Batch 300/468 | Cost: 0.2250
Epoch: 001/010 | Batch 350/468 | Cost: 0.3817
Epoch: 001/010 | Batch 400/468 | Cost: 0.2709
Epoch: 001/010 | Batch 450/468 | Cost: 0.4793
Epoch: 001/010 training accuracy: 92.15%
Epoch: 002/010 | Batch 000/468 | Cost: 0.2556
Epoch: 002/010 | Batch 050/468 | Cost: 0.2162
Epoch: 002/010 | Batch 100/468 | Cost: 0.2801
Epoch: 002/010 | Batch 150/468 | Cost: 0.2073
Epoch: 002/010 | Batch 200/468 | Cost: 0.4200
Epoch: 002/010 | Batch 250/468 | Cost: 0.3207
Epoch: 002/010 | Batch 300/468 | Cost: 0.2874
Epoch: 002/010 | Batch 350/468 | Cost: 0.2418
Epoch: 002/010 | Batch 400/468 | Cost: 0.3066
Epoch: 002/010 | Batch 450/468 | Cost: 0.2165
Epoch: 002/010 training accuracy: 92.07%
Epoch: 003/010 | Batch 000/468 | Cost: 0.2350
Epoch: 003/010 | Batch 050/468 | Cost: 0.2049
Epoch: 003/010 | Batch 100/468 | Cost: 0.1669
Epoch: 003/010 | Batch 150/468 | Cost: 0.2425
Epoch: 003/010 | Batch 200/468 | Cost: 0.3889
Epoch: 003/010 | Batch 250/468 | Cost: 0.3450
Epoch: 003/010 | Batch 300/468 | Cost: 0.2193
Epoch: 003/010 | Batch 350/468 | Cost: 0.3778
Epoch: 003/010 | Batch 400/468 | Cost: 0.3700
Epoch: 003/010 | Batch 450/468 | Cost: 0.3343
Epoch: 003/010 training accuracy: 93.08%
Epoch: 004/010 | Batch 000/468 | Cost: 0.1768
Epoch: 004/010 | Batch 050/468 | Cost: 0.1431
Epoch: 004/010 | Batch 100/468 | Cost: 0.2628
Epoch: 004/010 | Batch 150/468 | Cost: 0.2038
Epoch: 004/010 | Batch 200/468 | Cost: 0.1800
Epoch: 004/010 | Batch 250/468 | Cost: 0.2350
Epoch: 004/010 | Batch 300/468 | Cost: 0.3844
Epoch: 004/010 | Batch 350/468 | Cost: 0.1684
Epoch: 004/010 | Batch 400/468 | Cost: 0.4000
Epoch: 004/010 | Batch 450/468 | Cost: 0.2594
Epoch: 004/010 training accuracy: 92.81%
Epoch: 005/010 | Batch 000/468 | Cost: 0.2613
Epoch: 005/010 | Batch 050/468 | Cost: 0.2362
Epoch: 005/010 | Batch 100/468 | Cost: 0.2833
Epoch: 005/010 | Batch 150/468 | Cost: 0.2685
Epoch: 005/010 | Batch 200/468 | Cost: 0.3303
Epoch: 005/010 | Batch 250/468 | Cost: 0.1885
Epoch: 005/010 | Batch 300/468 | Cost: 0.1699
Epoch: 005/010 | Batch 350/468 | Cost: 0.3064
Epoch: 005/010 | Batch 400/468 | Cost: 0.1407
Epoch: 005/010 | Batch 450/468 | Cost: 0.2240
Epoch: 005/010 training accuracy: 93.41%
Epoch: 006/010 | Batch 000/468 | Cost: 0.2983
Epoch: 006/010 | Batch 050/468 | Cost: 0.2638
Epoch: 006/010 | Batch 100/468 | Cost: 0.1992
Epoch: 006/010 | Batch 150/468 | Cost: 0.2698
Epoch: 006/010 | Batch 200/468 | Cost: 0.1564
Epoch: 006/010 | Batch 250/468 | Cost: 0.1708
Epoch: 006/010 | Batch 300/468 | Cost: 0.2452
Epoch: 006/010 | Batch 350/468 | Cost: 0.2990
Epoch: 006/010 | Batch 400/468 | Cost: 0.1879
Epoch: 006/010 | Batch 450/468 | Cost: 0.2715
Epoch: 006/010 training accuracy: 93.00%
Epoch: 007/010 | Batch 000/468 | Cost: 0.3003
Epoch: 007/010 | Batch 050/468 | Cost: 0.2952
Epoch: 007/010 | Batch 100/468 | Cost: 0.3288
Epoch: 007/010 | Batch 150/468 | Cost: 0.2518
Epoch: 007/010 | Batch 200/468 | Cost: 0.2531
Epoch: 007/010 | Batch 250/468 | Cost: 0.2788
Epoch: 007/010 | Batch 300/468 | Cost: 0.2064
Epoch: 007/010 | Batch 350/468 | Cost: 0.2827
Epoch: 007/010 | Batch 400/468 | Cost: 0.1999
Epoch: 007/010 | Batch 450/468 | Cost: 0.1225
Epoch: 007/010 training accuracy: 93.22%
Epoch: 008/010 | Batch 000/468 | Cost: 0.3066
Epoch: 008/010 | Batch 050/468 | Cost: 0.3116
Epoch: 008/010 | Batch 100/468 | Cost: 0.1669
Epoch: 008/010 | Batch 150/468 | Cost: 0.2639
Epoch: 008/010 | Batch 200/468 | Cost: 0.1578
Epoch: 008/010 | Batch 250/468 | Cost: 0.3325
Epoch: 008/010 | Batch 300/468 | Cost: 0.1173
Epoch: 008/010 | Batch 350/468 | Cost: 0.1496
Epoch: 008/010 | Batch 400/468 | Cost: 0.3393
Epoch: 008/010 | Batch 450/468 | Cost: 0.1513
Epoch: 008/010 training accuracy: 92.88%
Epoch: 009/010 | Batch 000/468 | Cost: 0.3105
Epoch: 009/010 | Batch 050/468 | Cost: 0.2021
Epoch: 009/010 | Batch 100/468 | Cost: 0.3218
Epoch: 009/010 | Batch 150/468 | Cost: 0.1331
Epoch: 009/010 | Batch 200/468 | Cost: 0.4478
Epoch: 009/010 | Batch 250/468 | Cost: 0.1711
Epoch: 009/010 | Batch 300/468 | Cost: 0.1965
Epoch: 009/010 | Batch 350/468 | Cost: 0.1369
Epoch: 009/010 | Batch 400/468 | Cost: 0.2484
Epoch: 009/010 | Batch 450/468 | Cost: 0.2101
Epoch: 009/010 training accuracy: 93.44%
Epoch: 010/010 | Batch 000/468 | Cost: 0.3072
Epoch: 010/010 | Batch 050/468 | Cost: 0.1497
Epoch: 010/010 | Batch 100/468 | Cost: 0.2636
Epoch: 010/010 | Batch 150/468 | Cost: 0.2929
Epoch: 010/010 | Batch 200/468 | Cost: 0.4834
Epoch: 010/010 | Batch 250/468 | Cost: 0.2454
Epoch: 010/010 | Batch 300/468 | Cost: 0.1963
Epoch: 010/010 | Batch 350/468 | Cost: 0.2547
Epoch: 010/010 | Batch 400/468 | Cost: 0.2669
Epoch: 010/010 | Batch 450/468 | Cost: 0.4119
Epoch: 010/010 training accuracy: 93.71%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 92.69%
###Markdown
ResNet with convolutional blocks for resizing
The following code implements the residual blocks with skip connections such that the input passed via the shortcut is resized (by a strided 1x1 convolution) to match the dimensions of the main path's output. Such a residual block is implemented in the code below:
###Code
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
# 28x28x1 => 14x14x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_1_bn = torch.nn.BatchNorm2d(4)
# 14x14x4 => 14x14x8
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=8,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_2_bn = torch.nn.BatchNorm2d(8)
# 28x28x1 => 14x14x8
self.conv_shortcut_1 = torch.nn.Conv2d(in_channels=1,
out_channels=8,
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_1_bn = torch.nn.BatchNorm2d(8)
#########################
### 2nd residual block
#########################
# 14x14x8 => 7x7x16
self.conv_3 = torch.nn.Conv2d(in_channels=8,
out_channels=16,
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_3_bn = torch.nn.BatchNorm2d(16)
# 7x7x16 => 7x7x32
self.conv_4 = torch.nn.Conv2d(in_channels=16,
out_channels=32,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_4_bn = torch.nn.BatchNorm2d(32)
# 14x14x8 => 7x7x32
self.conv_shortcut_2 = torch.nn.Conv2d(in_channels=8,
out_channels=32,
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_2_bn = torch.nn.BatchNorm2d(32)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
out = self.conv_1(x) # 28x28x1 => 14x14x4
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out) # 14x14x4 => 14x14x8
out = self.conv_2_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_1(shortcut)
shortcut = self.conv_shortcut_1_bn(shortcut)
out += shortcut
out = F.relu(out)
#########################
### 2nd residual block
#########################
shortcut = out
out = self.conv_3(out) # 14x14x8 => 7x7x16
out = self.conv_3_bn(out)
out = F.relu(out)
out = self.conv_4(out) # 7x7x16 => 7x7x32
out = self.conv_4_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_2(shortcut)
shortcut = self.conv_shortcut_2_bn(shortcut)
out += shortcut
out = F.relu(out)
#########################
### Fully connected
#########################
logits = self.linear_1(out.view(-1, 7*7*32))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
##########################
### COST AND OPTIMIZER
##########################
cost_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
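###Markdown
To make the dimension matching concrete, this small added sketch (assuming `model`, `device`, `torch`, and `F` from the cells above) runs a random batch through the first block's main path and its shortcut separately; both come out as 14x14 with 8 channels, which is why they can be summed.
###Code
# Added shape walk-through (assumes `model`, `device`, `torch`, and `F` from above)
x = torch.randn(8, 1, 28, 28).to(device)
with torch.no_grad():
    main = model.conv_2(F.relu(model.conv_1_bn(model.conv_1(x))))  # 28x28x1 -> 14x14x4 -> 14x14x8
    short = model.conv_shortcut_1_bn(model.conv_shortcut_1(x))     # 28x28x1 -> 14x14x8
print(main.shape, short.shape)  # both torch.Size([8, 8, 14, 14])
###Output
_____no_output_____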
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = cost_fn(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/468 | Cost: 2.3215
Epoch: 001/010 | Batch 050/468 | Cost: 0.1400
Epoch: 001/010 | Batch 100/468 | Cost: 0.0999
Epoch: 001/010 | Batch 150/468 | Cost: 0.1039
Epoch: 001/010 | Batch 200/468 | Cost: 0.0977
Epoch: 001/010 | Batch 250/468 | Cost: 0.1122
Epoch: 001/010 | Batch 300/468 | Cost: 0.0378
Epoch: 001/010 | Batch 350/468 | Cost: 0.2707
Epoch: 001/010 | Batch 400/468 | Cost: 0.1524
Epoch: 001/010 | Batch 450/468 | Cost: 0.1140
Epoch: 001/010 training accuracy: 97.97%
Epoch: 002/010 | Batch 000/468 | Cost: 0.1217
Epoch: 002/010 | Batch 050/468 | Cost: 0.0389
Epoch: 002/010 | Batch 100/468 | Cost: 0.1613
Epoch: 002/010 | Batch 150/468 | Cost: 0.1093
Epoch: 002/010 | Batch 200/468 | Cost: 0.0148
Epoch: 002/010 | Batch 250/468 | Cost: 0.0451
Epoch: 002/010 | Batch 300/468 | Cost: 0.1174
Epoch: 002/010 | Batch 350/468 | Cost: 0.0787
Epoch: 002/010 | Batch 400/468 | Cost: 0.0101
Epoch: 002/010 | Batch 450/468 | Cost: 0.0469
Epoch: 002/010 training accuracy: 97.59%
Epoch: 003/010 | Batch 000/468 | Cost: 0.1096
Epoch: 003/010 | Batch 050/468 | Cost: 0.0058
Epoch: 003/010 | Batch 100/468 | Cost: 0.0121
Epoch: 003/010 | Batch 150/468 | Cost: 0.0570
Epoch: 003/010 | Batch 200/468 | Cost: 0.0225
Epoch: 003/010 | Batch 250/468 | Cost: 0.0808
Epoch: 003/010 | Batch 300/468 | Cost: 0.0158
Epoch: 003/010 | Batch 350/468 | Cost: 0.0852
Epoch: 003/010 | Batch 400/468 | Cost: 0.0216
Epoch: 003/010 | Batch 450/468 | Cost: 0.0628
Epoch: 003/010 training accuracy: 98.78%
Epoch: 004/010 | Batch 000/468 | Cost: 0.0254
Epoch: 004/010 | Batch 050/468 | Cost: 0.0576
Epoch: 004/010 | Batch 100/468 | Cost: 0.0211
Epoch: 004/010 | Batch 150/468 | Cost: 0.0858
Epoch: 004/010 | Batch 200/468 | Cost: 0.0120
Epoch: 004/010 | Batch 250/468 | Cost: 0.0116
Epoch: 004/010 | Batch 300/468 | Cost: 0.0428
Epoch: 004/010 | Batch 350/468 | Cost: 0.0174
Epoch: 004/010 | Batch 400/468 | Cost: 0.0222
Epoch: 004/010 | Batch 450/468 | Cost: 0.0428
Epoch: 004/010 training accuracy: 98.64%
Epoch: 005/010 | Batch 000/468 | Cost: 0.0326
Epoch: 005/010 | Batch 050/468 | Cost: 0.0246
Epoch: 005/010 | Batch 100/468 | Cost: 0.0205
Epoch: 005/010 | Batch 150/468 | Cost: 0.0231
Epoch: 005/010 | Batch 200/468 | Cost: 0.0261
Epoch: 005/010 | Batch 250/468 | Cost: 0.0276
Epoch: 005/010 | Batch 300/468 | Cost: 0.1495
Epoch: 005/010 | Batch 350/468 | Cost: 0.0353
Epoch: 005/010 | Batch 400/468 | Cost: 0.0118
Epoch: 005/010 | Batch 450/468 | Cost: 0.0669
Epoch: 005/010 training accuracy: 98.90%
Epoch: 006/010 | Batch 000/468 | Cost: 0.0154
Epoch: 006/010 | Batch 050/468 | Cost: 0.0188
Epoch: 006/010 | Batch 100/468 | Cost: 0.0347
Epoch: 006/010 | Batch 150/468 | Cost: 0.0365
Epoch: 006/010 | Batch 200/468 | Cost: 0.0390
Epoch: 006/010 | Batch 250/468 | Cost: 0.0401
Epoch: 006/010 | Batch 300/468 | Cost: 0.0283
Epoch: 006/010 | Batch 350/468 | Cost: 0.0516
Epoch: 006/010 | Batch 400/468 | Cost: 0.0139
Epoch: 006/010 | Batch 450/468 | Cost: 0.0431
Epoch: 006/010 training accuracy: 98.94%
Epoch: 007/010 | Batch 000/468 | Cost: 0.0522
Epoch: 007/010 | Batch 050/468 | Cost: 0.0097
Epoch: 007/010 | Batch 100/468 | Cost: 0.0151
Epoch: 007/010 | Batch 150/468 | Cost: 0.0262
Epoch: 007/010 | Batch 200/468 | Cost: 0.0387
Epoch: 007/010 | Batch 250/468 | Cost: 0.0179
Epoch: 007/010 | Batch 300/468 | Cost: 0.0123
Epoch: 007/010 | Batch 350/468 | Cost: 0.0684
Epoch: 007/010 | Batch 400/468 | Cost: 0.0204
Epoch: 007/010 | Batch 450/468 | Cost: 0.0266
Epoch: 007/010 training accuracy: 99.15%
Epoch: 008/010 | Batch 000/468 | Cost: 0.0702
Epoch: 008/010 | Batch 050/468 | Cost: 0.0179
Epoch: 008/010 | Batch 100/468 | Cost: 0.0168
Epoch: 008/010 | Batch 150/468 | Cost: 0.0532
Epoch: 008/010 | Batch 200/468 | Cost: 0.0056
Epoch: 008/010 | Batch 250/468 | Cost: 0.0221
Epoch: 008/010 | Batch 300/468 | Cost: 0.0038
Epoch: 008/010 | Batch 350/468 | Cost: 0.0411
Epoch: 008/010 | Batch 400/468 | Cost: 0.0800
Epoch: 008/010 | Batch 450/468 | Cost: 0.0215
Epoch: 008/010 training accuracy: 99.22%
Epoch: 009/010 | Batch 000/468 | Cost: 0.0187
Epoch: 009/010 | Batch 050/468 | Cost: 0.0071
Epoch: 009/010 | Batch 100/468 | Cost: 0.0047
Epoch: 009/010 | Batch 150/468 | Cost: 0.0527
Epoch: 009/010 | Batch 200/468 | Cost: 0.0789
Epoch: 009/010 | Batch 250/468 | Cost: 0.0323
Epoch: 009/010 | Batch 300/468 | Cost: 0.0354
Epoch: 009/010 | Batch 350/468 | Cost: 0.0733
Epoch: 009/010 | Batch 400/468 | Cost: 0.0894
Epoch: 009/010 | Batch 450/468 | Cost: 0.0228
Epoch: 009/010 training accuracy: 99.22%
Epoch: 010/010 | Batch 000/468 | Cost: 0.0478
Epoch: 010/010 | Batch 050/468 | Cost: 0.0650
Epoch: 010/010 | Batch 100/468 | Cost: 0.0139
Epoch: 010/010 | Batch 150/468 | Cost: 0.0285
Epoch: 010/010 | Batch 200/468 | Cost: 0.0067
Epoch: 010/010 | Batch 250/468 | Cost: 0.0159
Epoch: 010/010 | Batch 300/468 | Cost: 0.0224
Epoch: 010/010 | Batch 350/468 | Cost: 0.0286
Epoch: 010/010 | Batch 400/468 | Cost: 0.0996
Epoch: 010/010 | Batch 450/468 | Cost: 0.0821
Epoch: 010/010 training accuracy: 99.31%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 98.42%
###Markdown
ResNet with convolutional blocks for resizing (using a helper class)
This is the same network as above, but it uses a `ResidualBlock` helper class.
###Code
class ResidualBlock(torch.nn.Module):
def __init__(self, channels):
super(ResidualBlock, self).__init__()
self.conv_1 = torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[1],
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_1_bn = torch.nn.BatchNorm2d(channels[1])
self.conv_2 = torch.nn.Conv2d(in_channels=channels[1],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_2_bn = torch.nn.BatchNorm2d(channels[2])
self.conv_shortcut_1 = torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_1_bn = torch.nn.BatchNorm2d(channels[2])
def forward(self, x):
shortcut = x
out = self.conv_1(x)
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out)
out = self.conv_2_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_1(shortcut)
shortcut = self.conv_shortcut_1_bn(shortcut)
out += shortcut
out = F.relu(out)
return out
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
self.residual_block_1 = ResidualBlock(channels=[1, 4, 8])
self.residual_block_2 = ResidualBlock(channels=[8, 16, 32])
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.residual_block_1(x)
out = self.residual_block_2(out)
logits = self.linear_1(out.view(-1, 7*7*32))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
##########################
### COST AND OPTIMIZER
##########################
cost_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
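###Markdown
A minimal usage sketch for the `ResidualBlock` helper (added here for illustration, not part of the original notebook run): `channels=[in, mid, out]`, and each block halves the spatial resolution through its stride-2 convolutions.
###Code
# Added usage sketch (assumes the ResidualBlock class and `device` defined above)
block = ResidualBlock(channels=[1, 4, 8]).to(device)
x = torch.randn(2, 1, 28, 28).to(device)
with torch.no_grad():
    y = block(x)
print(y.shape)  # torch.Size([2, 8, 14, 14]) -- half the resolution, channels[-1] feature maps
###Output
_____no_output_____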
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = cost_fn(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/468 | Cost: 2.3215
Epoch: 001/010 | Batch 050/468 | Cost: 0.1400
Epoch: 001/010 | Batch 100/468 | Cost: 0.0999
Epoch: 001/010 | Batch 150/468 | Cost: 0.1039
Epoch: 001/010 | Batch 200/468 | Cost: 0.0977
Epoch: 001/010 | Batch 250/468 | Cost: 0.1122
Epoch: 001/010 | Batch 300/468 | Cost: 0.0378
Epoch: 001/010 | Batch 350/468 | Cost: 0.2707
Epoch: 001/010 | Batch 400/468 | Cost: 0.1524
Epoch: 001/010 | Batch 450/468 | Cost: 0.1140
Epoch: 001/010 training accuracy: 97.97%
Epoch: 002/010 | Batch 000/468 | Cost: 0.1217
Epoch: 002/010 | Batch 050/468 | Cost: 0.0389
Epoch: 002/010 | Batch 100/468 | Cost: 0.1613
Epoch: 002/010 | Batch 150/468 | Cost: 0.1093
Epoch: 002/010 | Batch 200/468 | Cost: 0.0148
Epoch: 002/010 | Batch 250/468 | Cost: 0.0451
Epoch: 002/010 | Batch 300/468 | Cost: 0.1174
Epoch: 002/010 | Batch 350/468 | Cost: 0.0787
Epoch: 002/010 | Batch 400/468 | Cost: 0.0101
Epoch: 002/010 | Batch 450/468 | Cost: 0.0469
Epoch: 002/010 training accuracy: 97.59%
Epoch: 003/010 | Batch 000/468 | Cost: 0.1096
Epoch: 003/010 | Batch 050/468 | Cost: 0.0058
Epoch: 003/010 | Batch 100/468 | Cost: 0.0121
Epoch: 003/010 | Batch 150/468 | Cost: 0.0570
Epoch: 003/010 | Batch 200/468 | Cost: 0.0225
Epoch: 003/010 | Batch 250/468 | Cost: 0.0808
Epoch: 003/010 | Batch 300/468 | Cost: 0.0158
Epoch: 003/010 | Batch 350/468 | Cost: 0.0852
Epoch: 003/010 | Batch 400/468 | Cost: 0.0216
Epoch: 003/010 | Batch 450/468 | Cost: 0.0628
Epoch: 003/010 training accuracy: 98.78%
Epoch: 004/010 | Batch 000/468 | Cost: 0.0254
Epoch: 004/010 | Batch 050/468 | Cost: 0.0576
Epoch: 004/010 | Batch 100/468 | Cost: 0.0211
Epoch: 004/010 | Batch 150/468 | Cost: 0.0858
Epoch: 004/010 | Batch 200/468 | Cost: 0.0120
Epoch: 004/010 | Batch 250/468 | Cost: 0.0116
Epoch: 004/010 | Batch 300/468 | Cost: 0.0428
Epoch: 004/010 | Batch 350/468 | Cost: 0.0174
Epoch: 004/010 | Batch 400/468 | Cost: 0.0222
Epoch: 004/010 | Batch 450/468 | Cost: 0.0428
Epoch: 004/010 training accuracy: 98.64%
Epoch: 005/010 | Batch 000/468 | Cost: 0.0326
Epoch: 005/010 | Batch 050/468 | Cost: 0.0246
Epoch: 005/010 | Batch 100/468 | Cost: 0.0205
Epoch: 005/010 | Batch 150/468 | Cost: 0.0231
Epoch: 005/010 | Batch 200/468 | Cost: 0.0261
Epoch: 005/010 | Batch 250/468 | Cost: 0.0276
Epoch: 005/010 | Batch 300/468 | Cost: 0.1495
Epoch: 005/010 | Batch 350/468 | Cost: 0.0353
Epoch: 005/010 | Batch 400/468 | Cost: 0.0118
Epoch: 005/010 | Batch 450/468 | Cost: 0.0669
Epoch: 005/010 training accuracy: 98.90%
Epoch: 006/010 | Batch 000/468 | Cost: 0.0154
Epoch: 006/010 | Batch 050/468 | Cost: 0.0188
Epoch: 006/010 | Batch 100/468 | Cost: 0.0347
Epoch: 006/010 | Batch 150/468 | Cost: 0.0365
Epoch: 006/010 | Batch 200/468 | Cost: 0.0390
Epoch: 006/010 | Batch 250/468 | Cost: 0.0401
Epoch: 006/010 | Batch 300/468 | Cost: 0.0283
Epoch: 006/010 | Batch 350/468 | Cost: 0.0516
Epoch: 006/010 | Batch 400/468 | Cost: 0.0139
Epoch: 006/010 | Batch 450/468 | Cost: 0.0431
Epoch: 006/010 training accuracy: 98.94%
Epoch: 007/010 | Batch 000/468 | Cost: 0.0522
Epoch: 007/010 | Batch 050/468 | Cost: 0.0097
Epoch: 007/010 | Batch 100/468 | Cost: 0.0151
Epoch: 007/010 | Batch 150/468 | Cost: 0.0262
Epoch: 007/010 | Batch 200/468 | Cost: 0.0387
Epoch: 007/010 | Batch 250/468 | Cost: 0.0179
Epoch: 007/010 | Batch 300/468 | Cost: 0.0123
Epoch: 007/010 | Batch 350/468 | Cost: 0.0684
Epoch: 007/010 | Batch 400/468 | Cost: 0.0204
Epoch: 007/010 | Batch 450/468 | Cost: 0.0266
Epoch: 007/010 training accuracy: 99.15%
Epoch: 008/010 | Batch 000/468 | Cost: 0.0702
Epoch: 008/010 | Batch 050/468 | Cost: 0.0179
Epoch: 008/010 | Batch 100/468 | Cost: 0.0168
Epoch: 008/010 | Batch 150/468 | Cost: 0.0532
Epoch: 008/010 | Batch 200/468 | Cost: 0.0056
Epoch: 008/010 | Batch 250/468 | Cost: 0.0221
Epoch: 008/010 | Batch 300/468 | Cost: 0.0038
Epoch: 008/010 | Batch 350/468 | Cost: 0.0411
Epoch: 008/010 | Batch 400/468 | Cost: 0.0800
Epoch: 008/010 | Batch 450/468 | Cost: 0.0215
Epoch: 008/010 training accuracy: 99.22%
Epoch: 009/010 | Batch 000/468 | Cost: 0.0187
Epoch: 009/010 | Batch 050/468 | Cost: 0.0071
Epoch: 009/010 | Batch 100/468 | Cost: 0.0047
Epoch: 009/010 | Batch 150/468 | Cost: 0.0527
Epoch: 009/010 | Batch 200/468 | Cost: 0.0789
Epoch: 009/010 | Batch 250/468 | Cost: 0.0323
Epoch: 009/010 | Batch 300/468 | Cost: 0.0354
Epoch: 009/010 | Batch 350/468 | Cost: 0.0733
Epoch: 009/010 | Batch 400/468 | Cost: 0.0894
Epoch: 009/010 | Batch 450/468 | Cost: 0.0228
Epoch: 009/010 training accuracy: 99.22%
Epoch: 010/010 | Batch 000/468 | Cost: 0.0478
Epoch: 010/010 | Batch 050/468 | Cost: 0.0650
Epoch: 010/010 | Batch 100/468 | Cost: 0.0139
Epoch: 010/010 | Batch 150/468 | Cost: 0.0285
Epoch: 010/010 | Batch 200/468 | Cost: 0.0067
Epoch: 010/010 | Batch 250/468 | Cost: 0.0159
Epoch: 010/010 | Batch 300/468 | Cost: 0.0224
Epoch: 010/010 | Batch 350/468 | Cost: 0.0286
Epoch: 010/010 | Batch 400/468 | Cost: 0.0996
Epoch: 010/010 | Batch 450/468 | Cost: 0.0821
Epoch: 010/010 training accuracy: 99.31%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 98.42%
Epoch: 009/010 | Batch 300/468 | Cost: 0.0354
Epoch: 009/010 | Batch 350/468 | Cost: 0.0733
Epoch: 009/010 | Batch 400/468 | Cost: 0.0894
Epoch: 009/010 | Batch 450/468 | Cost: 0.0228
Epoch: 009/010 training accuracy: 99.22%
Epoch: 010/010 | Batch 000/468 | Cost: 0.0478
Epoch: 010/010 | Batch 050/468 | Cost: 0.0650
Epoch: 010/010 | Batch 100/468 | Cost: 0.0139
Epoch: 010/010 | Batch 150/468 | Cost: 0.0285
Epoch: 010/010 | Batch 200/468 | Cost: 0.0067
Epoch: 010/010 | Batch 250/468 | Cost: 0.0159
Epoch: 010/010 | Batch 300/468 | Cost: 0.0224
Epoch: 010/010 | Batch 350/468 | Cost: 0.0286
Epoch: 010/010 | Batch 400/468 | Cost: 0.0996
Epoch: 010/010 | Batch 450/468 | Cost: 0.0821
Epoch: 010/010 training accuracy: 99.31%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 98.42%
###Markdown
*Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by [Sebastian Raschka](https://sebastianraschka.com). All code examples are released under the [MIT license](https://github.com/rasbt/deep-learning-book/blob/master/LICENSE). If you find this content useful, please consider supporting the work by buying a [copy of the book](https://leanpub.com/ann-and-deeplearning).* Other code examples and content are available on [GitHub](https://github.com/rasbt/deep-learning-book). The PDF and ebook versions of the book are available through [Leanpub](https://leanpub.com/ann-and-deeplearning).
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -v -p torch
###Output
Sebastian Raschka
CPython 3.6.8
IPython 7.2.0
torch 1.0.0
###Markdown
- Runs on CPU or GPU (if available) Model Zoo -- Convolutional ResNet and Residual Blocks Please note that this example does not implement a really deep ResNet as described in the literature but rather illustrates how the residual blocks described in He et al. [1] can be implemented in PyTorch. - [1] He, Kaiming, et al. "Deep residual learning for image recognition." *Proceedings of the IEEE conference on computer vision and pattern recognition*. 2016. Imports
###Code
import time
import numpy as np
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision import transforms
if torch.cuda.is_available():
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
Settings and Dataset
###Code
##########################
### SETTINGS
##########################
# Device
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 10
batch_size = 128
# Architecture
num_classes = 10
##########################
### MNIST DATASET
##########################
# Note transforms.ToTensor() scales input images
# to 0-1 range
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = datasets.MNIST(root='data',
train=False,
transform=transforms.ToTensor())
train_loader = DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
# Checking the dataset
for images, labels in train_loader:
print('Image batch dimensions:', images.shape)
print('Image label dimensions:', labels.shape)
break
###Output
Image batch dimensions: torch.Size([128, 1, 28, 28])
Image label dimensions: torch.Size([128])
###Markdown
ResNet with identity blocks The following code implements the residual blocks with skip connections such that the input passed via the shortcut matches the dimensions of the main path's output, which allows the network to learn identity functions. Such a residual block is illustrated below:
###Code
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
# 28x28x1 => 28x28x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_1_bn = torch.nn.BatchNorm2d(4)
# 28x28x4 => 28x28x1
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1)
self.conv_2_bn = torch.nn.BatchNorm2d(1)
#########################
### 2nd residual block
#########################
# 28x28x1 => 28x28x4
self.conv_3 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_3_bn = torch.nn.BatchNorm2d(4)
# 28x28x4 => 28x28x1
self.conv_4 = torch.nn.Conv2d(in_channels=4,
out_channels=1,
kernel_size=(3, 3),
stride=(1, 1),
padding=1)
self.conv_4_bn = torch.nn.BatchNorm2d(1)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(28*28*1, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
out = self.conv_1(x)
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out)
out = self.conv_2_bn(out)
out += shortcut
out = F.relu(out)
#########################
### 2nd residual block
#########################
shortcut = x
out = self.conv_3(x)
out = self.conv_3_bn(out)
out = F.relu(out)
out = self.conv_4(out)
out = self.conv_4_bn(out)
out += shortcut
out = F.relu(out)
#########################
### Fully connected
#########################
logits = self.linear_1(out.view(-1, 28*28*1))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
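# Illustrative sanity check (a sketch): the main path of the first block preserves the
# 28x28x1 input shape, so the identity shortcut can be added without any resizing.
with torch.no_grad():
    dummy = torch.randn(2, 1, 28, 28).to(device)
    main_path = model.conv_2(F.relu(model.conv_1(dummy)))
    print(dummy.shape, main_path.shape)  # both torch.Size([2, 1, 28, 28])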
###Output
_____no_output_____
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
start_time = time.time()
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))
print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))
###Output
Epoch: 001/010 | Batch 000/469 | Cost: 2.3682
Epoch: 001/010 | Batch 050/469 | Cost: 0.5101
Epoch: 001/010 | Batch 100/469 | Cost: 0.2345
Epoch: 001/010 | Batch 150/469 | Cost: 0.2674
Epoch: 001/010 | Batch 200/469 | Cost: 0.3679
Epoch: 001/010 | Batch 250/469 | Cost: 0.3044
Epoch: 001/010 | Batch 300/469 | Cost: 0.3379
Epoch: 001/010 | Batch 350/469 | Cost: 0.2751
Epoch: 001/010 | Batch 400/469 | Cost: 0.4394
Epoch: 001/010 | Batch 450/469 | Cost: 0.3718
Epoch: 001/010 training accuracy: 92.24%
Time elapsed: 0.22 min
Epoch: 002/010 | Batch 000/469 | Cost: 0.2724
Epoch: 002/010 | Batch 050/469 | Cost: 0.4151
Epoch: 002/010 | Batch 100/469 | Cost: 0.2546
Epoch: 002/010 | Batch 150/469 | Cost: 0.4242
Epoch: 002/010 | Batch 200/469 | Cost: 0.2295
Epoch: 002/010 | Batch 250/469 | Cost: 0.2442
Epoch: 002/010 | Batch 300/469 | Cost: 0.1609
Epoch: 002/010 | Batch 350/469 | Cost: 0.2267
Epoch: 002/010 | Batch 400/469 | Cost: 0.2642
Epoch: 002/010 | Batch 450/469 | Cost: 0.4437
Epoch: 002/010 training accuracy: 92.38%
Time elapsed: 0.44 min
Epoch: 003/010 | Batch 000/469 | Cost: 0.2432
Epoch: 003/010 | Batch 050/469 | Cost: 0.4303
Epoch: 003/010 | Batch 100/469 | Cost: 0.2053
Epoch: 003/010 | Batch 150/469 | Cost: 0.2835
Epoch: 003/010 | Batch 200/469 | Cost: 0.3132
Epoch: 003/010 | Batch 250/469 | Cost: 0.2441
Epoch: 003/010 | Batch 300/469 | Cost: 0.1871
Epoch: 003/010 | Batch 350/469 | Cost: 0.2613
Epoch: 003/010 | Batch 400/469 | Cost: 0.5206
Epoch: 003/010 | Batch 450/469 | Cost: 0.2660
Epoch: 003/010 training accuracy: 92.66%
Time elapsed: 0.66 min
Epoch: 004/010 | Batch 000/469 | Cost: 0.4542
Epoch: 004/010 | Batch 050/469 | Cost: 0.2275
Epoch: 004/010 | Batch 100/469 | Cost: 0.1715
Epoch: 004/010 | Batch 150/469 | Cost: 0.3198
Epoch: 004/010 | Batch 200/469 | Cost: 0.3090
Epoch: 004/010 | Batch 250/469 | Cost: 0.2205
Epoch: 004/010 | Batch 300/469 | Cost: 0.3599
Epoch: 004/010 | Batch 350/469 | Cost: 0.1999
Epoch: 004/010 | Batch 400/469 | Cost: 0.2565
Epoch: 004/010 | Batch 450/469 | Cost: 0.1807
Epoch: 004/010 training accuracy: 93.05%
Time elapsed: 0.88 min
Epoch: 005/010 | Batch 000/469 | Cost: 0.2549
Epoch: 005/010 | Batch 050/469 | Cost: 0.3040
Epoch: 005/010 | Batch 100/469 | Cost: 0.4656
Epoch: 005/010 | Batch 150/469 | Cost: 0.1699
Epoch: 005/010 | Batch 200/469 | Cost: 0.1872
Epoch: 005/010 | Batch 250/469 | Cost: 0.1881
Epoch: 005/010 | Batch 300/469 | Cost: 0.1282
Epoch: 005/010 | Batch 350/469 | Cost: 0.2686
Epoch: 005/010 | Batch 400/469 | Cost: 0.1851
Epoch: 005/010 | Batch 450/469 | Cost: 0.3126
Epoch: 005/010 training accuracy: 93.41%
Time elapsed: 1.09 min
Epoch: 006/010 | Batch 000/469 | Cost: 0.2751
Epoch: 006/010 | Batch 050/469 | Cost: 0.1481
Epoch: 006/010 | Batch 100/469 | Cost: 0.2913
Epoch: 006/010 | Batch 150/469 | Cost: 0.2820
Epoch: 006/010 | Batch 200/469 | Cost: 0.2055
Epoch: 006/010 | Batch 250/469 | Cost: 0.1963
Epoch: 006/010 | Batch 300/469 | Cost: 0.2684
Epoch: 006/010 | Batch 350/469 | Cost: 0.2504
Epoch: 006/010 | Batch 400/469 | Cost: 0.3913
Epoch: 006/010 | Batch 450/469 | Cost: 0.2620
Epoch: 006/010 training accuracy: 93.39%
Time elapsed: 1.31 min
Epoch: 007/010 | Batch 000/469 | Cost: 0.2334
Epoch: 007/010 | Batch 050/469 | Cost: 0.1654
Epoch: 007/010 | Batch 100/469 | Cost: 0.1298
Epoch: 007/010 | Batch 150/469 | Cost: 0.1913
Epoch: 007/010 | Batch 200/469 | Cost: 0.1353
Epoch: 007/010 | Batch 250/469 | Cost: 0.3256
Epoch: 007/010 | Batch 300/469 | Cost: 0.2409
Epoch: 007/010 | Batch 350/469 | Cost: 0.2213
Epoch: 007/010 | Batch 400/469 | Cost: 0.1131
Epoch: 007/010 | Batch 450/469 | Cost: 0.1687
Epoch: 007/010 training accuracy: 93.44%
Time elapsed: 1.52 min
Epoch: 008/010 | Batch 000/469 | Cost: 0.2207
Epoch: 008/010 | Batch 050/469 | Cost: 0.1072
Epoch: 008/010 | Batch 100/469 | Cost: 0.3151
Epoch: 008/010 | Batch 150/469 | Cost: 0.2214
Epoch: 008/010 | Batch 200/469 | Cost: 0.2757
Epoch: 008/010 | Batch 250/469 | Cost: 0.2425
Epoch: 008/010 | Batch 300/469 | Cost: 0.3749
Epoch: 008/010 | Batch 350/469 | Cost: 0.2208
Epoch: 008/010 | Batch 400/469 | Cost: 0.1776
Epoch: 008/010 | Batch 450/469 | Cost: 0.2534
Epoch: 008/010 training accuracy: 93.65%
Time elapsed: 1.74 min
Epoch: 009/010 | Batch 000/469 | Cost: 0.3788
Epoch: 009/010 | Batch 050/469 | Cost: 0.1630
Epoch: 009/010 | Batch 100/469 | Cost: 0.2910
Epoch: 009/010 | Batch 150/469 | Cost: 0.1935
Epoch: 009/010 | Batch 200/469 | Cost: 0.3148
Epoch: 009/010 | Batch 250/469 | Cost: 0.2260
Epoch: 009/010 | Batch 300/469 | Cost: 0.1939
Epoch: 009/010 | Batch 350/469 | Cost: 0.3062
Epoch: 009/010 | Batch 400/469 | Cost: 0.2351
Epoch: 009/010 | Batch 450/469 | Cost: 0.2692
Epoch: 009/010 training accuracy: 93.78%
Time elapsed: 1.95 min
Epoch: 010/010 | Batch 000/469 | Cost: 0.2247
Epoch: 010/010 | Batch 050/469 | Cost: 0.2508
Epoch: 010/010 | Batch 100/469 | Cost: 0.1383
Epoch: 010/010 | Batch 150/469 | Cost: 0.1728
Epoch: 010/010 | Batch 200/469 | Cost: 0.2493
Epoch: 010/010 | Batch 250/469 | Cost: 0.1492
Epoch: 010/010 | Batch 300/469 | Cost: 0.1611
Epoch: 010/010 | Batch 350/469 | Cost: 0.1530
Epoch: 010/010 | Batch 400/469 | Cost: 0.3355
Epoch: 010/010 | Batch 450/469 | Cost: 0.2393
Epoch: 010/010 training accuracy: 93.62%
Time elapsed: 2.17 min
Total Training Time: 2.17 min
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 92.18%
###Markdown
ResNet with convolutional blocks for resizing The following code implements the residual blocks with skip connections such that the input passed via the shortcut is resized to match the dimensions of the main path's output. Such a residual block is illustrated below:
###Code
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
#########################
### 1st residual block
#########################
# 28x28x1 => 14x14x4
self.conv_1 = torch.nn.Conv2d(in_channels=1,
out_channels=4,
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_1_bn = torch.nn.BatchNorm2d(4)
# 14x14x4 => 14x14x8
self.conv_2 = torch.nn.Conv2d(in_channels=4,
out_channels=8,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_2_bn = torch.nn.BatchNorm2d(8)
# 28x28x1 => 14x14x8
self.conv_shortcut_1 = torch.nn.Conv2d(in_channels=1,
out_channels=8,
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_1_bn = torch.nn.BatchNorm2d(8)
#########################
### 2nd residual block
#########################
# 14x14x8 => 7x7x16
self.conv_3 = torch.nn.Conv2d(in_channels=8,
out_channels=16,
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_3_bn = torch.nn.BatchNorm2d(16)
# 7x7x16 => 7x7x32
self.conv_4 = torch.nn.Conv2d(in_channels=16,
out_channels=32,
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_4_bn = torch.nn.BatchNorm2d(32)
# 14x14x8 => 7x7x32
self.conv_shortcut_2 = torch.nn.Conv2d(in_channels=8,
out_channels=32,
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_2_bn = torch.nn.BatchNorm2d(32)
#########################
### Fully connected
#########################
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
#########################
### 1st residual block
#########################
shortcut = x
out = self.conv_1(x) # 28x28x1 => 14x14x4
out = self.conv_1_bn(out)
out = F.relu(out)
        out = self.conv_2(out) # 14x14x4 => 14x14x8
out = self.conv_2_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_1(shortcut)
shortcut = self.conv_shortcut_1_bn(shortcut)
out += shortcut
out = F.relu(out)
#########################
### 2nd residual block
#########################
shortcut = out
out = self.conv_3(out) # 14x14x8 => 7x7x16
out = self.conv_3_bn(out)
out = F.relu(out)
out = self.conv_4(out) # 7x7x16 => 7x7x32
out = self.conv_4_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_2(shortcut)
shortcut = self.conv_shortcut_2_bn(shortcut)
out += shortcut
out = F.relu(out)
#########################
### Fully connected
#########################
logits = self.linear_1(out.view(-1, 7*7*32))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
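# Illustrative sanity check (a sketch): the strided 3x3 convolution halves the spatial size
# (28 -> 14) and the 1x1 shortcut convolution resizes the skip connection to the same shape,
# which is why the two paths can be summed.
with torch.no_grad():
    dummy = torch.randn(2, 1, 28, 28).to(device)
    main_path = model.conv_2(F.relu(model.conv_1(dummy)))
    resized_shortcut = model.conv_shortcut_1(dummy)
    print(main_path.shape, resized_shortcut.shape)  # both torch.Size([2, 8, 14, 14])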
###Output
_____no_output_____
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_loader), cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/469 | Cost: 2.3318
Epoch: 001/010 | Batch 050/469 | Cost: 0.1491
Epoch: 001/010 | Batch 100/469 | Cost: 0.2615
Epoch: 001/010 | Batch 150/469 | Cost: 0.0847
Epoch: 001/010 | Batch 200/469 | Cost: 0.1427
Epoch: 001/010 | Batch 250/469 | Cost: 0.1739
Epoch: 001/010 | Batch 300/469 | Cost: 0.1558
Epoch: 001/010 | Batch 350/469 | Cost: 0.0684
Epoch: 001/010 | Batch 400/469 | Cost: 0.0717
Epoch: 001/010 | Batch 450/469 | Cost: 0.0785
Epoch: 001/010 training accuracy: 97.90%
Epoch: 002/010 | Batch 000/469 | Cost: 0.0582
Epoch: 002/010 | Batch 050/469 | Cost: 0.1199
Epoch: 002/010 | Batch 100/469 | Cost: 0.0918
Epoch: 002/010 | Batch 150/469 | Cost: 0.0247
Epoch: 002/010 | Batch 200/469 | Cost: 0.0314
Epoch: 002/010 | Batch 250/469 | Cost: 0.0759
Epoch: 002/010 | Batch 300/469 | Cost: 0.0280
Epoch: 002/010 | Batch 350/469 | Cost: 0.0391
Epoch: 002/010 | Batch 400/469 | Cost: 0.0431
Epoch: 002/010 | Batch 450/469 | Cost: 0.0455
Epoch: 002/010 training accuracy: 98.16%
Epoch: 003/010 | Batch 000/469 | Cost: 0.0303
Epoch: 003/010 | Batch 050/469 | Cost: 0.0433
Epoch: 003/010 | Batch 100/469 | Cost: 0.0465
Epoch: 003/010 | Batch 150/469 | Cost: 0.0243
Epoch: 003/010 | Batch 200/469 | Cost: 0.0258
Epoch: 003/010 | Batch 250/469 | Cost: 0.0403
Epoch: 003/010 | Batch 300/469 | Cost: 0.1024
Epoch: 003/010 | Batch 350/469 | Cost: 0.0241
Epoch: 003/010 | Batch 400/469 | Cost: 0.0299
Epoch: 003/010 | Batch 450/469 | Cost: 0.0354
Epoch: 003/010 training accuracy: 98.08%
Epoch: 004/010 | Batch 000/469 | Cost: 0.0471
Epoch: 004/010 | Batch 050/469 | Cost: 0.0954
Epoch: 004/010 | Batch 100/469 | Cost: 0.0073
Epoch: 004/010 | Batch 150/469 | Cost: 0.0531
Epoch: 004/010 | Batch 200/469 | Cost: 0.0493
Epoch: 004/010 | Batch 250/469 | Cost: 0.1070
Epoch: 004/010 | Batch 300/469 | Cost: 0.0205
Epoch: 004/010 | Batch 350/469 | Cost: 0.0270
Epoch: 004/010 | Batch 400/469 | Cost: 0.0817
Epoch: 004/010 | Batch 450/469 | Cost: 0.0182
Epoch: 004/010 training accuracy: 98.70%
Epoch: 005/010 | Batch 000/469 | Cost: 0.0691
Epoch: 005/010 | Batch 050/469 | Cost: 0.0326
Epoch: 005/010 | Batch 100/469 | Cost: 0.0041
Epoch: 005/010 | Batch 150/469 | Cost: 0.0774
Epoch: 005/010 | Batch 200/469 | Cost: 0.1223
Epoch: 005/010 | Batch 250/469 | Cost: 0.0329
Epoch: 005/010 | Batch 300/469 | Cost: 0.0479
Epoch: 005/010 | Batch 350/469 | Cost: 0.0696
Epoch: 005/010 | Batch 400/469 | Cost: 0.0504
Epoch: 005/010 | Batch 450/469 | Cost: 0.0736
Epoch: 005/010 training accuracy: 98.38%
Epoch: 006/010 | Batch 000/469 | Cost: 0.0318
Epoch: 006/010 | Batch 050/469 | Cost: 0.0303
Epoch: 006/010 | Batch 100/469 | Cost: 0.0267
Epoch: 006/010 | Batch 150/469 | Cost: 0.0912
Epoch: 006/010 | Batch 200/469 | Cost: 0.0131
Epoch: 006/010 | Batch 250/469 | Cost: 0.0164
Epoch: 006/010 | Batch 300/469 | Cost: 0.0109
Epoch: 006/010 | Batch 350/469 | Cost: 0.0699
Epoch: 006/010 | Batch 400/469 | Cost: 0.0030
Epoch: 006/010 | Batch 450/469 | Cost: 0.0237
Epoch: 006/010 training accuracy: 98.74%
Epoch: 007/010 | Batch 000/469 | Cost: 0.0214
Epoch: 007/010 | Batch 050/469 | Cost: 0.0097
Epoch: 007/010 | Batch 100/469 | Cost: 0.0292
Epoch: 007/010 | Batch 150/469 | Cost: 0.0648
Epoch: 007/010 | Batch 200/469 | Cost: 0.0044
Epoch: 007/010 | Batch 250/469 | Cost: 0.0557
Epoch: 007/010 | Batch 300/469 | Cost: 0.0139
Epoch: 007/010 | Batch 350/469 | Cost: 0.0809
Epoch: 007/010 | Batch 400/469 | Cost: 0.0285
Epoch: 007/010 | Batch 450/469 | Cost: 0.0050
Epoch: 007/010 training accuracy: 98.82%
Epoch: 008/010 | Batch 000/469 | Cost: 0.0890
Epoch: 008/010 | Batch 050/469 | Cost: 0.0685
Epoch: 008/010 | Batch 100/469 | Cost: 0.0274
Epoch: 008/010 | Batch 150/469 | Cost: 0.0187
Epoch: 008/010 | Batch 200/469 | Cost: 0.0268
Epoch: 008/010 | Batch 250/469 | Cost: 0.1681
Epoch: 008/010 | Batch 300/469 | Cost: 0.0167
Epoch: 008/010 | Batch 350/469 | Cost: 0.0518
Epoch: 008/010 | Batch 400/469 | Cost: 0.0138
Epoch: 008/010 | Batch 450/469 | Cost: 0.0270
Epoch: 008/010 training accuracy: 99.08%
Epoch: 009/010 | Batch 000/469 | Cost: 0.0458
Epoch: 009/010 | Batch 050/469 | Cost: 0.0039
Epoch: 009/010 | Batch 100/469 | Cost: 0.0597
Epoch: 009/010 | Batch 150/469 | Cost: 0.0120
Epoch: 009/010 | Batch 200/469 | Cost: 0.0580
Epoch: 009/010 | Batch 250/469 | Cost: 0.0280
Epoch: 009/010 | Batch 300/469 | Cost: 0.0570
Epoch: 009/010 | Batch 350/469 | Cost: 0.0831
Epoch: 009/010 | Batch 400/469 | Cost: 0.0732
Epoch: 009/010 | Batch 450/469 | Cost: 0.0327
Epoch: 009/010 training accuracy: 99.05%
Epoch: 010/010 | Batch 000/469 | Cost: 0.0312
Epoch: 010/010 | Batch 050/469 | Cost: 0.0130
Epoch: 010/010 | Batch 100/469 | Cost: 0.0052
Epoch: 010/010 | Batch 150/469 | Cost: 0.0188
Epoch: 010/010 | Batch 200/469 | Cost: 0.0362
Epoch: 010/010 | Batch 250/469 | Cost: 0.1085
Epoch: 010/010 | Batch 300/469 | Cost: 0.0004
Epoch: 010/010 | Batch 350/469 | Cost: 0.0299
Epoch: 010/010 | Batch 400/469 | Cost: 0.0769
Epoch: 010/010 | Batch 450/469 | Cost: 0.0247
Epoch: 010/010 training accuracy: 98.87%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
###Output
Test accuracy: 97.91%
###Markdown
ResNet with convolutional blocks for resizing (using a helper class) This is the same network as above but uses a `ResidualBlock` helper class.
###Code
class ResidualBlock(torch.nn.Module):
def __init__(self, channels):
super(ResidualBlock, self).__init__()
self.conv_1 = torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[1],
kernel_size=(3, 3),
stride=(2, 2),
padding=1)
self.conv_1_bn = torch.nn.BatchNorm2d(channels[1])
self.conv_2 = torch.nn.Conv2d(in_channels=channels[1],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(1, 1),
padding=0)
self.conv_2_bn = torch.nn.BatchNorm2d(channels[2])
self.conv_shortcut_1 = torch.nn.Conv2d(in_channels=channels[0],
out_channels=channels[2],
kernel_size=(1, 1),
stride=(2, 2),
padding=0)
self.conv_shortcut_1_bn = torch.nn.BatchNorm2d(channels[2])
def forward(self, x):
shortcut = x
out = self.conv_1(x)
out = self.conv_1_bn(out)
out = F.relu(out)
out = self.conv_2(out)
out = self.conv_2_bn(out)
# match up dimensions using a linear function (no relu)
shortcut = self.conv_shortcut_1(shortcut)
shortcut = self.conv_shortcut_1_bn(shortcut)
out += shortcut
out = F.relu(out)
return out
##########################
### MODEL
##########################
class ConvNet(torch.nn.Module):
def __init__(self, num_classes):
super(ConvNet, self).__init__()
self.residual_block_1 = ResidualBlock(channels=[1, 4, 8])
self.residual_block_2 = ResidualBlock(channels=[8, 16, 32])
self.linear_1 = torch.nn.Linear(7*7*32, num_classes)
def forward(self, x):
out = self.residual_block_1.forward(x)
out = self.residual_block_2.forward(out)
logits = self.linear_1(out.view(-1, 7*7*32))
probas = F.softmax(logits, dim=1)
return logits, probas
torch.manual_seed(random_seed)
model = ConvNet(num_classes=num_classes)
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
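# Illustrative check of a single helper block (a sketch): with channels=[1, 4, 8] it maps a
# 1x28x28 input to an 8x14x14 output, matching the first block of the model above.
with torch.no_grad():
    block = ResidualBlock(channels=[1, 4, 8]).to(device)
    print(block(torch.randn(2, 1, 28, 28).to(device)).shape)  # torch.Size([2, 8, 14, 14])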
###Output
_____no_output_____
###Markdown
Training
###Code
def compute_accuracy(model, data_loader):
correct_pred, num_examples = 0, 0
for i, (features, targets) in enumerate(data_loader):
features = features.to(device)
targets = targets.to(device)
logits, probas = model(features)
_, predicted_labels = torch.max(probas, 1)
num_examples += targets.size(0)
correct_pred += (predicted_labels == targets).sum()
return correct_pred.float()/num_examples * 100
for epoch in range(num_epochs):
model = model.train()
for batch_idx, (features, targets) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits, probas = model(features)
cost = F.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(train_dataset)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_loader)))
###Output
Epoch: 001/010 | Batch 000/468 | Cost: 2.3318
Epoch: 001/010 | Batch 050/468 | Cost: 0.1491
Epoch: 001/010 | Batch 100/468 | Cost: 0.2615
Epoch: 001/010 | Batch 150/468 | Cost: 0.0847
Epoch: 001/010 | Batch 200/468 | Cost: 0.1427
Epoch: 001/010 | Batch 250/468 | Cost: 0.1739
Epoch: 001/010 | Batch 300/468 | Cost: 0.1558
Epoch: 001/010 | Batch 350/468 | Cost: 0.0684
Epoch: 001/010 | Batch 400/468 | Cost: 0.0717
Epoch: 001/010 | Batch 450/468 | Cost: 0.0785
Epoch: 001/010 training accuracy: 97.90%
Epoch: 002/010 | Batch 000/468 | Cost: 0.0582
Epoch: 002/010 | Batch 050/468 | Cost: 0.1199
Epoch: 002/010 | Batch 100/468 | Cost: 0.0918
Epoch: 002/010 | Batch 150/468 | Cost: 0.0247
Epoch: 002/010 | Batch 200/468 | Cost: 0.0314
Epoch: 002/010 | Batch 250/468 | Cost: 0.0759
Epoch: 002/010 | Batch 300/468 | Cost: 0.0280
Epoch: 002/010 | Batch 350/468 | Cost: 0.0391
Epoch: 002/010 | Batch 400/468 | Cost: 0.0431
Epoch: 002/010 | Batch 450/468 | Cost: 0.0455
Epoch: 002/010 training accuracy: 98.16%
Epoch: 003/010 | Batch 000/468 | Cost: 0.0303
Epoch: 003/010 | Batch 050/468 | Cost: 0.0433
Epoch: 003/010 | Batch 100/468 | Cost: 0.0465
Epoch: 003/010 | Batch 150/468 | Cost: 0.0243
Epoch: 003/010 | Batch 200/468 | Cost: 0.0258
Epoch: 003/010 | Batch 250/468 | Cost: 0.0403
Epoch: 003/010 | Batch 300/468 | Cost: 0.1024
Epoch: 003/010 | Batch 350/468 | Cost: 0.0241
Epoch: 003/010 | Batch 400/468 | Cost: 0.0299
Epoch: 003/010 | Batch 450/468 | Cost: 0.0354
Epoch: 003/010 training accuracy: 98.08%
Epoch: 004/010 | Batch 000/468 | Cost: 0.0471
Epoch: 004/010 | Batch 050/468 | Cost: 0.0954
Epoch: 004/010 | Batch 100/468 | Cost: 0.0073
Epoch: 004/010 | Batch 150/468 | Cost: 0.0531
Epoch: 004/010 | Batch 200/468 | Cost: 0.0493
Epoch: 004/010 | Batch 250/468 | Cost: 0.1070
Epoch: 004/010 | Batch 300/468 | Cost: 0.0205
Epoch: 004/010 | Batch 350/468 | Cost: 0.0270
Epoch: 004/010 | Batch 400/468 | Cost: 0.0817
Epoch: 004/010 | Batch 450/468 | Cost: 0.0182
Epoch: 004/010 training accuracy: 98.70%
Epoch: 005/010 | Batch 000/468 | Cost: 0.0691
Epoch: 005/010 | Batch 050/468 | Cost: 0.0326
Epoch: 005/010 | Batch 100/468 | Cost: 0.0041
Epoch: 005/010 | Batch 150/468 | Cost: 0.0774
Epoch: 005/010 | Batch 200/468 | Cost: 0.1223
Epoch: 005/010 | Batch 250/468 | Cost: 0.0329
Epoch: 005/010 | Batch 300/468 | Cost: 0.0479
Epoch: 005/010 | Batch 350/468 | Cost: 0.0696
Epoch: 005/010 | Batch 400/468 | Cost: 0.0504
Epoch: 005/010 | Batch 450/468 | Cost: 0.0736
Epoch: 005/010 training accuracy: 98.38%
Epoch: 006/010 | Batch 000/468 | Cost: 0.0318
Epoch: 006/010 | Batch 050/468 | Cost: 0.0303
Epoch: 006/010 | Batch 100/468 | Cost: 0.0267
Epoch: 006/010 | Batch 150/468 | Cost: 0.0912
Epoch: 006/010 | Batch 200/468 | Cost: 0.0131
Epoch: 006/010 | Batch 250/468 | Cost: 0.0164
Epoch: 006/010 | Batch 300/468 | Cost: 0.0109
Epoch: 006/010 | Batch 350/468 | Cost: 0.0699
Epoch: 006/010 | Batch 400/468 | Cost: 0.0030
Epoch: 006/010 | Batch 450/468 | Cost: 0.0237
Epoch: 006/010 training accuracy: 98.74%
Epoch: 007/010 | Batch 000/468 | Cost: 0.0214
Epoch: 007/010 | Batch 050/468 | Cost: 0.0097
Epoch: 007/010 | Batch 100/468 | Cost: 0.0292
Epoch: 007/010 | Batch 150/468 | Cost: 0.0648
Epoch: 007/010 | Batch 200/468 | Cost: 0.0044
Epoch: 007/010 | Batch 250/468 | Cost: 0.0557
Epoch: 007/010 | Batch 300/468 | Cost: 0.0139
Epoch: 007/010 | Batch 350/468 | Cost: 0.0809
Epoch: 007/010 | Batch 400/468 | Cost: 0.0285
Epoch: 007/010 | Batch 450/468 | Cost: 0.0050
Epoch: 007/010 training accuracy: 98.82%
Epoch: 008/010 | Batch 000/468 | Cost: 0.0890
Epoch: 008/010 | Batch 050/468 | Cost: 0.0685
Epoch: 008/010 | Batch 100/468 | Cost: 0.0274
Epoch: 008/010 | Batch 150/468 | Cost: 0.0187
Epoch: 008/010 | Batch 200/468 | Cost: 0.0268
Epoch: 008/010 | Batch 250/468 | Cost: 0.1681
Epoch: 008/010 | Batch 300/468 | Cost: 0.0167
Epoch: 008/010 | Batch 350/468 | Cost: 0.0518
Epoch: 008/010 | Batch 400/468 | Cost: 0.0138
Epoch: 008/010 | Batch 450/468 | Cost: 0.0270
Epoch: 008/010 training accuracy: 99.08%
Epoch: 009/010 | Batch 000/468 | Cost: 0.0458
Epoch: 009/010 | Batch 050/468 | Cost: 0.0039
Epoch: 009/010 | Batch 100/468 | Cost: 0.0597
Epoch: 009/010 | Batch 150/468 | Cost: 0.0120
Epoch: 009/010 | Batch 200/468 | Cost: 0.0580
Epoch: 009/010 | Batch 250/468 | Cost: 0.0280
Epoch: 009/010 | Batch 300/468 | Cost: 0.0570
Epoch: 009/010 | Batch 350/468 | Cost: 0.0831
Epoch: 009/010 | Batch 400/468 | Cost: 0.0732
Epoch: 009/010 | Batch 450/468 | Cost: 0.0327
Epoch: 009/010 training accuracy: 99.05%
Epoch: 010/010 | Batch 000/468 | Cost: 0.0312
Epoch: 010/010 | Batch 050/468 | Cost: 0.0130
Epoch: 010/010 | Batch 100/468 | Cost: 0.0052
Epoch: 010/010 | Batch 150/468 | Cost: 0.0188
Epoch: 010/010 | Batch 200/468 | Cost: 0.0362
Epoch: 010/010 | Batch 250/468 | Cost: 0.1085
Epoch: 010/010 | Batch 300/468 | Cost: 0.0004
Epoch: 010/010 | Batch 350/468 | Cost: 0.0299
Epoch: 010/010 | Batch 400/468 | Cost: 0.0769
Epoch: 010/010 | Batch 450/468 | Cost: 0.0247
Epoch: 010/010 training accuracy: 98.87%
###Markdown
Evaluation
###Code
print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader)))
%watermark -iv
###Output
numpy 1.15.4
torch 1.0.0
AATCC/lab-report/w3/practice-leetcode-labs-w3.ipynb | ###Markdown
LeetCode Links 1. https://leetcode.com/ 2. https://leetcode-cn.com/ Demo 1. https://www.onlinegdb.com/ Details LeetCode: 16, 17, 19 LeetCode 16. 3Sum Closest Given an integer array nums of length n and an integer target, find three integers in nums such that the sum is closest to target. Return the sum of the three integers. You may assume that each input would have exactly one solution. Example 1:```Input: nums = [-1,2,1,-4], target = 1Output: 2Explanation: The sum that is closest to the target is 2. (-1 + 2 + 1 = 2).```Example 2:```Input: nums = [0,0,0], target = 1Output: 0```Constraints:- 3 <= nums.length <= 1000- -1000 <= nums[i] <= 1000- -10^4 <= target <= 10^4 Solution approach Sort the array and squeeze with two pointers. Scan i from the start; to handle duplicates, compare nums[i] with the previous value and skip ahead until it differs (a map-based count would also work). For each i, set j = i + 1 and k = n - 1 (the largest value after sorting), then move j forward and k backward, keeping track of the sum closest to target and returning target immediately on an exact match. A brute-force solution with three nested loops that checks every combination also works, just more slowly.
###Code
from typing import List
class Solution:
def threeSumClosest(self, nums: List[int], target: int) -> int:
n = len(nums)
nums.sort()
        re_min = 0 # smallest difference seen so far
for i in range(n):
low = i+1
high = n-1
while low < high:
three_sum = nums[i] + nums[low] + nums[high]
                x = target - three_sum # difference between target and the current triple's sum
if re_min == 0:
re_min = abs(x)
                    sum_min = three_sum # sum_min is the closest sum so far
if abs(x) < re_min:
re_min = abs(x)
sum_min = three_sum
if three_sum == target:
return target
elif three_sum < target:
low += 1
else:
high -= 1
return sum_min
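# Quick check against the problem's examples (illustrative usage):
print(Solution().threeSumClosest([-1, 2, 1, -4], 1))  # expected 2
print(Solution().threeSumClosest([0, 0, 0], 1))       # expected 0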
###Output
_____no_output_____
###Markdown
LeetCode 17. Letter Combinations of a Phone Number Given a string containing digits from 2-9 inclusive, return all possible letter combinations that the number could represent. Return the answer in any order. A mapping of digit to letters (just like on the telephone buttons) is given below. Note that 1 does not map to any letters. Example 1:```Input: digits = "23"Output: ["ad","ae","af","bd","be","bf","cd","ce","cf"]```Example 2:```Input: digits = ""Output: []```Example 3:```Input: digits = "2"Output: ["a","b","c"]```Constraints:- 0 <= digits.length <= 4- digits[i] is a digit in the range ['2', '9']. Solution approach DFS (recursive depth-first search); the reference below also gives an equivalent dynamic-programming formulation that builds the combinations digit by digit, which is what the cell below uses. Reference 1. https://www.bilibili.com/video/BV1cy4y167mM/```class Solution(object): def letterCombinations(self, digits): """ Dynamic programming. dp[i]: all combinations for the first i letters. Since dp[i] only depends on dp[i-1], a single variable can replace the list to reduce space. :type digits: str :rtype: List[str] """ if not digits: return [] d = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} n = len(digits) dp = [[] for _ in range(n)] dp[0] = [x for x in d[digits[0]]] for i in range(1, n): dp[i] = [x + y for x in dp[i - 1] for y in d[digits[i]]] return dp[-1] def letterCombinations2(self, digits): """ Same idea, but a single variable replaces the dp list to lower the space complexity. :type digits: str :rtype: List[str] """ if not digits: return [] d = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} n = len(digits) res = [''] for i in range(n): res = [x + y for x in res for y in d[digits[i]]] return res def letterCombinations3(self, digits): """ Recursion. :param digits: :return: """ d = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl', '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'} if not digits: return [] if len(digits) == 1: return [x for x in d[digits[0]]] return [x + y for x in d[digits[0]] for y in self.letterCombinations3(digits[1:])]```
###Code
class Solution(object):
def letterCombinations(self, digits):
if not digits:
return []
d = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
'6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
n = len(digits)
dp = [[] for _ in range(n)]
dp[0] = [x for x in d[digits[0]]]
for i in range(1, n):
dp[i] = [x + y for x in dp[i - 1] for y in d[digits[i]]]
return dp[-1]
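# Quick check against the problem's examples (illustrative usage):
print(Solution().letterCombinations("23"))  # ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf']
print(Solution().letterCombinations(""))    # []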
###Output
_____no_output_____
###Markdown
LeetCode 19. Remove Nth Node From End of List Given the head of a linked list, remove the $n^{th}$ node from the end of the list and return its head. Example 1:```Input: head = [1,2,3,4,5], n = 2Output: [1,2,3,5]```Example 2:```Input: head = [1], n = 1Output: []```Example 3:```Input: head = [1,2], n = 1Output: [1]```Constraints:- The number of nodes in the list is sz.- 1 <= sz <= 30- 0 <= Node.val <= 100- 1 <= n <= sz Follow up: Could you do this in one pass? Solution approach - One option is to traverse the list once to get its total length, then walk to the node just before the target and unlink it; the special case of deleting the head node has to be handled separately. - A simpler one-pass solution keeps two pointers n nodes apart and moves them forward at the same speed: when the leading pointer reaches the end, the trailing pointer sits just before the node to remove (the nth from the end). Reference https://stackoverflow.com/questions/61610160/remove-nth-node-from-end-of-listleetcode-python
###Code
"""
class Solution:
def removeNthFromEnd(self, head: ListNode, n: int) -> ListNode:
head_dummy = ListNode()
head_dummy.next = head
slow, fast = head_dummy, head_dummy
        while(n!=0): # first move fast forward n steps
fast = fast.next
n -= 1
while(fast.next!=None):
slow = slow.next
fast = fast.next
        # once fast has reached the end, slow.next is the Nth node from the end
        slow.next = slow.next.next # delete it
return head_dummy.next
"""
class Solution:
def removeNthFromEnd(self, head, n):
fast = slow = head
for _ in range(n):
fast = fast.next
if not fast:
return head.next
while fast.next:
fast = fast.next
slow = slow.next
slow.next = slow.next.next
return head
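# Illustrative round trip (a sketch; ListNode is assumed to follow the usual LeetCode
# singly-linked-list definition, since the judged environment provides it):
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def build_list(values):
    dummy = ListNode()
    cur = dummy
    for v in values:
        cur.next = ListNode(v)
        cur = cur.next
    return dummy.next

def to_values(node):
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out

print(to_values(Solution().removeNthFromEnd(build_list([1, 2, 3, 4, 5]), 2)))  # [1, 2, 3, 5]
print(to_values(Solution().removeNthFromEnd(build_list([1]), 1)))              # []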
###Output
_____no_output_____ |
NER/Acronyms-from-supp.ipynb | ###Markdown
Reduce data to drugs with 4 consecutive uppercase letters
###Code
import re  # needed for re.findall below; supp_list is assumed to come from earlier cells of this notebook
pattern_4 = '[A-Z][A-Z][A-Z][A-Z]'
acro_4 = []
for name in supp_list:
if re.findall(pattern_4, name):
acro_4.append(name)
acro_4
len(acro_4)
# Turn these names into new training-data records in this format
# {"id": 9, "document_id": "1253656", "ner_tags": ["O", "O", "O", "O", "O", "O", "O", "O", "B-Chemical", "O", "O", "O", "O", "O", "O"], "tokens": ["The", "sperm", "motion", "parameter", "most", "strongly", "associated", "with", "1N", "was", "straight", "-", "line", "velocity", "."], "spans": [[1397, 1400], [1401, 1406], [1407, 1413], [1414, 1423], [1424, 1428], [1429, 1437], [1438, 1448], [1449, 1453], [1454, 1456], [1457, 1460], [1461, 1469], [1469, 1470], [1470, 1474], [1475, 1483], [1483, 1484]]}
# id starts at 18910
# document_id start at 6032168
# span starts at 36022
import json
import nltk
# nltk.download('punkt')
from nltk.tokenize import word_tokenize
i = 18910
document_id = 6032168
span_0 = 36022
def tokenize(name):
if '-' in name:
t = name.split('-')
tokens = ['-'] * (len(t) * 2 - 1)
tokens[0::2] = t
res = []
for token in tokens:
res += word_tokenize(token)
tokens = res
else:
tokens = word_tokenize(name)
return tokens
def get_spans(tokens, span_0):
spans = []
for t in tokens:
if t not in [',', '-']:
spans += [[span_0, span_0 + len(t)]]
span_0 = span_0 + len(t) + 1
else:
spans += [[span_0 - 1, span_0]]
span_0 += 1
return spans
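# Illustrative check on a hypothetical name (not taken from the supplement list):
example_tokens = tokenize('ABCD-123')
print(example_tokens)                # e.g. ['ABCD', '-', '123']
print(get_spans(example_tokens, 0))  # synthetic character offsets for those tokens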
acro_list = []
for name in acro_4:
dictionary = {}
dictionary["id"] = i
dictionary['document_id'] = document_id
dictionary['ner_tags'] = ["B-Chemical" if i == 0 else "I-Chemical" for i, t in enumerate(tokenize(name))]
dictionary['tokens'] = tokenize(name)
dictionary['spans'] = get_spans(dictionary['tokens'], span_0)
i += 1
document_id += 1
span_0 = dictionary['spans'][-1][-1] + 1
acro_list.append(dictionary)
# acro_list
acro_list[-1]
acro_list.append({"id": 36781, "document_id": 6050038, "ner_tags": ["O", "O", "O"], "tokens": ["Materials", "and", "Methods"], "spans": [[449369, 449378], [449379, 449382], [449383, 449390]]})
len(acro_list)
with open('add_train_1015.json', 'w') as f:
# f.write(json.dumps(i) for i in acro_list[:10] + '\n')
for d in acro_list:
f.write(json.dumps(d))
f.write('\n')
###Output
_____no_output_____ |
notebooks/analysis.ipynb | ###Markdown
Simple completeness stats
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
detected = np.load('../outputs/detected.npy')
not_detected = np.load('../outputs/not_detected.npy')
alpha = 1
lw = 2
props = dict(lw=lw, alpha=alpha, histtype='step')
bins = 50
plt.figure(figsize=(4, 3))
plt.hist(detected[:, 0] * 24 * 60, bins=bins, label='Detected', **props)
plt.hist(not_detected[:, 0] * 24 * 60, bins=bins, label='Not Detected',**props)
plt.legend(loc='lower right')
plt.ylabel('Frequency')
ax = plt.gca()
ax2 = ax.twiny()
ax.set_xlabel('Period [min]')
ax2.set_xlabel('Period [hours]')
for axis in [ax, ax2]:
axis.set_xlim([180, 12*60])
ax2.set_xticklabels(["{0:.1f}".format(float(i)/60)
for i in ax2.get_xticks()])
plt.savefig('plots/period.pdf', bbox_inches='tight')
plt.show()
plt.figure(figsize=(4, 3))
plt.hist(detected[:, 1], bins=bins, label='Detected', **props)
plt.hist(not_detected[:, 1], bins=bins, label='Not Detected', **props)
plt.xlabel('Radius [km]')
plt.ylabel('Frequency')
plt.xlim([500, 2500])
plt.legend(loc='center right')
plt.savefig('plots/radius.pdf', bbox_inches='tight')
plt.show()
plt.hist(detected[:, 2], log=True)
plt.hist(not_detected[:, 2], log=True)
plt.xlabel('S/N')
plt.show()
26/6
###Output
_____no_output_____
###Markdown
Analyzing a run Below is some helper code for quickly visualizing and analyzing a set of experiments. Using the helper function in `analyze.py`, each experiment is loaded into a pandas DataFrame, with the metrics reported (e.g. SDR, SIR, SAR) for each file in the test dataset. All of the associated configuration info for each experiment is also reported alongside the metrics, making it easy to test the effect of different parameters on the performance. Since the test script altered the number of layers and the bidirectionality of the recurrent stack, the analysis below shows the effects of those parameters. Loading the dataframes
###Code
from scripts import analyze
from runners.utils import load_yaml
import pandas as pd
import matplotlib.pyplot as plt
jobs = load_yaml('../experiments/out/music_dpcl/analyze.yml')['jobs']
data = []
for _job in jobs:
_data, _config, _exp = analyze.main(_job['config'])
data.append(_data)
###Output
2020-01-12:17:16:51,724 INFO [experiment_utils.py:50] Experiment is already set up @ /home/pseetharaman/artifacts//cookiecutter/music/5f3e44cd6ed14bb3be13be8c44250f12!
COMET INFO: old comet version (2.0.18) detected. current: 3.0.2 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
COMET INFO: Experiment is live on comet.ml https://www.comet.ml/pseeth/cookiecutter-music/5f3e44cd6ed14bb3be13be8c44250f12
2020-01-12:17:16:52,585 INFO [experiment_utils.py:50] Experiment is already set up @ /home/pseetharaman/artifacts//cookiecutter/music/0707b7df620742f6a184b8340253088c!
COMET INFO: old comet version (2.0.18) detected. current: 3.0.2 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
COMET INFO: Experiment is live on comet.ml https://www.comet.ml/pseeth/cookiecutter-music/0707b7df620742f6a184b8340253088c
2020-01-12:17:16:53,440 INFO [experiment_utils.py:50] Experiment is already set up @ /home/pseetharaman/artifacts//cookiecutter/music/1231bde463514088a4a6b32f305b480e!
COMET INFO: old comet version (2.0.18) detected. current: 3.0.2 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
COMET INFO: Experiment is live on comet.ml https://www.comet.ml/pseeth/cookiecutter-music/1231bde463514088a4a6b32f305b480e
2020-01-12:17:16:54,338 INFO [experiment_utils.py:50] Experiment is already set up @ /home/pseetharaman/artifacts//cookiecutter/music/142f4613f4c34f4aa96e19fceed1efc8!
COMET INFO: old comet version (2.0.18) detected. current: 3.0.2 please update your comet lib with command: `pip install --no-cache-dir --upgrade comet_ml`
COMET INFO: Experiment is live on comet.ml https://www.comet.ml/pseeth/cookiecutter-music/142f4613f4c34f4aa96e19fceed1efc8
###Markdown
Listing all the possible keys contained in the DataFrame
###Code
data[0].keys()  # columns of a single experiment's DataFrame (metrics plus flattened config)
###Output
_____no_output_____
###Markdown
Effect of bidirectionality
###Code
data = pd.concat(data)
data.boxplot(column='SDR', by='model_config_modules_recurrent_stack_args_bidirectional')
###Output
_____no_output_____
###Markdown
Effect of number of layers
###Code
data.boxplot(column='SDR', by='model_config_modules_recurrent_stack_args_num_layers')
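# Compact numeric companion to the boxplots (a sketch; the grouping columns are the flattened
# config keys loaded by analyze.main above):
print(data.groupby(['model_config_modules_recurrent_stack_args_num_layers',
                    'model_config_modules_recurrent_stack_args_bidirectional'])['SDR'].mean())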
###Output
_____no_output_____
###Markdown
Statistics
###Code
issuetypes = [{
**_.pick(t, ['tag', 'name', 'count']),
'namecount': f"{t['name']} ({t['count']})",
'subcategory': _.find(labelOptions[0]['subcategories'], lambda x: x['subcategory'] == t['subcategories'][0])['name']
} for t in labelTags['tags'] if t['category'] == 'fault']
df_issuetypes = pd.DataFrame(issuetypes)
df_issuetypes = pd.DataFrame(issuetypes)
df_issuetypes = df_issuetypes.sort_values(['subcategory', 'count'], ascending=[True, False])
subcat_order = _.map_([o for o in labelOptions[0]['subcategories'] if o['group'] == 'pipeline'], 'name')
df_issuetypes = df_issuetypes.set_index('subcategory')
df_issuetypes = df_issuetypes.loc[subcat_order].reset_index(level=0)
issuetypes_base = alt.Chart(df_issuetypes).mark_bar().encode(
x=alt.X('count:Q', title=None, scale=alt.Scale(domain=[0, 200])),
y=alt.Y('namecount:N', sort='-x', title=None)
)
issuetypes_data = issuetypes_base.transform_filter(
(datum.subcategory == 'Data')
).mark_bar(color='gray').properties(title='Data: Issues related to the underlying data')
issuetypes_derive = issuetypes_base.transform_filter(
(datum.subcategory == 'Data Transformation')
).mark_bar(color='lightskyblue').properties(title='Data Transformation: Issues introduced when processing data')
issuetypes_graph = issuetypes_base.transform_filter(
(datum.subcategory == 'Graph Drawing')
).mark_bar(color='ForestGreen').properties(title='Graph Drawing: Issues Introduced in the graph drawing process').encode(
issuetypes_base.encoding.x.copy()
)
issuetypes_graph.encoding.x.title = 'Number of Visualizations (Non-exclusive)'
issuetypes_visual = issuetypes_base.transform_filter(
(datum.subcategory == 'Reading')
).mark_bar(color='gold').properties(title='Reading: Difficulties in reading visualizations')
issuetypes_perception = issuetypes_base.transform_filter(
(datum.subcategory == 'Perception')
).mark_bar(color='brown').properties(title='Perception: Issues related to human perception')
issuetypes_logical = issuetypes_base.transform_filter(
(datum.subcategory == 'Message')
).mark_bar(color='purple').properties(title='Message: Visualizations that are trying to convey illogical message').encode(
issuetypes_base.encoding.x.copy()
)
issuetypes_logical.encoding.x.title = 'Number of Visualizations (Non-exclusive)'
issuetypes_chart = (issuetypes_data & issuetypes_derive & issuetypes_graph) | (issuetypes_visual & issuetypes_perception & issuetypes_logical)
issuetypes_chart
# save(issuetypes_chart, f"{charts_dir/'issuetypes.svg'}")
# await svg2png(f"{charts_dir/'issuetypes.svg'}", f"{charts_dir/'issuetypes.png'}")
datatypes = [{
**_.pick(t, ['tag', 'name', 'count']),
'namecount': f"{t['name']} ({t['count']})",
'subcategory': _.find(labelOptions[2]['subcategories'], lambda x: x['subcategory'] == t['subcategory'])['name']
} for t in labelTags['tags'] if t['category'] == 'data']
df_datatypes = pd.DataFrame(datatypes)
df_datatypes = df_datatypes.sort_values(['subcategory', 'count'], ascending=[True, False])
subcat_order = _.map_(labelOptions[2]['subcategories'], 'name')
df_datatypes = df_datatypes.set_index('subcategory')
df_datatypes = df_datatypes.loc[subcat_order].reset_index(level=0)
datatypes_chart = alt.Chart(df_datatypes).mark_bar().encode(
x=alt.X(field='count', type='quantitative', title='Number of Visualizations (Non-exclusive)', scale=alt.Scale(domain=[0, 750])),
y=alt.Y(field='namecount', type='nominal', sort=list(df_datatypes['name']), title=None),
color=alt.Color(field='subcategory', type='nominal', legend=alt.Legend(title=None), sort=subcat_order)
)
datatypes_chart
# save(datatypes_chart, f"{charts_dir/'datatypes.svg'}")
# await svg2png(f"{charts_dir/'datatypes.svg'}", f"{charts_dir/'datatypes.png'}")
charttypes = [{
**_.pick(t, ['tag', 'name', 'count']),
'namecount': f"{t['name']} ({t['count']})",
'subcategory': _.find(labelOptions[1]['subcategories'], lambda x: x['subcategory'] == t['subcategory'])['name']
} for t in labelTags['tags'] if t['category'] == 'form']
df_charttypes = pd.DataFrame(charttypes)
df_charttypes = df_charttypes.sort_values(['subcategory', 'count'], ascending=[True, False])
subcat_order = _.map_(labelOptions[1]['subcategories'], 'name')
df_charttypes = df_charttypes.set_index('subcategory')
df_charttypes = df_charttypes.loc[subcat_order].reset_index(level=0)
charttypes_chart = alt.Chart(df_charttypes).mark_bar().encode(
x=alt.X(field='count', type='quantitative', title='Number of Visualizations (Non-exclusive for multiple view visualizations)', scale=alt.Scale(domain=[0, 300])),
y=alt.Y(field='namecount', type='nominal', sort=list(df_charttypes['name']), title=None),
color=alt.Color(field='subcategory', type='nominal', legend=alt.Legend(title=None), sort=subcat_order)
)
charttypes_chart
# save(charttypes_chart, f"{charts_dir/'charttypes.svg'}")
# await svg2png(f"{charts_dir/'charttypes.svg'}", f"{charts_dir/'charttypes.png'}")
domains = [{
**_.pick(t, ['tag', 'name', 'count']),
'namecount': f"{t['name']} ({t['count']})" if t['name'] != 'Unknown' else f"*{t['name']} ({t['count']})",
} for t in labelTags['tags'] if t['category'] == 'domain']
df_domains = pd.DataFrame(domains)
df_domains = df_domains.sort_values(['count'], ascending=[False])
df_domains
domains_chart = alt.Chart(df_domains).mark_bar(color='gray').encode(
x=alt.X('count:Q', title='Number of Visualizations'),
y=alt.Y('namecount:N', sort=list(df_domains['name']), title=None)
).properties(title='Data Domains')
domains_chart
# save(domains_chart, f"{charts_dir/'domains.svg'}")
# await svg2png(f"{charts_dir/'domains.svg'}", f"{charts_dir/'domains.png'}")
medium = [{
**_.pick(t, ['tag', 'name', 'count']),
'namecount': f"{t['name']} ({t['count']})",
} for t in labelTags['tags'] if t['category'] == 'media']
df_medium = pd.DataFrame(medium)
df_medium = df_medium.sort_values(['count'], ascending=[False])
df_medium
medium_chart = alt.Chart(df_medium).mark_bar(color='gray').encode(
x=alt.X('count:Q', title='Number of Visualizations'),
y=alt.Y('namecount:N', sort='-x', title=None)
).properties(title='Visualization Medium')
medium_chart
# save(medium_chart, f"{charts_dir/'medium.svg'}")
# await svg2png(f"{charts_dir/'medium.svg'}", f"{charts_dir/'medium.png'}")
effects = [{
**_.pick(t, ['tag', 'name', 'count']),
'namecount': f"{t['name']} ({t['count']})",
} for t in labelTags['tags'] if t['category'] == 'effect']
df_effects = pd.DataFrame(effects)
df_effects = df_effects.sort_values(['count'], ascending=[False])
df_effects
effects_chart = alt.Chart(df_effects).mark_bar(color='gray').encode(
x=alt.X('count:Q', title='Number of Visualizations'),
y=alt.Y('namecount:N', sort='-x', title=None)
).properties(title='Perceived Effects')
effects_chart
# save(effects_chart, f"{charts_dir/'effects.svg'}")
# await svg2png(f"{charts_dir/'effects.svg'}", f"{charts_dir/'effects.png'}")
###Output
_____no_output_____
###Markdown
Co-occurrence Charts and Issues
###Code
records = []
for issuetype in issuetypes:
for charttype in charttypes:
records.append({
'Issue Types': issuetype['namecount'],
'issuetype_tag': issuetype['tag'],
'Chart Types': charttype['namecount'],
'charttype_tag': charttype['tag'],
'count': len([image for image in allImages if issuetype['tag'] in image['labels'] and charttype['tag'] in image['labels']])
})
df_charttypes_issuetypes = pd.DataFrame(records)
base = alt.Chart(df_charttypes_issuetypes[df_charttypes_issuetypes['count'] > 0]).encode(
alt.X('Issue Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_issuetypes['namecount'])),
alt.Y('Chart Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_charttypes['namecount']))
)
# Configure heatmap
heatmap = base.mark_rect().encode(
color=alt.Color('count:Q',
scale=alt.Scale(scheme='blues', domain=[0, 100]),
legend=alt.Legend(direction='vertical', title='# of Vis')
)
)
# Configure text
text = base.mark_text(baseline='middle').encode(
text='count:Q',
color=alt.value('white')
)
# Draw the chart
charttypes_issuetypes_chart = heatmap + text
charttypes_issuetypes_chart
# save(charttypes_issuetypes_chart, f"{charts_dir/'charttypes_issuetypes.svg'}")
# await svg2png(f"{charts_dir/'charttypes_issuetypes.svg'}", f"{charts_dir/'charttypes_issuetypes.png'}")
###Output
_____no_output_____
###Markdown
Data and Issues
###Code
records = []
for issuetype in issuetypes:
for datatype in datatypes:
records.append({
'Issue Types': issuetype['namecount'],
'issuetype_tag': issuetype['tag'],
'Data Types': datatype['namecount'],
'datatype_tag': datatype['tag'],
'count': len([image for image in allImages if issuetype['tag'] in image['labels'] and datatype['tag'] in image['labels']])
})
df_datatypes_issuetypes = pd.DataFrame(records)
base = alt.Chart(df_datatypes_issuetypes[df_datatypes_issuetypes['count'] > 0]).encode(
alt.X('Issue Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_issuetypes['namecount'])),
alt.Y('Data Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_datatypes['namecount']))
)
# Configure heatmap
heatmap = base.mark_rect().encode(
color=alt.Color('count:Q',
scale=alt.Scale(scheme='blues', domain=[0, 100]),
legend=alt.Legend(direction='vertical', title='# of Vis')
)
)
# Configure text
text = base.mark_text(baseline='middle').encode(
text='count:Q',
color=alt.value('white')
)
# Draw the chart
datatypes_issuetypes_chart = (heatmap + text).properties(width=1280)
datatypes_issuetypes_chart
# save(datatypes_issuetypes_chart, f"{charts_dir/'datatypes_issuetypes.svg'}")
# await svg2png(f"{charts_dir/'datatypes_issuetypes.svg'}", f"{charts_dir/'datatypes_issuetypes.png'}")
###Output
_____no_output_____
###Markdown
Domains and Issues
###Code
records = []
for issuetype in issuetypes:
for domain in domains:
records.append({
'Issue Types': issuetype['namecount'],
'issuetype_tag': issuetype['tag'],
'Domains': domain['namecount'],
'domain_tag': domain['tag'],
'count': len([image for image in allImages if issuetype['tag'] in image['labels'] and domain['tag'] in image['labels']])
})
df_domains_issuetypes = pd.DataFrame(records)
base = alt.Chart(df_domains_issuetypes[df_domains_issuetypes['count'] > 0]).encode(
alt.X('Issue Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_issuetypes['namecount'])),
alt.Y('Domains:N', scale=alt.Scale(paddingInner=0), sort=list(df_domains['namecount']))
)
# Configure heatmap
heatmap = base.mark_rect().encode(
color=alt.Color('count:Q',
scale=alt.Scale(scheme='blues', domain=[0, 100]),
legend=alt.Legend(direction='vertical', title='# of Vis')
)
)
# Configure text
text = base.mark_text(baseline='middle').encode(
text='count:Q',
color=alt.value('white')
)
# Draw the chart
domains_issuetypes_chart = (heatmap + text)
domains_issuetypes_chart
# save(domains_issuetypes_chart, f"{charts_dir/'domains_issuetypes.svg'}")
# await svg2png(f"{charts_dir/'domains_issuetypes.svg'}", f"{charts_dir/'domains_issuetypes.png'}")
###Output
_____no_output_____
###Markdown
Domains and Charts
###Code
records = []
for charttype in charttypes:
for domain in domains:
records.append({
'Chart Types': charttype['namecount'],
'charttype_tag': charttype['tag'],
'Domains': domain['namecount'],
'domain_tag': domain['tag'],
'count': len([image for image in allImages if charttype['tag'] in image['labels'] and domain['tag'] in image['labels']])
})
df_domains_charttypes = pd.DataFrame(records)
base = alt.Chart(df_domains_charttypes[df_domains_charttypes['count'] > 0]).encode(
alt.X('Chart Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_charttypes['namecount'])),
alt.Y('Domains:N', scale=alt.Scale(paddingInner=0), sort=list(df_domains['namecount']))
)
# Configure heatmap
heatmap = base.mark_rect().encode(
color=alt.Color('count:Q',
scale=alt.Scale(scheme='oranges', domain=[0, 100]),
legend=alt.Legend(direction='vertical', title='# of Vis')
)
)
# Configure text
text = base.mark_text(baseline='middle').encode(
text='count:Q',
color=alt.value('white')
)
# Draw the chart
domains_charttypes_chart = (heatmap + text)
domains_charttypes_chart
# save(domains_charttypes_chart, f"{charts_dir/'domains_charttypes.svg'}")
# await svg2png(f"{charts_dir/'domains_charttypes.svg'}", f"{charts_dir/'domains_charttypes.png'}")
###Output
_____no_output_____
###Markdown
Effects and Issues
###Code
records = []
for issuetype in issuetypes:
for effect in effects:
records.append({
'Issue Types': issuetype['namecount'],
'issuetype_tag': issuetype['tag'],
'Effects': effect['namecount'],
'effect_tag': effect['tag'],
'count': len([image for image in allImages if issuetype['tag'] in image['labels'] and effect['tag'] in image['labels']])
})
df_effects_issuetypes = pd.DataFrame(records)
base = alt.Chart(df_effects_issuetypes[df_effects_issuetypes['count'] > 0]).encode(
alt.X('Issue Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_issuetypes['namecount'])),
alt.Y('Effects:N', scale=alt.Scale(paddingInner=0), sort=list(df_effects['namecount']))
)
# Configure heatmap
heatmap = base.mark_rect().encode(
color=alt.Color('count:Q',
scale=alt.Scale(scheme='blues', domain=[0, 100]),
legend=alt.Legend(direction='vertical', title='# of Vis')
)
)
# Configure text
text = base.mark_text(baseline='middle').encode(
text='count:Q',
color=alt.value('white')
)
# Draw the chart
effects_issuetypes_chart = (heatmap + text)
effects_issuetypes_chart
# save(effects_issuetypes_chart, f"{charts_dir/'effects_issuetypes.svg'}")
# await svg2png(f"{charts_dir/'effects_issuetypes.svg'}", f"{charts_dir/'effects_issuetypes.png'}")
###Output
_____no_output_____
###Markdown
Effects and Charts
###Code
records = []
for charttype in charttypes:
for effect in effects:
records.append({
'Chart Types': charttype['namecount'],
'charttype_tag': charttype['tag'],
'Effects': effect['namecount'],
'effect_tag': effect['tag'],
'count': len([image for image in allImages if charttype['tag'] in image['labels'] and effect['tag'] in image['labels']])
})
df_effects_charttypes = pd.DataFrame(records)
base = alt.Chart(df_effects_charttypes[df_effects_charttypes['count'] > 0]).encode(
alt.X('Chart Types:N', scale=alt.Scale(paddingInner=0), sort=list(df_charttypes['namecount'])),
alt.Y('Effects:N', scale=alt.Scale(paddingInner=0), sort=list(df_effects['namecount']))
)
# Configure heatmap
heatmap = base.mark_rect().encode(
color=alt.Color('count:Q',
scale=alt.Scale(scheme='oranges', domain=[0, 100]),
legend=alt.Legend(direction='vertical', title='# of Vis')
)
)
# Configure text
text = base.mark_text(baseline='middle').encode(
text='count:Q',
color=alt.value('white')
)
# Draw the chart
effects_charttypes_chart = (heatmap + text)
effects_charttypes_chart
# save(effects_charttypes_chart, f"{charts_dir/'effects_charttypes.svg'}")
# await svg2png(f"{charts_dir/'effects_charttypes.svg'}", f"{charts_dir/'effects_charttypes.png'}")
###Output
_____no_output_____
###Markdown
Consumer Price Index analysis

By Ben Welsh

A rudimentary analysis of the Consumer Price Index published by the U.S. Bureau of Labor Statistics. It was developed to verify the accuracy of the [cpi](https://github.com/datadesk/cpi) open-source Python wrapper that eases access to the official government data.

Import Python tools
###Code
import os
import json
import warnings
import pandas as pd
import altair as alt
from datetime import date, datetime, timedelta
import altair_latimes as lat
alt.themes.register('latimes', lat.theme)
alt.themes.enable('latimes')
warnings.simplefilter("ignore")
###Output
_____no_output_____
###Markdown
Import the development version of this library
###Code
import os
import sys
this_dir = os.path.dirname(os.getcwd())
sys.path.insert(0, this_dir)
import cpi
###Output
_____no_output_____
###Markdown
Top-level numbers for the latest month
###Code
def get_last13(**kwargs):
df = cpi.series.get(**kwargs).to_dataframe()
# Filter down to monthly values
df = df[df.period_type == 'monthly']
    # Cut down to the last 14 months (so 13 month-over-month changes can be computed)
df = df.sort_values("date").tail(14)
# Return it
return df
def analyze_last13(df):
# Calculate the monthly percentage change
df['pct_change'] = (df.value.pct_change()*100)
    # Round the monthly percentage change to one decimal place
df['pct_change_rounded'] = df['pct_change'].round(1)
# Get latest months
latest_month, latest_change = df.sort_values("date", ascending=False)[['date', 'pct_change_rounded']].to_records(index=False)[0]
previous_month, previous_change = df.sort_values("date", ascending=False)[['date', 'pct_change_rounded']].to_records(index=False)[1]
# Pass it back
return dict(
latest_month=latest_month,
latest_change=latest_change,
previous_month=previous_month,
previous_change=previous_change,
)
###Output
_____no_output_____
###Markdown
Query the seasonally-adjusted CPI-U, which is the variation used by the BLS in its release.
###Code
adjusted_cpiu_last13 = get_last13(seasonally_adjusted=True)
adjusted_cpi_analysis = analyze_last13(adjusted_cpiu_last13)
adjusted_cpi_analysis
adjusted_food_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="Food"))
adjusted_food_analysis
adjusted_energy_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="Energy"))
adjusted_energy_analysis
adjusted_all_less_food_and_energy_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="All items less food and energy"))
adjusted_all_less_food_and_energy_analysis
base = alt.Chart(
adjusted_cpiu_last13,
title="One-month percent change in CPI for All Urban Consumers (CPI-U), seasonally adjusted"
).properties(width=700)
bars = base.mark_bar().encode(
x=alt.X(
"date:O",
timeUnit="utcyearmonth",
axis=alt.Axis(title=None, labelAngle=0),
),
y=alt.Y(
"pct_change_rounded:Q",
axis=alt.Axis(title=None),
scale=alt.Scale(domain=[
adjusted_cpiu_last13['pct_change'].min()-0.1,
adjusted_cpiu_last13['pct_change'].max()+0.05
])
)
)
text = base.encode(
x=alt.X("date:O", timeUnit="utcyearmonth"),
y="pct_change_rounded:Q",
text='pct_change_rounded'
)
textAbove = text.transform_filter(alt.datum.pct_change > 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=-10
)
textBelow = text.transform_filter(alt.datum.pct_change < 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=12
)
bars + textAbove + textBelow
###Output
_____no_output_____
###Markdown
Get the year over year change
###Code
unadjusted_cpiu = cpi.series.get(seasonally_adjusted=False).to_dataframe()
unadjusted_cpiu_monthly = unadjusted_cpiu[unadjusted_cpiu.period_type == 'monthly'].sort_values("date", ascending=False)
unadjusted_cpiu_monthly.head(13)[[
'date',
'value'
]]
latest_unadjusted, one_year_ago_unadjusted = pd.concat([
    unadjusted_cpiu_monthly.head(1),
    unadjusted_cpiu_monthly.head(13).tail(1),
]).value.tolist()
latest_unadjusted, one_year_ago_unadjusted
yoy_change = round(((latest_unadjusted-one_year_ago_unadjusted)/one_year_ago_unadjusted)*100, 1)
yoy_change
with open("./latest.json", "w") as fp:
fp.write(json.dumps(dict(
all=adjusted_cpi_analysis,
food=adjusted_food_analysis,
energy=adjusted_energy_analysis,
less_food_and_energy=adjusted_all_less_food_and_energy_analysis,
yoy_change=yoy_change,
)))
adjusted_cpiu_last13[~pd.isnull(adjusted_cpiu_last13.pct_change_rounded)][[
'date',
'pct_change',
'pct_change_rounded'
]].to_csv("./cpi-mom.csv", index=False)
###Output
_____no_output_____
###Markdown
Match category analysis published by the BLS

In an October 2018 [post](https://www.bls.gov/opub/ted/2018/consumer-prices-up-2-point-3-percent-over-year-ended-september-2018.htm) the BLS published the following chart showing the month to month percentage change in the Consumer Price Index for All Urban Consumers across a select group of categories. We will replicate it below.

Query the data series charted by the BLS
###Code
all_items = cpi.series.get(seasonally_adjusted=False).to_dataframe()
energy = cpi.series.get(items="Energy", seasonally_adjusted=False).to_dataframe()
food = cpi.series.get(items="Food", seasonally_adjusted=False).to_dataframe()
###Output
_____no_output_____
###Markdown
Write a function to prepare each series for presentation
###Code
def prep_yoy(df):
# Trim down to monthly values
df = df[df.period_type == 'monthly']
# Calculate percentage change year to year
df['pct_change'] = df.value.pct_change(12)
    # Sort by date and return (callers trim to the window they need)
return df.sort_values("date")
all_items_prepped = prep_yoy(all_items)
energy_prepped = prep_yoy(energy)
food_prepped = prep_yoy(food)
three_cats = pd.concat([
all_items_prepped.tail(12*10),
energy_prepped.tail(12*10),
food_prepped.tail(12*10)
])
base = alt.Chart(
three_cats,
title="12-month percentage change, Consumer Price Index, selected categories"
).encode(
x=alt.X(
"date:T",
timeUnit="yearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
# A truly gnarly hack from https://github.com/altair-viz/altair/issues/187
values=list(pd.to_datetime([
'2008-11-01',
'2010-11-01',
'2012-11-01',
'2014-11-01',
'2016-11-01',
'2018-11-01'
]).astype(int) / 10 ** 6)
),
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title=None, format='%'),
scale=alt.Scale(domain=[-0.4, 0.3])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
# scale=alt.Scale(range=["#423a51", "#449cb0", "#d09972"])
)
)
all_items = base.transform_filter(
alt.datum.series_items_name == 'All items'
).mark_line(strokeDash=[3, 2])
other_items = base.transform_filter(
alt.datum.series_items_name != 'All items'
).mark_line()
(all_items + other_items).properties(width=600)
three_cats.to_csv("./three-categories-yoy.csv", index=False)
###Output
_____no_output_____
###Markdown
A similar chart with a shorter timeframe

Here's another one.
###Code
all_less_energy_and_food = cpi.series.get(items="All items less food and energy", seasonally_adjusted=False).to_dataframe()
all_less_energy_and_food_prepped = prep_yoy(all_less_energy_and_food)
two_cats = pd.concat([
all_items_prepped.tail(13),
all_less_energy_and_food_prepped.tail(13),
])
base = alt.Chart(
two_cats,
title="12-month percent change in CPI for All Urban Consumers (CPI-U), not seasonally adjusted"
).encode(
x=alt.X(
"date:T",
timeUnit="utcyearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
format="%b"
)
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title="Percent change", format='%'),
scale=alt.Scale(domain=[0.012, 0.03])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
# scale=alt.Scale(range=["#336EFF", "#B03A2E",])
)
)
line = base.mark_line(strokeWidth=0.85)
exes = base.transform_filter(alt.datum.series_items_name == 'All items').mark_point(shape="triangle-down", size=25)
points = base.transform_filter(alt.datum.series_items_name == 'All items less food and energy').mark_point(size=25, fill="#B03A2E")
(line + exes + points).properties(width=600, height=225)
two_cats.to_csv("./two-categories-yoy.csv", index=False)
###Output
_____no_output_____
###Markdown
Table 2

Unsupervised anomaly detection results (F1 score, precision, and recall) per pipeline on each dataset.

**data source**: `data/results.csv`
###Code
make_table_2()
###Output
_____no_output_____
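###Markdown
`make_table_2` is defined elsewhere in this repository. As a rough, hypothetical sketch of the kind of aggregation it performs, the cell below pivots `data/results.csv` into a pipeline-by-dataset table of scores; the column names (`pipeline`, `dataset`, `f1`) are assumptions for illustration, not the file's actual schema.
###Code
import pandas as pd

# Hypothetical sketch: mean F1 score per pipeline (rows) and dataset (columns)
results = pd.read_csv('data/results.csv')
results.pivot_table(index='pipeline', columns='dataset', values='f1', aggfunc='mean')
###Output
_____no_output_____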
###Markdown
Figure 7.a

Pipeline computational performance.

**data source**: `data/comp_performance.csv`
###Code
make_figure_7a()
###Output
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
###Markdown
Figure 7.b

Difference in runtime between stand-alone primitives and end-to-end pipelines.

**data source**: `data/delta.csv`
###Code
make_figure_7b()
###Output
_____no_output_____
###Markdown
Figure 7.c

F1 scores before and after tuning pipelines on the NAB dataset using a ground-truth set of anomalies.

**data source**: `data/untuned_results.csv` + `data/tuned_results.csv`
###Code
make_figure_7c()
###Output
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
###Markdown
Figure 8.a

Semi-supervised pipeline performance on NAB through simulating annotations from different starting points.
###Code
make_figure_8a()
###Output
_____no_output_____
###Markdown
Get the top 5 settings by mean cost, and the top 5 by mean cost among runs with mean time below 500
###Code
top_mean_cost = stats.sort_values(by=["cost_mean"], ascending=True)[0:5]
top_mean_cost_time = stats[stats.time_mean < 500].sort_values(by=["cost_mean"], ascending=True)[0:5]
top_mean_cost_time
def transform(df):
response = {}
a = df.setting.split()
response["alg"] = int(float(a[0]))
response["T0"] = "frac{S0}{2}" if float(a[1]) % 10 == 0 else "frac{S0}{10}"
print(round(df.cost_mean, 3), round(df.time_mean, 3))
response["Tf"] = a[2]
response["it"] = int(float(a[3]))
response["beta"] = a[4]
response["cost_mean"] = round(df.cost_mean, 3)
#response["cost_sd"] = round(df.cost_sd, 3)
response["time_mean"] = round(df.time_mean, 3)
#response["time_sd"] = round(df.time_sd, 3)
return pd.Series(response)
top_mean_cost_time[0:5].apply(transform, axis=1)#.to_csv("top_costs_stndr10.csv")
# 2.0 14867566.5 1000.0 30.0 0.99
top_mean_cost.setting.values
top_costs = solutions[solutions.setting.isin(top_mean_cost.setting.values)]
top_costs_time = solutions[solutions.setting.isin(top_mean_cost_time.setting.values)]
fig, ax = plt.subplots(figsize=(10, 5))
ax = sns.boxplot(x="cost", y="setting", data=top_costs_time)
ax = sns.swarmplot(x="cost", y="setting", data=top_costs_time, color=".25")
ax.set_xlabel("Costo")
ax.set_ylabel("Configuración")
fig, ax = plt.subplots(figsize=(10, 5))
ax = sns.boxplot(x="total_time", y="setting", data=top_costs_time)
ax = sns.swarmplot(x="total_time", y="setting", data=top_costs_time, color=".25")
ax.set_xlabel("Tiempo [s]")
ax.set_ylabel("Configuración")
fig, ax = plt.subplots(figsize=(15, 10))
ax = sns.boxplot(x="cost", y="setting", data=top_costs_time)
ax = sns.swarmplot(x="cost", y="setting", data=top_costs_time, color=".25")
top_mean_cost_time
stats.to_csv("juanito.csv")
fig, ax = plt.subplots(figsize=(10, 6))
g = sns.scatterplot(x="time_mean",
y="cost_mean",
hue="Vecindad",
data=stats,
legend="full")
g.set_ylabel("Costo promedio")
g.set_xlabel("Tiempo promedio [s]")
mask = (stats.time_mean > 900) & (stats.time_mean < 1000)
stats[mask].sort_values(by=["time_mean"], ascending=True)
###Output
_____no_output_____
###Markdown
| algorithm | t0 | tf | it | lambda |  |
|-----------|----|----|----|--------|---|
| 2 | 14867566.5 | 1000 | 30 | 0.99 | best |
| 1 | 14867566.5 | 100000 | 15 | 0.79 | worst |
| 1 | 49558555 | 1000 | 30 | 0.94 | mid |
| 2 | 14867566.5 | 10000 | 15 | 0.99 | mid |
###Code
10**3
fig, ax = plt.subplots(figsize=(15, 10))
ax = sns.scatterplot(x="time_mean", y="cost_mean", data=stats)
import os
results = []
for i, el in enumerate(os.listdir("../output/")):
if instance in el and "costs" in el:
print(el)
aux = pd.read_csv(f"../output/{el}", index_col=False)
aux["setting"] = f"{el[14:-4]}"
del aux["Unnamed: 0"]
results.append(aux)
results
some_solutions = pd.concat(results)
some_solutions
fig, ax = plt.subplots(figsize=(10, 6))
ax = sns.scatterplot(x="x",
y="costs",
hue="setting",
data=some_solutions,
edgecolor=None,
s = 1)
ax.set_ylabel("Costo")
ax.set_xlabel("Iteraciones")
###Output
_____no_output_____
###Markdown
Notebook used to analyze, on a small data sample, how the created tags are distributed, what the numerical data look like, and how the variables correlate with each other. (This was done with a single company to lighten the process. We then assume that, once we have the rest of the data for the other companies, it will behave similarly even if it is not collected in the same way; that is, the best model will be the same and the variables will correlate in the same or a very similar way.) This notebook also contains all the work of checking which data model is the most suitable for the prediction problem; the data model finally used will be (Robust Scaler + Random Forest without outliers).
###Code
import sys
import warnings
from datetime import datetime
from dateutil.relativedelta import relativedelta
import pandas as pd
import numpy as np
import numba as nb
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.preprocessing import RobustScaler, StandardScaler
from sklearn.ensemble import RandomForestClassifier
sns.set()
warnings.filterwarnings("ignore")
app_path = '/Users/esanc147/Documents/business/bsm03/web_app'
if app_path in sys.path:
from tools.tags import create_tags
else:
sys.path.append('/Users/esanc147/Documents/business/bsm03/web_app')
from tools.tags import create_tags
COLUMNS_TECH = ['symbol', 'date', 'MACD_Signal', 'MACD_Hist', 'MACD', 'SlowK', 'SlowD',
'Chaikin A/D', 'OBV', 'EMA21', 'SMA21', 'WMA21', 'RSI21', 'ADX21',
'CCI21', 'Aroon Up21', 'Aroon Down21', 'Real Lower Band21',
'Real Upper Band21', 'Real Middle Band21', 'EMA28', 'SMA28', 'WMA28',
'RSI28', 'ADX28', 'CCI28', 'Aroon Down28', 'Aroon Up28',
'Real Lower Band28', 'Real Upper Band28', 'Real Middle Band28', 'EMA50',
'SMA50', 'WMA50', 'RSI50', 'ADX50', 'CCI50', 'Aroon Up50',
'Aroon Down50', 'Real Middle Band50', 'Real Lower Band50',
'Real Upper Band50']
COLUMNS = ['symbol', 'date', 'close', 'volume', 'open', 'high', 'low']
U_COLUMNS = ['close', 'volume', 'MACD_Signal', 'MACD_Hist', 'MACD', 'SlowK', 'SlowD',
'Chaikin A/D', 'OBV', 'RSI21', 'ADX21', 'CCI21', 'Aroon Up21', 'Aroon Down21',
'RSI28', 'ADX28', 'CCI28', 'Aroon Down28', 'Aroon Up28', 'Real Lower Band28',
'Real Upper Band28', 'Real Middle Band28', 'SMA50', 'RSI50', 'ADX50', 'CCI50',
'Aroon Up50', 'Aroon Down50']
COLS_WO_FIN = ['close', 'volume', 'MACD_Signal', 'MACD_Hist',
'MACD', 'SlowK', 'SlowD', 'Chaikin A/D', 'OBV', 'EMA21', 'SMA21',
'WMA21', 'RSI21', 'ADX21', 'CCI21', 'Aroon Up21', 'Aroon Down21',
'Real Lower Band21', 'Real Upper Band21', 'Real Middle Band21', 'EMA28',
'SMA28', 'WMA28', 'RSI28', 'ADX28', 'CCI28', 'Aroon Down28',
'Aroon Up28', 'Real Lower Band28', 'Real Upper Band28',
'Real Middle Band28', 'EMA50', 'SMA50', 'WMA50', 'RSI50', 'ADX50',
'CCI50', 'Aroon Up50', 'Aroon Down50', 'Real Middle Band50',
'Real Lower Band50', 'Real Upper Band50']
SYMBOLS = ['AAPL', 'MSFT', 'AMZN']
FULL_PATH = "/Users/esanc147/Documents/business/bsm03/web_app/data"
symbol = SYMBOLS[2]
path_close = f"{FULL_PATH}/close/{symbol}.csv"
df_close = pd.read_csv(path_close, names=COLUMNS)
df_close['date'] = pd.to_datetime(df_close['date'])
df_close['volume'] = df_close['volume'].astype(float)
path_tech = f"{FULL_PATH}/tech/{symbol}.csv"
df_tech = pd.read_csv(path_tech, names=COLUMNS_TECH)
df_tech['date'] = pd.to_datetime(df_tech['date'])
list_df_tagged = []
for period in [7, 14, 21, 28]:
df_aux = create_tags(df_close, period)
df_aux[f"pct_change_{period}"] = df_aux[f"pct_change_{period}"].astype(float)
df_aux[f"pct_change_{period}"] = df_aux[f"pct_change_{period}"].astype(float)
list_df_tagged.append(df_aux)
df_tagged = pd.concat(list_df_tagged, axis=1)
df_tagged.dropna(inplace=True)
df_close = df_close.set_index(['symbol', 'date'])
df_tech = df_tech.set_index(['symbol', 'date'])
dataframe = pd.concat([df_close, df_tech, df_tagged], join='inner', axis=1)
###Output
_____no_output_____
###Markdown
Size of the dataframe
###Code
dataframe.shape
###Output
_____no_output_____
###Markdown
Check whether any column has null values
###Code
dataframe.isnull().any().any()
###Output
_____no_output_____
###Markdown
Date range covered by the data
###Code
dataframe.index.min()
dataframe.index.max()
###Output
_____no_output_____
###Markdown
Data types present in the dataframe
###Code
dataframe.dtypes.value_counts()
###Output
_____no_output_____
###Markdown
Distribution of the tagging data
###Code
dataframe.select_dtypes(object)['tag_7'].value_counts().plot.bar();
norm = dataframe.select_dtypes(object)['tag_7'].value_counts(normalize=True) * 100
print(f"Porcentaje de subida: {round(norm[norm.index.str.endswith('bull')].sum(), 2)}% | Porcentaje de bajada: {round(norm[norm.index.str.endswith('bear')].sum(), 2)}%")
norm
dataframe.select_dtypes(object)['tag_14'].value_counts().plot.bar();
norm = dataframe.select_dtypes(object)['tag_14'].value_counts(normalize=True) * 100
print(f"Porcentaje de subida: {round(norm[norm.index.str.endswith('bull')].sum(), 2)}% | Porcentaje de bajada: {round(norm[norm.index.str.endswith('bear')].sum(), 2)}%")
norm
dataframe.select_dtypes(object)['tag_21'].value_counts().plot.bar();
norm = dataframe.select_dtypes(object)['tag_21'].value_counts(normalize=True) * 100
print(f"Porcentaje de subida: {round(norm[norm.index.str.endswith('bull')].sum(), 2)}% | Porcentaje de bajada: {round(norm[norm.index.str.endswith('bear')].sum(), 2)}%")
norm
dataframe.select_dtypes(object)['tag_28'].value_counts().plot.bar();
norm = dataframe.select_dtypes(object)['tag_28'].value_counts(normalize=True) * 100
print(f"Porcentaje de subida: {round(norm[norm.index.str.endswith('bull')].sum(), 2)}% | Porcentaje de bajada: {round(norm[norm.index.str.endswith('bear')].sum(), 2)}%")
norm
###Output
Porcentaje de subida: 58.05% | Porcentaje de bajada: 30.2%
###Markdown
Distribution of the numerical data
###Code
pd.set_option('display.max_columns', 30)
dataframe.select_dtypes(float)[U_COLUMNS].describe()
###Output
_____no_output_____
###Markdown
Heatmap of the correlation between variables
###Code
fig, ax = plt.subplots(figsize=(15,10))
sns.heatmap(dataframe[U_COLUMNS].corr(),
xticklabels=True, yticklabels=True,
cmap="icefire");
###Output
_____no_output_____
###Markdown
User-defined RobustScaler function
###Code
@nb.jit(nopython=False)
def ud_robust_scaler(data, columns, rel_delta):
    # User-defined rolling robust scaler: each row is scaled with the median and IQR
    # of the observations inside the trailing rel_delta window.
    row = 0
    arr_sd = data[:, 0:2]
    result = np.zeros((data.shape[0], len(columns[2:])))
    for date in data[:, 1]:
        # Filter the last 4 months up to the given date
        datetime_ = datetime(date.year, date.month, date.day)
        arr_aux = data[(data[:, 1] >= datetime_-rel_delta) \
                       & (data[:, 1] <= datetime_)][:, 2:]
        if arr_aux.shape[0] <= 70:
            break
        for col in range(len(columns[2:])):
            p25 = np.percentile(arr_aux[:, col], 25)
            p75 = np.percentile(arr_aux[:, col], 75)
            p50 = np.percentile(arr_aux[:, col], 50)
            iqr = p75 - p25
            # Robust scaling: subtract the median and divide by the interquartile range
            result[row, col] = (arr_aux[0, col] - p50) / iqr
        row += 1
    return result
###Output
_____no_output_____
###Markdown
KNN
###Code
dataframe_reset = dataframe.reset_index()
dataframe_train = dataframe_reset[dataframe_reset['date'].dt.year <= 2017] \
.sort_values(by='date', ascending=False) \
.set_index(['symbol', 'date'])
dataframe_test = dataframe_reset[(dataframe_reset['date'].dt.year > 2017)
& (dataframe_reset['date'].dt.year < 2019)] \
.sort_values(by='date', ascending=False) \
.set_index(['symbol', 'date'])
###Output
_____no_output_____
###Markdown
KNN - All data | UDF | With outliers
###Code
knn = KNeighborsClassifier(weights='distance')
df_aux = dataframe_train.reset_index()
cols = ['symbol', 'date']
cols.extend(list(df_aux.select_dtypes(float).columns))
df_aux = df_aux[cols]
X_train = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_train = dataframe_train['tag_28'].values
df_aux = dataframe_test.reset_index()
cols = ['symbol', 'date']
cols.extend(list(df_aux.select_dtypes(float).columns))
df_aux = df_aux[cols]
X_test = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_test = dataframe_test['tag_28'].values
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.08 0.39 0.13 18
bull 0.54 0.38 0.44 98
keep 0.30 0.15 0.20 39
outlier bear 0.14 1.00 0.25 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.45 0.27 0.33 49
strong bull 0.34 0.27 0.30 44
accuracy 0.30 251
macro avg 0.26 0.35 0.24 251
weighted avg 0.41 0.30 0.33 251
###Markdown
KNN - All data | No scaling | With outliers
###Code
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train.select_dtypes(float).values
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float).values
y_test = dataframe_test['tag_28'].values
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.00 0.00 0.00 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.78 0.59 0.67 49
strong bull 0.21 1.00 0.34 44
accuracy 0.29 251
macro avg 0.14 0.23 0.15 251
weighted avg 0.19 0.29 0.19 251
###Markdown
KNN - COLS_WO_FIN | No scaling | With outliers
###Code
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train.select_dtypes(float)[COLS_WO_FIN].values
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[COLS_WO_FIN].values
y_test = dataframe_test['tag_28'].values
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.00 0.00 0.00 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.78 0.59 0.67 49
strong bull 0.21 1.00 0.34 44
accuracy 0.29 251
macro avg 0.14 0.23 0.15 251
weighted avg 0.19 0.29 0.19 251
###Markdown
KNN - U_COLUMNS | No scaling | With outliers
###Code
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.00 0.00 0.00 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.78 0.59 0.67 49
strong bull 0.21 1.00 0.34 44
accuracy 0.29 251
macro avg 0.14 0.23 0.15 251
weighted avg 0.19 0.29 0.19 251
###Markdown
KNN - U_COLUMNS | UDF | With outliers
###Code
knn = KNeighborsClassifier(weights='distance')
df_aux = dataframe_train.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[cols]
X_train = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_train = dataframe_train['tag_28'].values
df_aux = dataframe_test.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[cols]
X_test = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_test = dataframe_test['tag_28'].values
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.10 0.11 0.11 18
bull 0.38 0.26 0.30 98
keep 0.11 0.05 0.07 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.21 0.14 0.17 49
strong bull 0.26 0.25 0.26 44
accuracy 0.19 251
macro avg 0.15 0.12 0.13 251
weighted avg 0.26 0.19 0.22 251
###Markdown
KNN - All data | No scaling | Without outliers
###Code
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train.select_dtypes(float).values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_test = dataframe_test.select_dtypes(float).values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
# X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
# .select_dtypes(float).values
# y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
# X_test = dataframe_test.select_dtypes(float).values
# y_test = dataframe_test['tag_28'].values
# y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
# y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.00 0.00 0.00 98
keep 0.00 0.00 0.00 39
strong bear 0.78 0.58 0.67 50
strong bull 0.21 1.00 0.35 46
accuracy 0.30 251
macro avg 0.20 0.32 0.20 251
weighted avg 0.20 0.30 0.20 251
###Markdown
KNN - COLS_WO_FIN | No scaling | Without outliers
###Code
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train.select_dtypes(float)[COLS_WO_FIN].values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_test = dataframe_test.select_dtypes(float)[COLS_WO_FIN].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
# X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
# .select_dtypes(float)[COLS_WO_FIN].values
# y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
# X_test = dataframe_test.select_dtypes(float)[COLS_WO_FIN].values
# y_test = dataframe_test['tag_28'].values
# y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
# y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.00 0.00 0.00 98
keep 0.00 0.00 0.00 39
strong bear 0.78 0.58 0.67 50
strong bull 0.21 1.00 0.35 46
accuracy 0.30 251
macro avg 0.20 0.32 0.20 251
weighted avg 0.20 0.30 0.20 251
###Markdown
KNN - U_COLUMNS | No scaling | Without outliers
###Code
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
knn = KNeighborsClassifier(weights='distance')
X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
knn.fit(X_train, y_train);
y_pred = knn.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.00 0.00 0.00 98
keep 0.00 0.00 0.00 39
strong bear 0.78 0.58 0.67 50
strong bull 0.21 1.00 0.35 46
accuracy 0.30 251
macro avg 0.20 0.32 0.20 251
weighted avg 0.20 0.30 0.20 251
###Markdown
KNN - U_COLUMNS | Robust Scaler | With outliers
###Code
knn = KNeighborsClassifier(weights='distance')
scl = RobustScaler()
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
X_scl_train = scl.fit_transform(X_train)
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
X_scl_test = scl.transform(X_test)
y_test = dataframe_test['tag_28'].values
knn.fit(X_scl_train, y_train);
y_pred = knn.predict(X_scl_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.40 0.62 0.49 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.00 0.00 0.00 49
strong bull 0.18 0.32 0.23 44
accuracy 0.30 251
macro avg 0.08 0.13 0.10 251
weighted avg 0.19 0.30 0.23 251
###Markdown
KNN - U_COLUMNS | Robust Scaler | Without outliers
###Code
knn = KNeighborsClassifier(weights='distance')
scl = RobustScaler()
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_scl_train = scl.fit_transform(X_train)
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
X_scl_test = scl.transform(X_test)
knn.fit(X_scl_train, y_train);
y_pred = knn.predict(X_scl_test)
print(classification_report(y_test, y_pred))
knn = KNeighborsClassifier(weights='distance')
scl = RobustScaler()
X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
X_scl_train = scl.fit_transform(X_train)
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
X_scl_test = scl.transform(X_test)
knn.fit(X_scl_train, y_train);
y_pred = knn.predict(X_scl_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.40 0.62 0.49 98
keep 0.00 0.00 0.00 39
strong bear 0.00 0.00 0.00 50
strong bull 0.22 0.35 0.27 46
accuracy 0.31 251
macro avg 0.12 0.19 0.15 251
weighted avg 0.20 0.31 0.24 251
###Markdown
Random Forest
###Code
dataframe_reset = dataframe.reset_index()
dataframe_train = dataframe_reset[dataframe_reset['date'].dt.year <= 2017].set_index(['symbol', 'date'])
dataframe_test = dataframe_reset[(dataframe_reset['date'].dt.year > 2017)
& (dataframe_reset['date'].dt.year < 2019)].set_index(['symbol', 'date'])
###Output
_____no_output_____
###Markdown
Random Forest - All data | No scaling | With outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train.select_dtypes(float).values
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float).values
y_test = dataframe_test['tag_28'].values
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.32 0.44 0.37 18
bull 0.64 0.81 0.71 98
keep 0.88 0.18 0.30 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.74 0.82 0.78 49
strong bull 0.46 0.43 0.45 44
accuracy 0.61 251
macro avg 0.43 0.38 0.37 251
weighted avg 0.64 0.61 0.58 251
###Markdown
Random Forest - COLS_WO_FIN | No scaling | With outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train.select_dtypes(float)[COLS_WO_FIN].values
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[COLS_WO_FIN].values
y_test = dataframe_test['tag_28'].values
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.31 0.52 0.39 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.08 0.02 0.03 49
strong bull 0.25 0.32 0.28 44
accuracy 0.26 251
macro avg 0.09 0.12 0.10 251
weighted avg 0.18 0.26 0.21 251
###Markdown
Random Forest - U_COLUMNS | No scaling | With outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.37 0.58 0.45 98
keep 0.20 0.03 0.05 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.31 0.10 0.15 49
strong bull 0.35 0.45 0.40 44
accuracy 0.33 251
macro avg 0.18 0.17 0.15 251
weighted avg 0.30 0.33 0.28 251
###Markdown
Random Forest - All data | No scaling | Without outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train.select_dtypes(float).values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_test = dataframe_test.select_dtypes(float).values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
.select_dtypes(float).values
y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
X_test = dataframe_test.select_dtypes(float).values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.35 0.50 0.41 18
bull 0.66 0.66 0.66 98
keep 1.00 0.23 0.38 39
strong bear 0.76 0.82 0.79 50
strong bull 0.47 0.65 0.55 46
accuracy 0.61 251
macro avg 0.65 0.57 0.56 251
weighted avg 0.68 0.61 0.60 251
###Markdown
Random Forest - COLS_WO_FIN | No scaling | Without outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train.select_dtypes(float)[COLS_WO_FIN].values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_test = dataframe_test.select_dtypes(float)[COLS_WO_FIN].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
# X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
# .select_dtypes(float)[COLS_WO_FIN].values
# y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
# X_test = dataframe_test.select_dtypes(float)[COLS_WO_FIN].values
# y_test = dataframe_test['tag_28'].values
# y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
# y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.32 0.51 0.40 98
keep 0.00 0.00 0.00 39
strong bear 0.00 0.00 0.00 50
strong bull 0.31 0.43 0.36 46
accuracy 0.28 251
macro avg 0.13 0.19 0.15 251
weighted avg 0.18 0.28 0.22 251
###Markdown
Random Forest - U_COLUMNS | No scaling | Without outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
clf = RandomForestClassifier(criterion='entropy')
X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.34 0.57 0.43 98
keep 1.00 0.03 0.05 39
strong bear 0.18 0.12 0.14 50
strong bull 0.50 0.43 0.47 46
accuracy 0.33 251
macro avg 0.40 0.23 0.22 251
weighted avg 0.42 0.33 0.29 251
###Markdown
Random Forest - U_COLUMNS | Robust Scaler | With outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
scl = RobustScaler()
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
X_scl_train = scl.fit_transform(X_train)
y_train = dataframe_train['tag_28'].values
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
X_scl_test = scl.transform(X_test)
y_test = dataframe_test['tag_28'].values
clf.fit(X_scl_train, y_train);
y_pred = clf.predict(X_scl_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.33 0.48 0.39 98
keep 0.50 0.05 0.09 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.11 0.06 0.08 49
strong bull 0.26 0.34 0.30 44
accuracy 0.27 251
macro avg 0.17 0.13 0.12 251
weighted avg 0.28 0.27 0.24 251
###Markdown
Random Forest - U_COLUMNS | Robust Scaler | Without outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
scl = RobustScaler()
X_train = dataframe_train.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
X_scl_train = scl.fit_transform(X_train)
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
X_scl_test = scl.transform(X_test)
clf.fit(X_scl_train, y_train);
y_pred = clf.predict(X_scl_test)
print(classification_report(y_test, y_pred))
clf = RandomForestClassifier(criterion='entropy')
scl = RobustScaler()
X_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])] \
.select_dtypes(float)[U_COLUMNS].values
y_train = dataframe_train[~dataframe_train['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
X_scl_train = scl.fit_transform(X_train)
X_test = dataframe_test.select_dtypes(float)[U_COLUMNS].values
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
X_scl_test = scl.transform(X_test)
clf.fit(X_scl_train, y_train);
y_pred = clf.predict(X_scl_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.36 0.50 0.42 98
keep 0.75 0.08 0.14 39
strong bear 0.23 0.14 0.18 50
strong bull 0.40 0.50 0.45 46
accuracy 0.33 251
macro avg 0.35 0.24 0.24 251
weighted avg 0.38 0.33 0.30 251
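###Markdown
The introduction singles out this Robust Scaler + Random Forest (without outliers) combination as the final data model. As a minimal sketch (not part of the original notebook), the same scaler-plus-classifier pair could also be packaged as a single scikit-learn Pipeline, so the scaler is always fit on the training data only; it assumes `X_train`, `y_train`, `X_test` and `y_test` were built as in the cell above.
###Code
from sklearn.pipeline import Pipeline

# Chain the scaler and the classifier into one estimator
model = Pipeline([
    ('scaler', RobustScaler()),
    ('clf', RandomForestClassifier(criterion='entropy')),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
###Output
_____no_output_____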
###Markdown
Random Forest - U_COLUMNS | UDF | With outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
df_aux = dataframe_train.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[cols]
X_train = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_train = dataframe_train['tag_28'].values
df_aux = dataframe_test.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[cols]
X_test = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_test = dataframe_test['tag_28'].values
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.39 0.98 0.55 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.00 0.00 0.00 49
strong bull 0.00 0.00 0.00 44
accuracy 0.38 251
macro avg 0.06 0.14 0.08 251
weighted avg 0.15 0.38 0.22 251
###Markdown
Random Forest - COLS_WO_FIN | UDF | With outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
df_aux = dataframe_train.reset_index()
cols = ['symbol', 'date']
cols.extend(COLS_WO_FIN)
df_aux = df_aux[cols]
X_train = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_train = dataframe_train['tag_28'].values
df_aux = dataframe_test.reset_index()
cols = ['symbol', 'date']
cols.extend(COLS_WO_FIN)
df_aux = df_aux[cols]
X_test = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_test = dataframe_test['tag_28'].values
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.39 0.98 0.55 98
keep 0.00 0.00 0.00 39
outlier bear 0.00 0.00 0.00 1
outlier bull 0.00 0.00 0.00 2
strong bear 0.00 0.00 0.00 49
strong bull 0.00 0.00 0.00 44
accuracy 0.38 251
macro avg 0.06 0.14 0.08 251
weighted avg 0.15 0.38 0.22 251
###Markdown
Random Forest - U_COLUMNS | UDF | Without outliers
###Code
clf = RandomForestClassifier(criterion='entropy')
df_aux = dataframe_train.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[cols]
X_train = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_train = dataframe_train['tag_28'].values
y_train = np.where(y_train == 'outlier bear', 'strong bear', y_train)
y_train = np.where(y_train == 'outlier bull', 'strong bull', y_train)
df_aux = dataframe_test.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[cols]
X_test = ud_robust_scaler(df_aux.values, df_aux.columns, relativedelta(months=4))
y_test = dataframe_test['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
clf = RandomForestClassifier(criterion='entropy')
df_aux = dataframe_train.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[~df_aux['tag_28'].isin(['outlier bear', 'outlier bull'])]
X_train = ud_robust_scaler(df_aux[cols].values, df_aux.columns, relativedelta(months=4))
y_train = df_aux[~df_aux['tag_28'].isin(['outlier bear', 'outlier bull'])]['tag_28'].values
df_aux = dataframe_test.reset_index()
cols = ['symbol', 'date']
cols.extend(U_COLUMNS)
df_aux = df_aux[~df_aux['tag_28'].isin(['outlier bear', 'outlier bull'])]
X_test = ud_robust_scaler(df_aux[cols].values, df_aux.columns, relativedelta(months=4))
y_test = df_aux['tag_28'].values
y_test = np.where(y_test == 'outlier bear', 'strong bear', y_test)
y_test = np.where(y_test == 'outlier bull', 'strong bull', y_test)
clf.fit(X_train, y_train);
y_pred = clf.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
bear 0.00 0.00 0.00 18
bull 0.39 0.98 0.55 98
keep 0.00 0.00 0.00 39
strong bear 0.00 0.00 0.00 50
strong bull 0.00 0.00 0.00 46
accuracy 0.38 251
macro avg 0.08 0.20 0.11 251
weighted avg 0.15 0.38 0.22 251
###Markdown
Project Prism
###Code
print("Hello, World!")
###Output
_____no_output_____
###Markdown
Analysis logs
###Code
import numpy as np
import matplotlib.pyplot as plt
import os
import json
path = '../resources'
datapath = os.path.join(path, 'data')
paramspath = os.path.join(path, 'params')
imagepath = os.path.join(path, 'plots')
file = '2_custom_reward_train'
def plot(path, filename):
filetext = os.path.join(os.path.join(path, 'data'), filename + '.txt')
fileplot = os.path.join(os.path.join(path, 'plots'), filename + '.png')
data = np.loadtxt(filetext)
fig = plt.figure(figsize=(12,6))
plt.plot(data[:,0], data[:,1])
plt.xlabel('episodio', fontsize=12)
plt.ylabel('número de pasos de tiempo', fontsize=12)
plt.savefig(fileplot, dpi=500)
def read_params(path, filename):
fileparams = os.path.join(os.path.join(path, 'params'), filename + '.json')
with open(fileparams) as json_file:
data = json.load(json_file)
return data
###Output
_____no_output_____
###Markdown
Train 1 custom
###Code
filename = '1_custom_reward_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
2 custom
###Code
filename = '2_custom_reward_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
0
###Code
filename = '0_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
1
###Code
filename = '1_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
2
###Code
filename = '2_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
3
###Code
filename = '3_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
4 before. Up to 2000 N iterations to update target network params. Change buffer size to 20000.
###Code
plot(path, '4_before_train')
read_params(path, '4_train')
###Output
_____no_output_____
###Markdown
4. Update to 500 iterations for updating the target network params. Load params for the target networks from the model file on disk.
###Code
filename = '4_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
5. Update to 500 iterations for updating the target network params. Load params for the target networks from the model file on disk.
###Code
filename = '5_train'
plot(path, filename)
read_params(path, filename)
filename = '6_train'
plot(path, filename)
read_params(path, filename)
filename = '7_train'
plot(path, filename)
read_params(path, filename)
filename = '8_train'
plot(path, filename)
read_params(path, filename)
filename = '9_train'
plot(path, filename)
read_params(path, filename)
filename = '10_custom_f_reward_train'
plot(path, filename)
read_params(path, filename)
###Output
_____no_output_____
###Markdown
Test
###Code
filename = '5_test'
plot(path, filename)
#read_params(path, filename)
filename = '6_test'
plot(path, filename)
#read_params(path, filename)
filename = '7_test'
plot(path, filename)
#read_params(path, filename)
###Output
_____no_output_____
###Markdown
Analysis of Algorithms

[Click here to run this chapter on Colab](https://colab.research.google.com/github/AllenDowney/DSIRP/blob/main/notebooks/analysis.ipynb)

**Analysis of algorithms** is a branch of computer science that studies the performance of algorithms, especially their run time and space requirements. See .

The practical goal of algorithm analysis is to predict the performance of different algorithms in order to guide design decisions.

During the 2008 United States Presidential Campaign, candidate Barack Obama was asked to perform an impromptu analysis when he visited Google. Chief executive Eric Schmidt jokingly asked him for "the most efficient way to sort a million 32-bit integers." Obama had apparently been tipped off, because he quickly replied, "I think the bubble sort would be the wrong way to go." See .

This is true: bubble sort is conceptually simple but slow for large datasets. The answer Schmidt was probably looking for is "radix sort" ().

But if you get a question like this in an interview, I think a better answer is, "The fastest way to sort a million integers is to use whatever sort function is provided by the language I'm using. Its performance is good enough for the vast majority of applications, but if it turned out that my application was too slow, I would use a profiler to see where the time was being spent. If it looked like a faster sort algorithm would have a significant effect on performance, then I would look around for a good implementation of radix sort."

The goal of algorithm analysis is to make meaningful comparisons between algorithms, but there are some problems:

- The relative performance of the algorithms might depend on characteristics of the hardware, so one algorithm might be faster on Machine A, another on Machine B. The usual solution to this problem is to specify a **machine model** and analyze the number of steps, or operations, an algorithm requires under a given model.
- Relative performance might depend on the details of the dataset. For example, some sorting algorithms run faster if the data are already partially sorted; other algorithms run slower in this case. A common way to avoid this problem is to analyze the **worst case** scenario. It is sometimes useful to analyze average case performance, but that's usually harder, and it might not be obvious what set of cases to average over.
- Relative performance also depends on the size of the problem. A sorting algorithm that is fast for small lists might be slow for long lists. The usual solution to this problem is to express run time (or number of operations) as a function of problem size, and group functions into categories depending on how quickly they grow as problem size increases.

The good thing about this kind of comparison is that it lends itself to simple classification of algorithms. For example, if I know that the run time of Algorithm A tends to be proportional to the size of the input, $n$, and Algorithm B tends to be proportional to $n^2$, then I expect A to be faster than B, at least for large values of $n$.

This kind of analysis comes with some caveats, but we'll get to that later.

Order of growth

Suppose you have analyzed two algorithms and expressed their run times in terms of the size of the input: Algorithm A takes $100n+1$ steps to solve a problem with size $n$; Algorithm B takes $n^2 + n + 1$ steps.

The following table shows the run time of these algorithms for different problem sizes:
###Code
import numpy as np
import pandas as pd
n = np.array([10, 100, 1000, 10000])
table = pd.DataFrame(index=n)
table['Algorithm A'] = 100 * n + 1
table['Algorithm B'] = n**2 + n + 1
table['Ratio (B/A)'] = table['Algorithm B'] / table['Algorithm A']
table
###Output
_____no_output_____
###Markdown
At $n=10$, Algorithm A looks pretty bad; it takes almost 10 times longer than Algorithm B. But for $n=100$ they are about the same, and for larger values A is much better.

The fundamental reason is that for large values of $n$, any function that contains an $n^2$ term will grow faster than a function whose leading term is $n$. The **leading term** is the term with the highest exponent.

For Algorithm A, the leading term has a large coefficient, 100, which is why B does better than A for small $n$. But regardless of the coefficients, there will always be some value of $n$ where $a n^2 > b n$, for any values of $a$ and $b$.

The same argument applies to the non-leading terms. Suppose the run time of Algorithm C is $n+1000000$; it would still be better than Algorithm B for sufficiently large $n$.
###Code
import numpy as np
import pandas as pd
n = np.array([10, 100, 1000, 10000])
table = pd.DataFrame(index=n)
table['Algorithm C'] = n + 1000000
table['Algorithm B'] = n**2 + n + 1
table['Ratio (C/B)'] = table['Algorithm B'] / table['Algorithm C']
table
###Output
_____no_output_____
###Markdown
In general, we expect an algorithm with a smaller leading term to be a better algorithm for large problems, but for smaller problems, there may be a **crossover point** where another algorithm is better. The following figure shows the run times (in arbitrary units) for the three algorithms over a range of problem sizes. For small problem sizes, Algorithm B is the fastest, but for large problem sizes, it is the worst. In the figure, we can see where the crossover points are.
###Code
import matplotlib.pyplot as plt
ns = np.arange(10, 1500)
ys = 100 * ns + 1
plt.plot(ns, ys, label='Algorithm A')
ys = ns**2 + ns + 1
plt.plot(ns, ys, label='Algorithm B')
ys = ns + 1_000_000
plt.plot(ns, ys, label='Algorithm C')
plt.yscale('log')
plt.xlabel('Problem size (n)')
plt.ylabel('Run time')
plt.legend();
###Output
_____no_output_____
###Markdown
The location of these crossover points depends on the details of the algorithms, the inputs, and the hardware, so it is usually ignored for purposes of algorithmic analysis. But that doesn't mean you can forget about it.

Big O notation

If two algorithms have the same leading order term, it is hard to say which is better; again, the answer depends on the details. So for algorithmic analysis, functions with the same leading term are considered equivalent, even if they have different coefficients.

An **order of growth** is a set of functions whose growth behavior is considered equivalent. For example, $2n$, $100n$ and $n+1$ belong to the same order of growth, which is written $O(n)$ in **Big-O notation** and often called **linear** because every function in the set grows linearly with $n$.

All functions with the leading term $n^2$ belong to $O(n^2)$; they are called **quadratic**.

The following table shows some of the orders of growth that appear most commonly in algorithmic analysis, in increasing order of badness.

| Order of growth | Name                      |
|-----------------|---------------------------|
| $O(1)$          | constant                  |
| $O(\log_b n)$   | logarithmic (for any $b$) |
| $O(n)$          | linear                    |
| $O(n \log_b n)$ | linearithmic              |
| $O(n^2)$        | quadratic                 |
| $O(n^3)$        | cubic                     |
| $O(c^n)$        | exponential (for any $c$) |

For the logarithmic terms, the base of the logarithm doesn't matter; changing bases is the equivalent of multiplying by a constant (since $\log_a n = \log_b n / \log_b a$), which doesn't change the order of growth. Similarly, all exponential functions belong to the same order of growth regardless of the base of the exponent. Exponential functions grow very quickly, so exponential algorithms are only useful for small problems.

Exercise

Read the Wikipedia page on Big-O notation at <https://en.wikipedia.org/wiki/Big_O_notation> and answer the following questions:

1. What is the order of growth of $n^3 + n^2$? What about $1000000 n^3 + n^2$? What about $n^3 + 1000000 n^2$?
2. What is the order of growth of $(n^2 + n) \cdot (n + 1)$? Before you start multiplying, remember that you only need the leading term.
3. If $f$ is in $O(g)$, for some unspecified function $g$, what can we say about $af+b$, where $a$ and $b$ are constants?
4. If $f_1$ and $f_2$ are in $O(g)$, what can we say about $f_1 + f_2$?
5. If $f_1$ is in $O(g)$ and $f_2$ is in $O(h)$, what can we say about $f_1 + f_2$?
6. If $f_1$ is in $O(g)$ and $f_2$ is in $O(h)$, what can we say about $f_1 \cdot f_2$?

Programmers who care about performance often find this kind of analysis hard to swallow. They have a point: sometimes the coefficients and the non-leading terms make a real difference. Sometimes the details of the hardware, the programming language, and the characteristics of the input make a big difference. And for small problems, order of growth is irrelevant.

But if you keep those caveats in mind, algorithmic analysis is a useful tool. At least for large problems, the "better" algorithm is usually better, and sometimes it is *much* better. The difference between two algorithms with the same order of growth is usually a constant factor, but the difference between a good algorithm and a bad algorithm is unbounded!

Example: Adding the elements of a list

In Python, most arithmetic operations are constant time; multiplication usually takes longer than addition and subtraction, and division takes even longer, but these run times don't depend on the magnitude of the operands.
Very large integers are an exception; in that case the run time increases with the number of digits.

A `for` loop that iterates a list is linear, as long as all of the operations in the body of the loop are constant time. For example, adding up the elements of a list is linear:
###Code
def compute_sum(t):
total = 0
for x in t:
total += x
return total
t = range(10)
compute_sum(t)
###Output
_____no_output_____
###Markdown
The built-in function `sum` is also linear because it does the same thing, but it tends to be faster because it is a more efficient implementation; in the language of algorithmic analysis, it has a smaller leading coefficient.
###Code
%timeit compute_sum(t)
%timeit sum(t)
###Output
_____no_output_____
###Markdown
Image Analysis with Python - Tutorial Pipeline

adapted from https://git.embl.de/grp-bio-it/image-analysis-with-python/tree/master/session-3to5

Importing Modules & Packages

Let's start by importing the package NumPy, which enables the manipulation of numerical arrays:
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Recall that, once imported, we can use functions/modules from the package, for example to create an array:
###Code
a = np.array([1, 2, 3])
print(a)
print(type(a))
###Output
_____no_output_____
###Markdown
Note that the package is imported under a variable name (here `np`). You can freely choose this name yourself. For example, it would be just as valid (but not as convenient) to write:

```python
import numpy as lovelyArrayTool
a = lovelyArrayTool.array([1,2,3])
```

Exercise

Using the import command as above, follow the instructions in the comments below to import two additional modules that we will be using frequently in this pipeline.
###Code
# The plotting module matplotlib.pyplot as plt
### YOUR CODE HERE!
# The image processing module scipy.ndimage as ndi
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Side Note for Jupyter Notebook Users

You can configure how the figures made by matplotlib are displayed. The most common options are the following:

- **inline**: displays as static figure in code cell output
- **notebook**: displays as interactive figure in code cell output
- **qt**: displays as interactive figure in a separate window

Feel free to test them out on one of the figures you will generate later on in the tutorial. The code cell below shows how to set the different options. Note that combinations of different options in the same notebook do not always work well, so it is best to decide for one and use it throughout. You may need to restart the kernel (`Kernel > Restart`) when you change from one option to another.
###Code
# Set matplotlib backend
%matplotlib inline
#%matplotlib notebook
#%matplotlib qt
###Output
_____no_output_____
###Markdown
Loading & Handling Image Data

Background

Images are essentially just numbers (representing intensity) in an ordered grid of pixels. Image processing simply means carrying out mathematical operations on these numbers.

The ideal object for storing and manipulating ordered grids of numbers is the **array**. Many mathematical operations are well defined on arrays and can be computed quickly by vector-based computation.

Arrays can have any number of dimensions (or "axes"). For example, a 2D array could represent the x and y axis of a grayscale image (xy), a 3D array could contain a z-stack (zyx), a 4D array could also have multiple channels for each image (czyx) and a 5D array could have time on top of that (tczyx).

Exercise

We will now proceed to load one of the example images and verify that we get what we expect.

Note: Before starting, it always makes sense to have a quick look at the data in Fiji/ImageJ so you know what you are working with!

Follow the instructions in the comments below.
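To make these axis conventions concrete, here is a minimal sketch; the array sizes are made up purely for illustration and are not the dimensions of the example images:

```python
import numpy as np

# Hypothetical array shapes, matching the axis labels used above.
img_2d = np.zeros((512, 512))             # xy    - single grayscale image
img_3d = np.zeros((20, 512, 512))         # zyx   - z-stack
img_4d = np.zeros((2, 20, 512, 512))      # czyx  - multi-channel z-stack
img_5d = np.zeros((10, 2, 20, 512, 512))  # tczyx - multi-channel time-lapse z-stack

print(img_5d.ndim, img_5d.shape)  # 5 (10, 2, 20, 512, 512)
```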
###Code
# (i) specify the file path to a suitable test image, using pathlib's `Path`
from pathlib import Path
dir_path = Path("/path/to/data/folder")
file_path = dir_path / "<example image>.tif"
print(file_path)
# (ii) Load the image
# Import the function 'imread' from the module 'skimage.io'.
# (Note: If this gives you an error, please refer to the note below!)
### YOUR CODE HERE!
# Load one of your images and store it in a variable.
img = ... ### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
----

*Important note for those who get an error when trying to import `imread` from `skimage.io`:*

Some users have been experiencing problems with this module, even though the rest of skimage is installed correctly (running `import skimage` does not give an error). This may have something to do with operating system preferences. The easiest solution in this case is to install the module `tifffile` (with three `f`s) and use the function `imread` from that module (it is identical to the `imread` function of `skimage.io` when reading `tif` files). The `tifffile` module does not come with the Anaconda distribution, so it's likely that you don't have it installed. To install it, save and exit Jupyter notebook, then go to a terminal and type `conda install -c conda-forge tifffile`. After the installation is complete, restart Jupyter notebook, come back here and import `imread` from `tifffile`. This should now hopefully work.

----
###Code
# (iii) Check that everything is in order
# Check that 'img' is a variable of type 'ndarray' - use Python's built-in function 'type'.
print("Loaded array is of type:", type(img))
# Print the shape of the array using the numpy-function 'shape'.
# Make sure you understand the output!
print("Loaded array has shape:", img.shape)
# Check the datatype of the individual numbers in the array. You can use the array attribute 'dtype' to do so.
# Make sure you understand the output!
print("Loaded values are of type:", img.dtype)
# (iv) Look at the image to confirm that everything worked as intended
# To plot the array as an image, use pyplot's functions 'plt.imshow' followed by 'plt.show'.
# Check the documentation for 'plt.imshow' and note the parameters that can be specified, such as colormap (cmap)
# and interpolation. Since you are working with scientific data, interpolation is unwelcome, so you should set it
# to "none". The most common cmap for grayscale images is naturally "gray".
# You may also want to adjust the size of the figure. You can do this by preparing the figure canvas with
# the function 'plt.figure' before calling 'plt.imshow'. The canvas size is adjusted using the keyword argument
# 'figsize' when calling 'plt.figure'.
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Preprocessing

Background

The goal of image preprocessing is to prepare or optimize the images to make further analysis easier. Usually, this boils down to increasing the signal-to-noise ratio by removing noise and background and by enhancing structures of interest.

The specific preprocessing steps used in a pipeline depend on the type of sample, the microscopy technique used, the image quality, and the desired downstream analysis. The most common operations include:

- Deconvolution
  - Image reconstruction based on information about the PSF of the microscope
  - These days deconvolution is often included with microscope software
  - *Our example images are not deconvolved, but will do just fine regardless*
- Conversion to 8-bit images to save memory / computational time
  - *Our example images are already 8-bit*
- Cropping of images to an interesting region
  - *The field of view in our example images is fine as it is*
- Smoothing of technical noise
  - This is a very common step and usually helps to improve almost any type of downstream analysis
  - Commonly used filters are the `Gaussian filter` and the `median filter`
  - *Here we will be using a Gaussian filter.*
- Corrections of technical artifacts
  - Common examples are uneven illumination and multi-channel bleed-through
- Background subtraction
  - There are various ways of subtracting background signal from an image
  - Two different types are commonly distinguished:
    - `uniform background subtraction` treats all regions of the image the same
    - `adaptive or local background subtraction` automatically accounts for differences between regions of the image

Gaussian Smoothing

A Gaussian filter smoothens an image by convolving it with a Gaussian-shaped kernel. In the case of a 2D image, the Gaussian kernel is also 2D.

How much the image is smoothed by a Gaussian kernel is determined by the standard deviation of the Gaussian distribution, usually referred to as **sigma** ($\sigma$). A higher $\sigma$ means a broader distribution and thus more smoothing.

**How to choose the correct value of $\sigma$?**

This depends a lot on your images, in particular on the pixel size. In general, the chosen $\sigma$ should be large enough to blur out noise but small enough so the "structures of interest" do not get blurred too much. Usually, the best value for $\sigma$ is simply found by trying out some different options and looking at the result.

Exercise

Perform Gaussian smoothing and visualize the result.

Follow the instructions in the comments below.
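For orientation, a minimal sketch of comparing a few candidate sigmas side by side, assuming `img` holds the image loaded above; the sigma values are arbitrary examples, not the "right" answer:

```python
import matplotlib.pyplot as plt
import scipy.ndimage as ndi

# Try several sigmas and judge the trade-off between noise removal and blurring by eye.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, sigma in zip(axes, [1, 3, 9]):
    ax.imshow(ndi.gaussian_filter(img, sigma=sigma), cmap='gray', interpolation='none')
    ax.set_title(f'sigma = {sigma}')
plt.show()
```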
###Code
# (i) Create a variable for the smoothing factor sigma, which should be an integer value
### YOUR CODE HERE!
# After implementing the Gaussian smoothing function below, you can modify this variable
# to find the ideal value of sigma.
# (ii) Clip the image using `np.ndarray.clip` (`img.clip(...)`)
# hint: `np.percentile` might come in handy
img_clipped = ... ### YOUR CODE HERE!
# visualize the clipped image using 'plt.imshow'
### YOUR CODE HERE!
# (iii) Perform the smoothing on the clipped image
# To do so, use the Gaussian filter function 'ndi.filters.gaussian_filter' from the
# image processing module 'scipy.ndimage', which was imported at the start of the tutorial.
# Check out the documentation of scipy to see how to use this function.
img_smooth = ... ### YOUR CODE HERE!
# (iv) Visualize the result using 'plt.imshow'
# Compare with the original image visualized above.
# Does the output make sense? Is this what you expected?
# Can you optimize sigma such that the image looks smooth without blurring the membranes too much?
### YOUR CODE HERE!
# To have a closer look at a specific region of the image, crop that region out and show it in a
# separate plot. Remember that you can crop arrays by "indexing" or "slicing" them similar to lists.
# Use such "zoomed-in" views throughout this tutorial to take a closer look at your intermediate
# results when necessary.
### YOUR CODE HERE!
# (v) BONUS: Show the raw and smoothed images side by side using 'plt.subplots'
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Manual Thresholding & Threshold Detection

Background

The easiest way to distinguish foreground objects (here: membranes) from the image background is to threshold the image, meaning all pixels with an intensity above a certain threshold are accepted as foreground, all others are set as background.

To find the best threshold for a given image, one option is to simply try out different thresholds manually. Alternatively, one of many algorithms for automated 'threshold detection' can be used. These algorithms use information about the image (such as the histogram) to automatically find a suitable threshold value, often under the assumption that the background and foreground pixels in an image belong to two clearly distinct populations in terms of their intensity. There are many different algorithms for threshold detection and it is often hard to predict which one will produce the nicest and most robust result for a particular dataset. It therefore makes sense to try out a bunch of different options.

For this pipeline, we will ultimately use a more advanced thresholding approach, which also accounts (to some extent) for variations in signal across the field of view: adaptive thresholding. But first, let's experiment a bit with threshold detection.

Exercise

Try out manual thresholding and automated threshold detection.

Follow the instructions in the comments below.
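As a small sketch of what automated threshold detection looks like in code, here is Otsu's method applied to the smoothed image; this assumes `img_smooth` is the result of the Gaussian smoothing step above:

```python
from skimage.filters import threshold_otsu

thresh = threshold_otsu(img_smooth)  # a single global threshold value
mem = img_smooth > thresh            # boolean foreground (membrane) mask
print("Otsu threshold:", thresh)
```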
###Code
# (i) Create a variable for a manually set threshold, which should be an integer
# This can be changed later to find a suitable value.
### YOUR CODE HERE!
# (ii) Perform thresholding on the smoothed image
# Remember that you can use relational (Boolean) expressions such as 'smaller' (<), 'equal' (==)
# or 'greater or equal' (>=) with numpy arrays - and you can directly assign the result to a new
# variable.
### YOUR CODE HERE!
# Check the dtype of your thresholded image
# You should see that the dtype is 'np.bool', which stands for 'Boolean' and means the array
# is now simply filled with 'True' and 'False', where 'True' is the foreground (the regions
# above the threshold) and 'False' is the background.
### YOUR CODE HERE!
# (iii) Visualize the result
### YOUR CODE HERE!
# (iv) Try out different thresholds to find the best one
# If you are using jupyter notebook, you can adapt the code below to
# interactively change the threshold and look for the best one. These
# kinds of interactive functions are called 'widgets' and are very
# useful in exploratory data analysis to create greatly simplified
# 'User Interfaces' (UIs) on the fly.
# As a BONUS exercise, try to understand or look up how the widget works
# and play around with it a bit!
# (Note: If this just displays a static image without a slider to adjust
# the threshold or if it displays a text warning about activating
# the 'widgetsnbextension', check out the note below!)
# Prepare widget
from ipywidgets import interact
@interact(thresh=(10,250,10))
def select_threshold(thresh=100):
# Thresholding
### ADAPT THIS: Change 'img_smooth' into the variable you stored the smoothed image in!
mem = img_smooth > thresh
# Visualization
plt.figure(figsize=(7,7))
plt.imshow(mem, interpolation='none', cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
----

*Important note for those who get a static image (no slider) or a text warning:*

For some users, it is necessary to specifically activate the widgets plugin for Jupyter notebook. To do so, save and exit Jupyter notebook, then go to a terminal and write `jupyter nbextension enable --py --sys-prefix widgetsnbextension`. After this, you should be able to restart Jupyter notebook and the widget should display correctly. If it still doesn't work, you may instead have to type `jupyter nbextension enable --py widgetsnbextension` in the terminal. However, note that this implies that your installation of Conda/Jupyter is not optimally configured (see [this GitHub issue](https://github.com/jupyter-widgets/ipywidgets/issues/541) for more information, although this is not something you necessarily need to worry about in the context of this course).

----
###Code
# (v) Perfom automated threshold detection with Otsu's method
# The scikit-image module 'skimage.filters.thresholding' provides
# several threshold detection algorithms. The most popular one
# among them is Otsu's method. Using what you've learned so far,
# import the 'threshold_otsu' function, use it to automatically
# determine a threshold for the smoothed image, apply the threshold,
# and visualize the result.
### YOUR CODE HERE!
# (vi) BONUS: Did you notice the 'try_all_threshold' function?
# That's convenient! Use it to automatically test the threshold detection
# functions in 'skimage.filters.thresholding'. Don't forget to adjust the
# 'figsize' parameter so the resulting images are clearly visible.
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Adaptive Thresholding

Background

Simply applying a fixed intensity threshold does not always produce a foreground mask of sufficiently high quality, since background and foreground intensities often vary across the image. In our example image, for instance, the intensity drops at the image boundaries - a problem that cannot be resolved just by changing the threshold value.

One way of addressing this issue is to use an *adaptive thresholding* algorithm, which adjusts the threshold locally in different regions of the image to account for varying intensities.

Although `scikit-image` provides a function for adaptive thresholding (called `threshold_local`), we will here implement our own version, which is slightly different and will hopefully make the concept of adaptive thresholding very clear.

Our approach to adaptive thresholding works in two steps:

1. Generation of a "background image". This image should - across the entire image - always have higher intensities than the local background but lower intensities than the local foreground. This can be achieved by strong blurring/smoothing of the image.
2. Thresholding of the original image with the background. Instead of thresholding with a single value, every pixel in the image is thresholded with the corresponding pixel of the "background image".

Exercise

Implement the two steps of the adaptive background subtraction:

1. Use a strong "mean filter" (aka "uniform filter") to create the background image. This simply assigns each pixel the average value of its local neighborhood. Just like the Gaussian blur, this can be done by convolution, but this time using a "uniform kernel", in which every pixel of the neighborhood carries the same weight. To define which pixels should be considered as the local neighborhood of a given pixel, a `structuring element` (`SE`) is used. This is a small binary image where all pixels set to `1` will be considered as part of the neighborhood and all pixels set to `0` will not be considered. Here, we use a disc-shaped `SE`, as this reduces artifacts compared to a square `SE`. *Side note:* A strong Gaussian blur would also work to create the background mask. For the Gaussian blur, the analogy to the `SE` is the `sigma` value, which in a way also determines the size of the local neighborhood.
2. Use the background image for thresholding. In practical terms, this works in exactly the same way as thresholding with a single value, since numpy arrays will automatically perform element-wise (pixel-by-pixel) comparisons when compared to other arrays of the same shape by a relational (Boolean) expression.

Follow the instructions in the comments below.
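A rough illustrative sketch of the two steps, not a drop-in solution: it assumes `img_smooth` is an 8-bit smoothed image, and the SE diameter `i` is a placeholder value you would tune yourself:

```python
import numpy as np
from skimage.filters import rank

# Step 1: estimate the local background with a strong mean filter over a disc-shaped SE.
i = 31  # hypothetical SE diameter in pixels; needs tuning for your images
struct = ((np.mgrid[:i, :i][0] - i // 2) ** 2
          + (np.mgrid[:i, :i][1] - i // 2) ** 2) <= (i // 2) ** 2
background = rank.mean(img_smooth.astype(np.uint8), struct.astype(np.uint8))

# Step 2: threshold each pixel against its own local background value.
mem = img_smooth > background
```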
###Code
# Step 1
# ------
# (i) Create a disk-shaped structuring element and asign it to a new variable.
# Structuring elements are small binary images that indicate which pixels
# should be considered as the 'neighborhood' of the central pixel.
#
# An example of a small disk-shaped SE would be this:
# 0 0 1 0 0
# 0 1 1 1 0
# 1 1 1 1 1
# 0 1 1 1 0
# 0 0 1 0 0
#
# The expression below creates such structuring elements.
# It is an elegant but complicated piece of code and at the moment it is not
# necessary for you to understand it in detail. Use it to create structuring
# elements of different sizes (by changing 'i') and find a way to visualize
# the result (remember that the SE is just a small 'image').
#
# Try to answer the following questions:
# - Is the resulting SE really circular?
# - Could certain values of 'i' cause problems? If so, why?
# - What value of 'i' should be used for the SE?
# Note that, similar to the sigma in Gaussian smoothing, the size of the SE
# is first estimated based on the images and by thinking about what would
# make sense. Later it can be optimized by trial and error.
# Create SE
i = ???
struct = (np.mgrid[:i,:i][0] - np.floor(i/2))**2 + (np.mgrid[:i,:i][1] - np.floor(i/2))**2 <= np.floor(i/2)**2
# Visualize the result
### YOUR CODE HERE!
# (ii) Create the background
# Run a mean filter over the image using the disc SE and assign the output to a new variable.
# Use the function 'skimage.filters.rank.mean'.
### YOUR CODE HERE!
# (iii) Visualize the resulting background image. Does what you get make sense?
### YOUR CODE HERE!
# Step 2
# ------
# (iv) Threshold the Gaussian-smoothed original image against the background image created in step 1
# using a relational expression
### YOUR CODE HERE!
# (v) Visualize and understand the output.
### YOUR CODE HERE!
# What do you observe?
# Are you happy with this result as a membrane segmentation?
# Adapt the size of the circular SE to optimize the result!
###Output
_____no_output_____
###Markdown
Improving Masks with Binary Morphology

Background

Morphological operations such as `erosion`, `dilation`, `closing` and `opening` are common tools used to improve masks after they are generated by thresholding. They can be used to fill small holes, remove noise, increase or decrease the size of an object, or smoothen mask outlines.

Most morphological operations are once again simple kernel functions that are applied at each pixel of the image based on their neighborhood as defined by a `structuring element` (`SE`). For example, `dilation` simply assigns to the central pixel the maximum pixel value within the neighborhood; it is a maximum filter. Conversely, `erosion` is a minimum filter. Additional options emerge from combining the two: `morphological closing`, for example, is a `dilation` followed by an `erosion`. This is used to fill in gaps and holes or to smoothen mask outlines without significantly changing the mask's area. Finally, there are also some more complicated morphological operations, such as `hole filling`.

Exercise

Improve the membrane segmentation from above with morphological operations.

Specifically, use `binary hole filling` to get rid of the speckles of foreground pixels that litter the insides of the cells. Furthermore, try other types of morphological filtering to see how they change the image and to see if you can improve the membrane mask even more, e.g. by filling in gaps.

Follow the instructions in the comments below. Visualize all intermediate results of your work and remember to "zoom in" to get a closer look by slicing out and then plotting a subsection of the image array.
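A minimal sketch of the kind of operations meant here, assuming `mem` is the boolean membrane mask from the adaptive thresholding step (whether and where you need the inversion `~` depends on what your mask labels as foreground):

```python
import numpy as np
import scipy.ndimage as ndi

# Fill speckles inside the cells: holes are defined on the inverted mask,
# so invert, fill the holes, then invert back.
mem_filled = ~ndi.binary_fill_holes(~mem)

# Close small gaps in the membrane outlines using a 3x3 neighborhood.
mem_closed = ndi.binary_closing(mem_filled, structure=np.ones((3, 3), dtype=bool))
```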
###Code
# (i) Get rid of speckles using binary hole filling
# Use the function 'ndi.binary_fill_holes' for this. Be sure to check the docs to
# understand exactly what it does. For this to work as intended, you will have to
# invert the mask, which you can do using the function `np.logical_not` or the
# corresponding operator '~'. Again, be sure to understand why this has to be done
# and don't forget to revert the result back.
### YOUR CODE HERE!
# (ii) Try out other morphological operations to further improve the membrane mask
# The various operations are available in the ndimage module, for example 'ndi.binary_closing'.
# Play around and see how the different functions affect the mask. Can you optimize the mask,
# for example by closing gaps?
# Note that the default SE for these functions is a square. Feel free to create another disc-
# shaped SE and see how that changes the outcome.
# BONUS: If you pay close attention, you will notice that some of these operations introduce
# artefacts at the image boundaries. Can you come up with a way of solving this? (Hint: 'np.pad')
### YOUR CODE HERE!
# (iii) Visualize the final result
### YOUR CODE HERE
# At this point you should have a pretty neat membrane mask.
# If you are not satisfied with the quality your membrane segmentation, you should go back
# and fine-tune the size of the SE in the adaptive thresholding section and also optimize
# the morphological cleaning operations.
# Note that the quality of the membrane segmentation will have a significant impact on the
# cell segmentation we will perform next.
###Output
_____no_output_____
###Markdown
Connected Components Labeling

Background

Based on the membrane segmentation, we can get a preliminary segmentation of the cells in the image by considering each background region surrounded by membranes as a cell. This can already be good enough for many simple measurements.

The only thing we still need to do in order to get there is to label each cell individually. Only if each separate cell has a unique number (an `ID`) assigned can values such as the mean intensity be measured and analyzed at the single-cell level.

The approach used to achieve this is called `connected components labeling`. It gives every connected group of foreground pixels a unique `ID` number.

Exercise

Use your membrane segmentation for connected components labeling.

Follow the instructions in the comments below.
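As a minimal sketch, assuming `mem_final` is your cleaned membrane mask (the inversion is needed because `ndi.label` labels foreground pixels, and here the cells are the background of the membrane mask):

```python
import scipy.ndimage as ndi

# ndi.label returns the labeled image and the number of labels found.
cell_labels, num_cells = ndi.label(~mem_final)
print("Number of labeled regions:", num_cells)
```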
###Code
# (i) Label connected components
# Use the function 'ndi.label' from the 'ndimage' module.
# Note that this function labels foreground pixels (1s, not 0s), so you may need
# to invert your membrane mask just as for hole filling above.
# Also, note that 'ndi.label' returns another result in addition to the labeled
# image. Read up on this in the function's documention and make sure you don't
# mix up the two outputs!
### YOUR CODE HERE!
# (ii) Visualize the output
# Here, it is no longer ideal to use a 'gray' colormap, since we want to visualize that each
# cell has a unique ID. Play around with different colormaps (check the docs to see what
# types of colormaps are available) and choose one that you are happy with.
### YOUR CODE HERE!
# Take a close look at the picture and note mistakes in the segmentation. Depending on the
# quality of your membrane mask, there will most likely be some cells that are 'fused', meaning
# two or more cells are labeled as the same cell; this is called "under-segmentation".
# We will resolve this issue in the next step. Note that our downstream pipeline does not involve
# any steps to resolve "over-segmentation" (i.e. a cell being wrongly split into multiple labeled
# areas), so you should tune your membrane mask such that this is not a common problem.
###Output
_____no_output_____
###Markdown
Segmentation by Seeding & Expansion

Background

The segmentation we achieved by membrane masking and connected components labeling is a good start. We could for example use it to measure the fluorescence intensity in each cell's cytoplasm. However, we cannot use it to measure intensities at the membrane of the cells, nor can we use it to accurately measure features like cell shape or size.

To improve this (and to resolve cases of under-segmentation), we can use a "seeding & expansion" strategy. Expansion algorithms such as the `watershed` start from a small `seed` and "grow outward" until they touch the boundaries of neighboring cells, which are themselves growing outward from neighboring seeds. Since the "growth rate" at the edge of the growing areas is dependent on image intensity (higher intensity means slower expansion), these expansion methods end up tracing the cells' outlines.

Seeding by Distance Transform

Background

A `seed image` contains a few pixels at the center of each cell labeled by a unique `ID` number and surrounded by zeros. The expansion algorithm will start from these central pixels and grow outward until all zeros are overwritten by an `ID` label. In the case of `watershed` expansion, one can imagine the `seeds` as the sources from which water pours into the cells and starts filling them up.

For multi-channel images that contain a nuclear label, it is common practice to mask the nuclei by thresholding and use an eroded version of the nuclei as seeds for cell segmentation. However, there are good alternative seeding approaches for cases where nuclei are not available or not nicely separable by thresholding.

Here, we will use a `distance transform` for seeding. In a `distance transform`, each pixel in the foreground (here the cells) is assigned a value corresponding to its distance from the closest background pixel (here the membrane segmentation). In other words, we encode within the image how far each pixel of a cell is away from the membrane. The pixels furthest away from the membrane will be at the center of the cells and will have the highest values. Using a function to detect `local maxima`, we will find these high-value peaks and use them as seeds for our segmentation.

One big advantage of this approach is that it will create two separate seeds even if two cells are connected by a hole in the membrane segmentation. Thus, under-segmentation artifacts will be reduced.

Exercise

Find seeds using the distance transform approach. This involves the following three steps:

1. Run the distance transform on your membrane mask.
2. Due to irregularities in the membrane shape, the distance transform may have some smaller local maxima in addition to those at the center of the cells. This will lead to additional seeds, which will lead to over-segmentation. To resolve this problem, smoothen the distance transform using Gaussian smoothing.
3. Find the seeds by detecting local maxima. Optimize the seeding by changing the amount of smoothing done in step 2, aiming to have exactly one seed for each cell (although this may not be perfectly achievable).

Follow the instructions in the comments below.
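A minimal sketch of the three steps, assuming `mem_final` is the cleaned membrane mask; the `sigma` and `min_distance` values are placeholders to tune:

```python
import numpy as np
import scipy.ndimage as ndi
from skimage.feature import peak_local_max

# 1. Distance of every cell pixel to the nearest membrane pixel.
dist = ndi.distance_transform_edt(~mem_final)

# 2. Smooth the distance map so each cell keeps only one clear maximum.
dist_smooth = ndi.gaussian_filter(dist, sigma=5)

# 3. Local maxima as seed coordinates, turned into a labeled seed image.
peak_coords = peak_local_max(dist_smooth, min_distance=10)
seed_mask = np.zeros(dist_smooth.shape, dtype=bool)
seed_mask[tuple(peak_coords.T)] = True
seeds, num_seeds = ndi.label(seed_mask)
```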
###Code
# (i) Run a distance transform on the membrane mask
# Use the function 'ndi.distance_transform_edt'.
# You may need to invert your membrane mask so the distances are computed on
# the cells, not on the membranes.
### YOUR CODE HERE!
# (ii) Visualize the output and understand what you are seeing.
### YOUR CODE HERE!
# (iii) Smoothen the distance transform
# Use 'ndi.filters.gaussian_filter' to do so.
# You will have to optimize your choice of 'sigma' based on the outcome below.
### YOUR CODE HERE!
# (iv) Retrieve the local maxima (the 'peaks') from the distance transform
# Use the function 'peak_local_max' from the module 'skimage.feature'. By default, this function will return the
# indices of the pixels where the local maxima are. However, we instead need a boolean mask of the same shape
# as the original image, where all the local maximum pixels are labeled as `1` and everything else as `0`.
# This can be achieved by setting the keyword argument 'indices' to False.
### YOUR CODE HERE!
# (v) Visualize the output as an overlay on the raw (or smoothed) image
# If you just look at the local maxima image, it will simply look like a bunch of distributed dots.
# To get an idea if the seeds are well-placed, you will need to overlay these dots onto the original image.
# To do this, it is important to first understand a key point about how the 'pyplot' module works:
# every plotting command is slapped on top of the previous plotting commands, until everything is ultimately
# shown when 'plt.show' is called. Hence, you can first plot the raw (or smoothed) input image and then
# plot the seeds on top of it before showing both with 'plt.show'.
# As you can see if you try this, you will not get the desired result because the zero values in seed array
# are painted in black over the image you want in the background. To solve this problem, you need to mask
# these zero values before plotting the seeds. You can do this by creating an appropriately masked array
# using the function 'np.ma.array' with the keyword argument 'mask'.
# Check the docs or Stack Overflow to figure out how to do this.
# BONUS: As an additional improvement for the visualization, use 'ndi.filters.maximum_filter' to dilate the
# seeds a little bit, making them bigger and thus better visible.
### YOUR CODE HERE!
# (vi) Optimize the seeding
# Ideally, there should be exactly one seed for each cell.
# If you are not satisfied with your seeding, go back to the smoothing step above and optimize 'sigma'
# to get rid of additional maxima. You can also try using the keyword argument 'min_distance' in
# 'peak_local_max' to solve cases where there are multiple small seeds at the center of a cell. Note
# that good seeding is essential for a good segmentation with an expansion algorithm. However, no
# segmentation is perfect, so it's okay if a few cells end up being oversegmented.
# (vii) Label the seeds (optional)
# Use connected component labeling to give each cell seed a unique ID number.
### YOUR CODE HERE!
# Visualize the final result (the labeled seeds) as an overlay on the raw (or smoothed) image
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Expansion by Watershed

Background

To achieve a cell segmentation, the `seeds` now need to be expanded outward until they follow the outline of the cell. The most commonly used expansion algorithm is the `watershed`.

Imagine the intensity in the raw/smoothed image as a topographical height profile; high-intensity regions are peaks, low-intensity regions are valleys. In this representation, cells are deep valleys (with the seeds at the center), enclosed by mountains. As the name suggests, the `watershed` algorithm can be understood as the gradual filling of this landscape with water, starting from the seed. As the water level rises, the seed expands - until it finally reaches the 'crest' of the cell membrane 'mountain range'. Here, the water would flow over into the neighboring valley, but since that valley is itself filled up with water from the neighboring cell's seed, the two water surfaces touch and the expansion stops.

Exercise

Expand your seeds by means of a watershed expansion.

Follow the instructions in the comments below.
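A minimal sketch, assuming `img_smooth` and the labeled `seeds` from the previous steps; note that in recent scikit-image versions `watershed` is available from `skimage.segmentation`, while older versions expose it under `skimage.morphology`:

```python
from skimage.segmentation import watershed

# Flood the intensity landscape of the smoothed image, starting from the labeled seeds.
segmentation = watershed(img_smooth, seeds)
```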
###Code
# (i) Perform watershed
# Use the function 'watershed' from the module 'skimage.morphology'.
# Use the labeled cell seeds and the smoothed membrane image as input.
### YOUR CODE HERE!
# (ii) Show the result as transparent overlay over the smoothed input image
# Like the masked overlay of the seeds, this can be achieved by making two calls to 'imshow',
# one for the background image and one for the segmentation. Instead of masking away background,
# this time you simply make the segmentation image semi-transparent by adjusting the keyword
# argument 'alpha' of the 'imshow' function, which specifies opacity.
# Be sure to choose an appropriate colormap that allows you to distinguish the segmented cells
# even if cells with a very similar ID are next to each other (I would recommend 'prism').
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
*A Note on Segmentation Quality*

This concludes the segmentation of the cells in the example image. Depending on the quality you achieved in each step along the way, the final segmentation may be of greater or lesser quality (in terms of over-/under-segmentation errors).

It should be noted that the segmentation will likely *never* be perfect, as there is usually a trade-off between over- and undersegmentation.

This raises an important question: ***When should I stop trying to optimize my segmentation?***

There is no absolute answer to this question but the best answer is probably this: ***When you can use it to address your biological questions!***

*Importantly, this implies that you should already have relatively clear questions in mind when you are working on the segmentation!*
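The next code cell identifies a one-pixel-wide edge mask for every cell, which is needed later on to measure membrane intensities. As a minimal sketch of the erode-and-XOR idea it asks for, assuming `segmentation` is the watershed result:

```python
import numpy as np
import scipy.ndimage as ndi

edges = np.zeros_like(segmentation)
for cell_id in np.unique(segmentation):
    cell_mask = segmentation == cell_id
    eroded = ndi.binary_erosion(cell_mask)
    edge_mask = np.logical_xor(cell_mask, eroded)  # pixels removed by erosion = outline
    edges[edge_mask] = cell_id
```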
###Code
# (i) Create an array of the same size and data type as the segmentation but filled with only zeros
### YOUR CODE HERE!
# (ii) Iterate over the cell IDs
### YOUR CODE HERE!
# (iii) Erode the cell's mask by 1 pixel
# Hint: 'ndi.binary_erosion'
### YOUR CODE HERE!
# (iv) Create the cell edge mask
# Hint: 'np.logical_xor'
### YOUR CODE HERE!
# (v) Add the cell edge mask to the empty array generated above, labeling it with the cell's ID
### YOUR CODE HERE!
# (vi) Visualize the result
# Note: Because the lines are so thin (1pxl wide), they may not be displayed correctly in small figures.
# You can 'zoom in' by showing a sub-region of the image which is then rendered bigger. You can
# also go back to the edge identification code and make the edges multiple pixels wide (but keep
# in mind that this will have an effect on your quantification results!).
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Extracting Quantitative Measurements

Background

The ultimate goal of image segmentation is of course the extraction of quantitative measurements, in this case on a single-cell level. Measures of interest can be based on intensity (in different channels) or on the size and shape of the cells.

To exemplify how different properties of cells can be measured, we will extract the following:

- Cell ID (so all other measurements can be traced back to the cell that was measured)
- Mean intensity of each cell
- Mean intensity at the membrane of each cell
- The cell area, i.e. the number of pixels that make up the cell
- The cell outline length, i.e. the number of pixels that make up the cell edge

*Note: It makes sense to use smoothed/filtered/background-subtracted images for segmentation. When it comes to measurements, however, it's best to get back to the raw data!*

Exercise

Extract the measurements listed above for each cell and collect them in a dictionary.

Note: The ideal data structure for data like this is the `DataFrame` offered by the module `Pandas`. However, for the sake of simplicity, we will here stick with a dictionary of lists.

Follow the instructions in the comments below.
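A minimal sketch of such a measurement loop; the dictionary key names are just examples, and it assumes `img` is the raw image, `segmentation` the labeled cells, and `edges` the labeled cell edges from the previous step:

```python
import numpy as np

results = {"cell_id": [], "int_mean": [], "int_mem_mean": [],
           "cell_area": [], "cell_edge": []}

for cell_id in np.unique(segmentation):
    cell_mask = segmentation == cell_id
    edge_mask = edges == cell_id
    results["cell_id"].append(cell_id)
    results["int_mean"].append(np.mean(img[cell_mask]))      # mean cell intensity
    results["int_mem_mean"].append(np.mean(img[edge_mask]))  # mean membrane intensity
    results["cell_area"].append(np.sum(cell_mask))           # number of pixels in the cell
    results["cell_edge"].append(np.sum(edge_mask))           # outline length in pixels
```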
###Code
# (i) Create a dictionary that contains a key-value pairing for each measurement
# The keys should be strings describing the type of measurement (e.g. 'intensity_mean') and
# the values should be empty lists. These empty lists will be filled with the results of the
# measurements.
### YOUR CODE HERE!
# (ii) Record the measurements for each cell
# Iterate over the segmented cells ('np.unique').
# Inside the loop, create a mask for the current cell and use it to extract the measurements listed above.
# Add them to the appropriate list in the dictionary using the 'append' method.
# Hint: Remember that you can get out all the values within a masked area by indexing the image
# with the mask. For example, 'np.mean(image[cell_mask])' will return the mean of all the
# intensity values of 'image' that are masked by 'cell_mask'!
### YOUR CODE HERE!
# (iii) Print the results and check that they make sense
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Simple Analysis & Visualisation

Background

By extracting quantitative measurements from an image we cross over from 'image analysis' to 'data analysis'. This section briefly explains how to do basic data analysis and plotting, including boxplots, scatterplots and linear fits. It also showcases how to map data back onto the image, creating an "image-based heatmap".

Exercise

Analyze and plot the extracted data in a variety of ways.

Follow the instructions in the comments below.
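As one small sketch of the scatterplot-plus-linear-fit idea, using the hypothetical key names from the measurement sketch above (swap in whatever keys you actually used):

```python
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import linregress

x = np.asarray(results["cell_area"])
y = np.asarray(results["int_mem_mean"])
fit = linregress(x, y)

plt.scatter(x, y, alpha=0.5)
xs = np.array([x.min(), x.max()])
plt.plot(xs, fit.slope * xs + fit.intercept,
         label=f"r={fit.rvalue:.2f}, p={fit.pvalue:.1e}")
plt.xlabel("cell area [pixels]")
plt.ylabel("mean membrane intensity")
plt.legend()
plt.show()
```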
###Code
# (i) Familiarize yourself with the data structure of the results dict and summarize the results
# Recall that dictionaries are unordered; a dataset of interest is accessed through its key.
# In our case, the datasets inside the dict are lists of values, ordered in the same order
# as the cell IDs.
# For each dataset in the results dict, print its name (the key) along with its mean, standard
# deviation, maximum, minimum, and median. The appropriate numpy methods (e.g. 'np.median') work
# with lists just as well as with arrays.
### YOUR CODE HERE!
# (ii) Create a box plot showing the mean cell and mean membrane intensities for both channels.
# Use the function 'plt.boxplot'. Use the 'label' keyword of 'plt.boxplot' to label the x axis with
# the corresponding key names. Feel free to play around with the various options of the boxplot
# function to make your plot look nicer. Remember that you can first call 'plt.figure' to adjust
# settings such as the size of the plot.
### YOUR CODE HERE!
# (iii) Create a scatter plot of cell outline length over cell area
# Use the function 'plt.scatter' for this. Be sure to properly label the
# plot using 'plt.xlabel' and 'plt.ylabel'.
### YOUR CODE HERE!
# BONUS: Do you understand why you are seeing the pattern this produces? Can you
# generate a 'null model' curve that assumes all cells to be circular? What is
# the result? Do you notice something odd about it? What could be the reason for
# this and how could it be fixed?
### YOUR CODE HERE!
# (iv) Perform a linear fit of membrane intensity over cell area
# Use the function 'linregress' from the module 'scipy.stats'. Be sure to read the docs to
# understand the output of this function. Print the output.
### YOUR CODE HERE!
# (v) Think about the result
# Note that the fit seems to return a highly significant p-value but a very low correlation
# coefficient (r-value). Based on prior knowledge, we would not expect a linear correlation of
# this sort to be present in our data.
#
# This should prompt several questions:
# 1) What does this p-value actually mean? Check the docs of 'linregress'!
# 2) Could there be artifacts in our segmentation that bias this analysis?
#
# In general, it's always good to be very careful when doing any kind of data analysis. Make sure you
# understand the functions you are using and always check for possible errors or sources of bias!
# (vi) Overlay the linear fit onto a scatter plot
# Recall that a linear function is defined by `y = slope * x + intercept`.
# To define the line you'd like to plot, you need two values of x (the starting point and
# and the end point of the line). What values of x make sense? Can you get them automatically?
### YOUR CODE HERE!
# When you have the x-values for the starting point and end point, get the corresponding y
# values from the fit through the equation above.
### YOUR CODE HERE!
# Plot the line with 'plt.plot'. Adjust the line's properties so it is well visible.
# Note: Remember that you have to create the scatterplot before plotting the line so that
# the line will be placed on top of the scatterplot.
### YOUR CODE HERE!
# Use 'plt.legend' to add information about the line to the plot.
### YOUR CODE HERE!
# Label the plot and finally show it with 'plt.show'.
### YOUR CODE HERE!
# (vii) Map the cell area back onto the image as a 'heatmap'
# Scale the cell area data to 8bit so that it can be used as pixel intensity values.
# Hint: if the largest cell area should correspond to the value 255 in uint8, then
# the other cell areas correspond to 'cell_area * 255 / largest_cell_area'.
# Hint: To perform an operation on all cell area values at once, convert the list
# of cell areas to a numpy array.
### YOUR CODE HERE!
# Initialize a new image array; all values should be zeros, the shape should be identical
# to the images we worked with before and the dtype should be uint8.
### YOUR CODE HERE!
# Iterate over the segmented cells. In addition to the cell IDs, the for-loop should
# also include a simple counter (starting from 0) with which the area measurement can be
# accessed by indexing.
### YOUR CODE HERE!
# Mask the current cell and assign the cell's (re-scaled) area value to the cell's pixels.
### YOUR CODE HERE!
# Visualize the result as a colored semi-transparent overlay over the raw/smoothed original input image.
# BONUS: See if you can exclude outliers to make the color mapping more informative!
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Writing Output to Files

Background

The final step of the pipeline shows how to write various outputs of the pipeline to files.

Data can be saved to files in a human-readable format such as text files (e.g. to import into Excel), in a format readable for other programs such as tif-images (e.g. to view in Fiji) or in a language-specific file that makes it easy to reload the data into python in the future (e.g. for further analysis).

Exercise

Write the generated data into a variety of different output files.

Follow the instructions in the comments below.
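A minimal sketch of the three flavours of output mentioned above; the file names are placeholders, and `segmentation` / `results` are the hypothetical variable names used in the earlier sketches:

```python
import pickle
import numpy as np
from skimage.io import imsave

# Image readable by Fiji and other programs (cast to an integer type first).
imsave("segmentation.tif", segmentation.astype(np.uint16))

# Fast binary format for reloading into numpy later.
np.save("segmentation.npy", segmentation)

# Generic pickle of the results dictionary.
with open("results.pkl", "wb") as outfile:
    pickle.dump(results, outfile)
```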
###Code
# (i) Write one or more of the images you produced to a tif file
# Use the function 'imsave' from the 'skimage.io' module. Make sure that the array you are
# writing is of integer type. If necessary, you can use the method 'astype' for conversions,
# e.g. 'some_array.astype(np.uint8)' or 'some_array.astype(np.uint16)'. Careful when
# converting a segmentation to uint8; if there are more than 255 cells, the 8bit format
# doesn't have sufficient bit-depth to represent all cell IDs!
#
# You can also try adding the segmentation to the original image, creating an image with
# two channels, one of them being the segmentation.
#
# After writing the file, load it into Fiji and check that everything worked as intended.
### YOUR CODE HERE!
# (ii) Write a figure to a png or pdf
# Recreate the scatter plot from above (with or without the regression line), then save the figure
# as a png using 'plt.savefig'. Alternatively, you can also save it to a pdf, which will create a
# vector graphic that can be imported into programs like Adobe Illustrator.
### YOUR CODE HERE!
# (iii) Save the segmentation as a numpy file
# Numpy files allow fast storage and reloading of numpy arrays. Use the function 'np.save'
# to save the array and reload it using 'np.load'.
### YOUR CODE HERE!
# (iv) Save the result dictionary as a pickle file
# Pickling is a way of generating generic files from almost any python object, which can easily
# be reloaded into python at a later point in time.
# You will need to open an empty file object using 'open' in write-bytes mode ('wb'). It's best to
# do so using the 'with'-statement (context manager) to make sure that the file object will be
# closed automatically when you are done with it.
# Use the function 'pickle.dump' from the 'pickle' module to write the results to the file.
# Hint: Refer to the python documention for input and output to understand how file objects are
# handled in python in general.
### YOUR CODE HERE!
## Note: Pickled files can be re-loaded again as follows:
#with open('my_filename.pkl', 'rb') as infile:
# reloaded = pickle.load(infile)
# (v) Write a tab-separated text file of the results dict
# The most generic way of saving numeric results is a simple text file. It can be imported into
# pretty much any other program.
# To write normal text files, open an empty file object in write mode ('w') using the 'with'-statement.
### YOUR CODE HERE!
# Use the 'file_object.write(string)' method to write strings to the file, one line at a time,
# First, write the header of the data (the result dict keys), separated by tabs ('\t').
# It makes sense to first generate a complete string with all the headers and then write this
# string to the file as one line. Note that you will need to explicitly write 'newline' characters
# ('\n') at the end of the line to switch to the next line.
# Hint: the string method 'join' is very useful here!
### YOUR CODE HERE!
# After writing the headers, iterate over all the cells and write the result data to the file line
# by line, by creating strings similar to the header string.
### YOUR CODE HERE!
# After writing the data, have a look at the output file in a text editor or in a spreadsheet
# program like Excel.
###Output
_____no_output_____
###Markdown
Batch Processing

Background

In practice, we never work with just a single image, so we would like to make it possible to run our analysis pipeline for multiple images and then collect and analyze all the results. This final section of the tutorial shows how to do just that.

Exercise

To run a pipeline multiple times, it needs to be packaged into a function or - even better - a separate module. Jupyter notebook is not well suited for this, so if you're working in a notebook, first extract your code to a `.py` file (see instructions below). If you are not working in a notebook, create a copy of your pipeline; we will modify this copy into a function that can then be called repeatedly for different images.

To export a jupyter notebook as a `.py` file, use `File > Download as > Python (.py)`, then save the file. Open the resulting python script in a text editor or in an IDE like PyCharm. Let's clean the script a bit:

- Remove the line `%matplotlib [inline|notebook|qt]`. It is not valid python code outside of a Jupyter notebook.
- Go through the script and comment out everything related to plotting; when running a pipeline for dozens or hundreds of images, we usually do not want to generate tons of plots. Similarly, it can make sense to remove some print statements if you have many of them.
- Remove the sections `Manual Thresholding` and `Connected Components Labeling`; they are not used in the final segmentation.
- Remove the sections `Simple Analysis and Visualization` and `Writing Output to Files`; we will collect the output for each image when running the pipeline in a loop. That way, everything can be analyzed at once at the end.
  - Note that, even though we skip it here, it is often very useful to store every input file's corresponding outputs in new files. When doing so, the output files should use the name of the input file modified with an additional suffix. For example, the results extracted when analyzing `img_1.tif` might best be stored as `img_1_results.pkl`.
  - You can implement this approach for saving the segmentations and/or the result dicts as a *bonus* exercise!
- Feel free to delete some of the background information to make the script more concise.

Converting the pipeline to a function:

Convert the entire pipeline into a function that accepts a directory and a filename as input, runs everything, and returns the final segmentation and the results dictionary. To do this, you must:

- Add the function definition statement at the beginning of the script (after the imports)
- Replace the 'hard-coded' directory path and filename by variables that are accepted by the function
- Indent all the code
- Add a return statement at the end

Importing the function and running it for multiple input files:

To actually run the pipeline function for multiple input files, we need to do the following:

- Import the pipeline function from the `.py` file
- Iterate over all the filenames in a directory
- For each filename, call the pipeline function
- Collect the returned results

Once you have converted your pipeline into a function as described above, you can import and run it according to the instructions below.
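A minimal sketch of the overall structure being described; the directory path is a placeholder and `run_pipeline` is a stub standing in for your own exported pipeline function:

```python
from os import listdir

def run_pipeline(dirpath, filename):
    # Placeholder for your exported pipeline function: it should run all the
    # processing steps above (without plotting) and return the final
    # segmentation and the results dictionary.
    raise NotImplementedError

dirpath = "my_data_folder"  # placeholder path to the input images
all_results = {}
for filename in listdir(dirpath):
    if filename in ("example_cells_1.tif", "example_cells_2.tif"):
        segmentation, results = run_pipeline(dirpath, filename)
        all_results[filename] = results  # keyed by filename for traceability
```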
###Code
# (i) Test if your pipeline function actually works
# Import your function using the normal python syntax for imports, like this:
# from your_module import your_function
# Run the function and visualize the resulting segmentation. Make sure everything works as intended.
### YOUR CODE HERE!
# (ii) Get all relevant filenames from the input directory
# Use the function 'listdir' from the module 'os' to get a list of all the files
# in a directory. Find a way to filter out only the relevant input files, namely
# "example_cells_1.tif" and "example_cells_2.tif". Of course, one would usually
# do this for many more images, otherwise it's not worth the effort.
# Hint: Loop over the filenames and use if statements to decide which ones to
# keep and which ones to throw away.
### YOUR CODE HERE!
# (iii) Iterate over the input filenames and run the pipeline function
# Be sure to collect the output of the pipeline function in a way that allows
# you to trace it back to the file it came from. You could for example use a
# dictionary with the filenames as keys.
### YOUR CODE HERE!
# (iv) Recreate one of the scatterplots from above but this time with all the cells
# You can color-code the dots to indicate which file they came from. Don't forget to
# add a corresponding legend.
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Summary
###Code
df = outputs.copy()
display(df[[
"test/equality/error_max",
"test/inequality/error_max",
"test/inequality/active_power/error_max",
"test/inequality/reactive_power/error_max",
"test/inequality/voltage_magnitude/error_max",
"test/inequality/forward_rate/error_max",
"test/inequality/backward_rate/error_max",
"test/inequality/voltage_angle_difference/error_max",
]].max())
print(f"""
GNN Cost: {df["test/cost"].mean():0.4f}
IPOPT Cost: {df["acopf/cost"].mean():0.4f}
GNN/IPOPT: {(df["test/cost"]/df["acopf/cost"]).mean():0.4f}
Maximum violation rate: {df["test/inequality/rate"].max():0.4f}
Rate of any violation: {(df["test/inequality/error_max"] > 1e-4).sum() / len(df):0.4f}
""")
###Output
_____no_output_____
###Markdown
Histograms
###Code
from utils import FlowLayout
aspect = 1.618
kwargs = dict(
bins=10,
stat="proportion",
aspect=aspect,
height=3.5/aspect,
)
ylabel = "Count / # of samples"
sns.displot(df, x="test/inequality/error_max", **kwargs)
plt.xlabel("Max inequality error")
plt.ylabel(ylabel)
save("error_max")
sns.displot(df, x="test/inequality/error_mean", **kwargs)
plt.xlabel("Mean inequality error")
plt.ylabel(ylabel)
save("error_mean")
# Cost improvement
df["test/cost/improvement"] = df["test/cost"]/df["acopf/cost"]
df["violation"] = df["test/inequality/rate"] > 1e-8
sns.displot(df[~df["violation"]], x="test/cost/improvement", **kwargs)
plt.xlabel("GNN / IPOPT cost ratio")
plt.ylabel(ylabel)
save("costs")
# map variable names to series names
fmt = "test/inequality/%s/error_max"
hist_dict = {
"equality": ["test/equality/bus_power/error_max"],
"gen": [fmt % "active_power", fmt % "reactive_power"],
"vm": [fmt % "voltage_magnitude"],
"rate": [fmt % "forward_rate", fmt % "backward_rate"],
"vad": [fmt % "voltage_angle_difference"]
}
sns.displot(df["test/equality/bus_power/error_max"], **kwargs)
plt.xlabel("Bus power equality error")
plt.ylabel(ylabel)
save("error_equality")
power_df = df.melt(value_vars=[fmt % "active_power", fmt % "reactive_power"])
sns.displot(power_df, x="value", **kwargs)
plt.xlabel("Generated power error")
plt.ylabel(ylabel)
save("error_gen")
sns.displot(df[fmt % "voltage_magnitude"], **kwargs)
plt.xlabel("Voltage magnitude error")
plt.ylabel(ylabel)
save("error_vm")
flow_df = df.melt(value_vars=[fmt % "forward_rate", fmt % "backward_rate"])
sns.displot(flow_df, x="value", **kwargs)
plt.xlabel("Power rate limit error")
plt.ylabel(ylabel)
save("error_rate")
sns.displot(df[fmt % "voltage_angle_difference"], **kwargs)
plt.xlabel("Voltage angle difference error")
plt.ylabel(ylabel)
save("error_vad")
FlowLayout().all_open()
###Output
_____no_output_____
###Markdown
Visualizing Violations
###Code
sort_term = "test/inequality/error_max"
quantile = 1
s = df[sort_term]
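# pick the sample sitting at the requested quantile of `sort_term`: values are sorted in
# descending order and idxmax returns the first index at or below the quantile threshold
# (quantile = 1 therefore selects the sample with the largest, i.e. worst, violation)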
index = (s.sort_values()[::-1] <= s.quantile(quantile)).idxmax()
print(sort_term, s[index])
print("Idx", index)
df.iloc[index][[
"test/cost",
"test/equality/bus_power/error_max",
"test/inequality/error_max",
"test/inequality/active_power/error_max",
"test/inequality/reactive_power/error_max",
"test/inequality/voltage_magnitude/error_max",
"test/inequality/forward_rate/error_max",
"test/inequality/backward_rate/error_max",
"test/inequality/voltage_angle_difference/error_max",
]]
import opf.plot
dataset = list(dm.test_dataloader())
barrier.double()
load = dataset[index][0].double() @ barrier.powerflow_parameters.load_matrix
with torch.no_grad():
variables, _, _, _ = barrier._step_helper(*barrier(load), True)
plots = opf.plot.plot_constraints(variables, barrier.powerflow_parameters)
# df["acopf/cost/relaxed"] = None
# for i in tqdm.tqdm(range(len(df))):
# forward_error = df.iloc[i]["test/inequality/forward_rate/error_max"]
# backward_error = df.iloc[i]["test/inequality/backward_rate/error_max"]
# increase = 1 + torch.clamp((torch.maximum(variables.Sf.abs(), variables.St.abs()) - barrier.powerflow_parameters.rate_a) / barrier.powerflow_parameters.rate_a, min=0) \
# .squeeze().detach().numpy()
# net = barrier.net_wrapper.net
# original = net.line.copy()
# barrier.net_wrapper.set_load_sparse(variables.Sd.real.squeeze(), variables.Sd.imag.squeeze())
# net.line["max_i_ka"] *= increase[:len(net.line)]
# bus = torch.as_tensor(barrier.net_wrapper.optimal_ac(False)[0]).double().unsqueeze(0)
# _, constraints, cost, _ = barrier._step_helper(
# *barrier.parse_bus(bus),
# variables.Sd,
# project_pandapower=False,
# )
# net.line = original
# df["acopf/cost/relaxed"].iloc[i] = cost.item()
# print("IPOPT Original Cost:", df["acopf/cost"].mean())
# print("IPOPT Relaxed Cost:", df["acopf/cost/relaxed"].mean())
# print("GNN Cost", df["test/cost"].mean())
###Output
_____no_output_____
###Markdown
Compare accuracies
###Code
def get_acc_with_std(preds, targets, n_samples, n=100):
    # bootstrap the ensemble accuracy: average `n_samples` randomly chosen prediction
    # sets (without replacement) and repeat `n` times to get a spread of accuracies
    n_total = preds.shape[0]  # number of available prediction sets (do not shadow the `n` argument)
    accuracies = []
    for i in range(n):
        indices = np.random.choice(list(range(n_total)), size=n_samples, replace=False)
        acc = accuracy_score(targets, preds[indices].mean(axis=0).argmax(axis=1))
        accuracies.append(acc)
    return np.mean(accuracies), np.std(accuracies), np.percentile(accuracies, 5), np.percentile(accuracies, 95)
###Output
_____no_output_____
###Markdown
CIFAR10
###Code
predictions, targets = get_preds('3rxjjlx1') # Ensemble CIFAR10
acc_single = []
acc_ensemble = []
for i in range(predictions.shape[0]):
acc_single.append(accuracy_score(targets, predictions[-i-1].argmax(axis=1)))
acc_ensemble.append(accuracy_score(targets, predictions[-i-1:].mean(axis=0).argmax(axis=1)))
acc_swa = []
for i in range(2,26):
preds, _ = get_preds('8mvqdjc1', f'_k{i}') # SWA CIFAR10
acc_swa.append(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
# SWAG
acc_swag = []
k_swag = [3, 5, 8, 10, 16]
preds, _ = get_preds('2sjbgi3y') # SWAG 256, k=3
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=64)
acc_swag.append(acc)
preds, _ = get_preds('3vkd6gg2') # SWAG 256, k=5 (also 3mgr2rnt, different seed)
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=64)
acc_swag.append(acc)
preds, _ = get_preds('11t47era') # SWAG 256, k=8
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=64)
acc_swag.append(acc)
preds, _ = get_preds('1tc0el95') # SWAG 256, k=10
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=64)
acc_swag.append(acc)
preds, _ = get_preds('wu6eg434') # SWAG 128, k=16
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=64)
acc_swag.append(acc)
# plt.figure(figsize=(10,6))
plt.figure(figsize=(4.5,3))
k = np.arange(1, predictions.shape[0] + 1)
plt.plot(k, acc_single, 'k--', label='single checkpoint')
plt.plot(k[1:26], acc_ensemble[1:26], 'y', label='ensemble')
plt.plot(k[1:25], acc_swa, 'c', label='swa')
plt.plot(k_swag, acc_swag, 'k.:', label='swag N=64')
plt.xlabel('K last checkpoints')
plt.ylabel('accuracy (CIFAR10)')
plt.xticks([0,5,10,15,20,25])
plt.legend()
plt.tight_layout()
preds1, targets = get_preds('3vkd6gg2') # SWAG 256, k=5
preds2, _ = get_preds('3mgr2rnt') # SWAG 256, k=5 (different seed)
preds = np.concatenate([preds1, preds2], axis=0)
del preds1
del preds2
samples = []
accuracies = []
stds = []
los = []
his = []
for i in tqdm([2,4,8,16,32,64,128,256,512]):
acc, std, lo5, hi5 = get_acc_with_std(preds, targets, n_samples=i, n=200)
accuracies.append(acc)
los.append(lo5)
his.append(hi5)
samples.append(i)
plt.figure(figsize=(4,3))
plt.plot(samples, accuracies, 'ko:',label='swag k=5')
# omit last few because sampling without replacement from total of 512
# TODO: ask if this is ok?
plt.plot(samples[:-3], los[:-3], 'k_', label='5th percentile')
plt.plot(samples[:-3], his[:-3], 'k_', label='95th percentile')
plt.xlabel('N samples')
plt.ylabel('accuracy (CIFAR10)')
plt.legend()
plt.xscale('log')
plt.xticks([2,4,8,16,32,64,128,256,512], [2,4,8,16,32,64,128,256,512]);
plt.tight_layout()
###Output
_____no_output_____
###Markdown
CIFAR100
###Code
predictions, targets = get_preds('6rur0243') # Ensemble CIFAR100
acc_single = []
acc_ensemble = []
for i in range(predictions.shape[0]):
acc_single.append(accuracy_score(targets, predictions[-i-1].argmax(axis=1)))
acc_ensemble.append(accuracy_score(targets, predictions[-i-1:].mean(axis=0).argmax(axis=1)))
acc_swa = []
for i in range(2,22):
preds, _ = get_preds('373xmyi4', f'_k{i}') # SWA CIFAR100
acc_swa.append(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
swag_k = [2,3,4,5,6,7,8,9,10,16]
acc_swag64 = []
for i in range(2,5):
preds, _ = get_preds('3l03q84b', f'_k{i}') # SWAG CIFAR100 K = {2,3,4}
acc_swag64.append(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
preds, _ = get_preds('1l1zic13', '_k5') # SWAG CIFAR100 K=5
acc_swag64.append(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
for i in range(6,10):
preds, _ = get_preds('d6790168', f'_k{i}') # SWAG CIFAR100 K= {6 - 9}
acc_swag64.append(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
preds, _ = get_preds('3nmg5cky') # SWAG, K=10 (128)
print(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=64)
acc_swag64.append(acc)
preds, _ = get_preds('36ykfzm1') # SWAG, K=16 (64)
acc_swag64.append(accuracy_score(targets, preds.mean(axis=0).argmax(axis=1)))
acc, std, _ , _ = get_acc_with_std(preds, targets, n_samples=16)
# plt.figure(figsize=(10,6))
plt.figure(figsize=(4.5,3))
k = np.arange(1, predictions.shape[0] + 1)
plt.plot(k, acc_single, 'k--', label='single checkpoint')
plt.plot(k, acc_ensemble, 'y', label='ensemble')
plt.plot(k[1:21], acc_swa, 'c', label='swa')
plt.plot(swag_k, acc_swag64, 'k.:', label='swag N=64')
plt.xlabel('K last checkpoints')
plt.ylabel('accuracy (CIFAR100)')
plt.legend()
plt.xticks([0,5,10,15,20])
preds1, targets = get_preds('f68xa8fk') # SWAG 256, k=5
preds2, _ = get_preds('65r3pymj') # SWAG 256, k=5 (different seed)
preds = np.concatenate([preds1, preds2], axis=0)
del preds1
del preds2
samples = []
accuracies = []
stds = []
los = []
his = []
for i in tqdm([2,4,8,16,32,64,128,256,512]):
acc, std, lo5, hi5 = get_acc_with_std(preds, targets, n_samples=i, n=200)
accuracies.append(acc)
los.append(lo5)
his.append(hi5)
samples.append(i)
plt.figure(figsize=(4,3))
plt.plot(samples, accuracies, 'ko:',label='swag k=5')
# omit last few because sampling without replacement from total of 512
# TODO: ask if this is ok?
plt.plot(samples[:-3], los[:-3], 'k_', label='5th percentile')
plt.plot(samples[:-3], his[:-3], 'k_', label='95th percentile')
plt.xlabel('N samples')
plt.ylabel('accuracy (CIFAR100)')
plt.legend()
plt.xscale('log')
plt.xticks([2,4,8,16,32,64,128,256,512], [2,4,8,16,32,64,128,256,512]);
###Output
_____no_output_____
###Markdown
Plot calibration curves

CIFAR10
###Code
from sklearn.calibration import calibration_curve
import matplotlib
def plot_calibration_curve(probabilities, targets, label=None, line=':.'):
max_probs = probabilities.max(axis=1)
correct = probabilities.argmax(axis=1) == targets
# scale the x axis to get nice spacing
xscale_fn = lambda x: -np.log10(1-x*0.999)
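    # -np.log10(1 - 0.999*x) stretches the high-confidence end (x close to 1),
    # so the densely populated right side of the plot stays readable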
tick_labels = np.array([0.2, 0.7, 0.9, 0.97, 0.99, 0.996, 0.999])
# tick_labels = (1-np.power(10, - np.linspace(0.1,3,10)))/0.999
tick_placement = xscale_fn(tick_labels)
# plt.xticks(tick_placement, np.round(tick_labels,3))
plt.xticks(tick_placement, tick_labels)
# plot reference at 0
plt.plot(xscale_fn(np.array([0, 1])), [0, 0], "k--")
# calibration curve
prob_true, prob_pred = calibration_curve(correct, max_probs, n_bins=20, strategy='quantile')
plt.plot(xscale_fn(prob_pred), prob_pred - prob_true, line, label=label)
plt.ylabel('Confidence - Accuracy')
plt.xlabel('Confidence')
predictions, targets = get_preds('3rxjjlx1') # Ensemble CIFAR10
swa_20, _ = get_preds('8mvqdjc1', f'_k20') # SWA 20
# SWAG
preds1, _ = get_preds('3vkd6gg2') # SWAG 256, k=5
preds2, _ = get_preds('3mgr2rnt') # SWAG 256, k=5 (different seed)
swag_5 = np.concatenate([preds1, preds2], axis=0)
del preds1
del preds2
swag_8, _ = get_preds('11t47era') # SWAG 128, k=8
swag_16, _ = get_preds('wu6eg434') # SWAG 128, k=16
single = predictions[-1]
# ensemble_2 = predictions[-2:].mean(axis=0)
ensemble_5 = predictions[-5:].mean(axis=0)
ensemble_8 = predictions[-8:].mean(axis=0)
ensemble_16 = predictions[-16:].mean(axis=0)
# plt.figure(figsize=(12,12))
plt.figure(figsize=(6,4))
plot_calibration_curve(single, targets, label='SGD', line='k.--')
plot_calibration_curve(swa_20[0], targets, label='SWA k=20', line='c:.')
plot_calibration_curve(ensemble_5, targets, label='ensemble k=5', line='r:.')
# plot_calibration_curve(ensemble_8, targets, label='ensemble k=8', line=':.')
plot_calibration_curve(ensemble_16, targets, label='ensemble k=16', line='g:.')
plot_calibration_curve(swag_5.mean(axis=0), targets, label='swag k=5 (512)', line='rd-')
# plot_calibration_curve(swag_5[:32].mean(axis=0), targets, label='swag k=5 (32)', line='d-.')
# plot_calibration_curve(swag_5[:128].mean(axis=0), targets, label='swag k=5 (128)', line='d-.')
plot_calibration_curve(swag_16.mean(axis=0), targets, label='swag k=16 (128)', line='gd-')
# plot_calibration_curve(swag_8.mean(axis=0), targets, label='swag k=8 (128)', line='d-.')
plt.legend()
plt.xlim((0.25, -np.log10(1-0.9991)))
plt.title('Calibration curve (VGG16 on CIFAR10)')
###Output
_____no_output_____
###Markdown
CIFAR100
###Code
predictions, targets = get_preds('6rur0243') # Ensemble CIFAR100
swa_20, _ = get_preds('373xmyi4', f'_k20') # SWA 20
single = predictions[-1]
ensemble_2 = predictions[-2:].mean(axis=0)
ensemble_5 = predictions[-5:].mean(axis=0)
ensemble_8 = predictions[-8:].mean(axis=0)
ensemble_16 = predictions[-16:].mean(axis=0)
ensemble_20 = predictions[-20:].mean(axis=0)
# SWAG
preds1, _ = get_preds('f68xa8fk') # SWAG 256, k=5
preds2, _ = get_preds('65r3pymj') # SWAG 256, k=5 (different seed)
swag_5 = np.concatenate([preds1, preds2], axis=0)
del preds1
del preds2
swag_8, _ = get_preds('d6790168', f'_k8') # SWAG 64, k=8
swag_16, _ = get_preds('36ykfzm1') # SWAG 128, k=16
# plt.figure(figsize=(12,12))
plt.figure(figsize=(6,4))
plot_calibration_curve(single, targets, label='SGD', line='k.--')
plot_calibration_curve(swa_20[0], targets, label='SWA k=20', line='c:.')
plot_calibration_curve(ensemble_5, targets, label='ensemble k=5', line='r:.')
# plot_calibration_curve(ensemble_8, targets, label='ensemble k=8', line='b:.')
plot_calibration_curve(ensemble_16, targets, label='ensemble k=16', line='g:.')
plot_calibration_curve(swag_5.mean(axis=0), targets, label='swag k=5 (512)', line='rd-')
# plot_calibration_curve(swag_5[:32].mean(axis=0), targets, label='swag k=5 (32)', line='rd-')
# plot_calibration_curve(swag_5[:128].mean(axis=0), targets, label='swag k=5 (128)', line='d-.')
plot_calibration_curve(swag_16.mean(axis=0), targets, label='swag k=16 (128)', line='gd-')
# plot_calibration_curve(swag_8.mean(axis=0), targets, label='swag k=8 (128)', line='bd-')
plt.legend()
plt.xlim((0.25, -np.log10(1-0.9991)))
plt.legend()
plt.title('Calibration curve (VGG16 on CIFAR100)')
###Output
_____no_output_____
###Markdown
Confidence on OOD samples
###Code
import seaborn as sns
def plot_prob_distributions(predictions, ax=None):
for label, probs in predictions:
sns.distplot(probs.max(axis=1), kde=False, norm_hist=True, label=label, bins=np.linspace(0,1, 50), ax=ax)
# plt.legend()
predictions, t100 = get_preds('6rur0243') # Ensemble CIFAR100 on CIFAR100
predictions_svhn, _ = get_preds('zo487s5s') # Ensemble CIFAR100 on SVHN
predictions_n, _ = get_preds('16w8wx06') # Ensemble CIFAR100 on noise
predictions10, t10 = get_preds('3rxjjlx1') # Ensemble CIFAR10 on CIFAR10
predictions10_svhn, _ = get_preds('vyoc1t1f') # Ensemble CIFAR10 on SVHN
predictions10_n, _ = get_preds('3brh34y2') # Ensemble CIFAR10 on noise
swag = get_preds('1v32yl0c')[0].mean(axis=0) # SWAG k=8 (128) CIFAR100 on CIFAR100
p1 = get_preds('2n2a361m')[0] # SWAG k=8 (64 + 64) CIFAR100 on SVHN
p2 = get_preds('4q338z8o')[0]
swag_svhn = np.concatenate([p1,p2], axis=0).mean(axis=0)
swag_n = get_preds('1hxim8dr')[0].mean(axis=0) # SWAG k=8 (128) CIFAR100 on noise
swag10 = get_preds('11t47era')[0].mean(axis=0) # SWAG k=8 (128) CIFAR10 on CIFAR10
swag10_svhn = get_preds('2tk9zcgt')[0].mean(axis=0) # SWAG k=8 (128) CIFAR10 on SVHN
swag10_n = get_preds('yp7nmltk')[0].mean(axis=0) # SWAG k=8 (128) CIFAR10 on noise
# CIFAR100
single = predictions[-1]
ensemble_2 = predictions[-2:].mean(axis=0)
ensemble_8 = predictions[-8:].mean(axis=0)
ensemble_20 = predictions[-20:].mean(axis=0)
single_svhn = predictions_svhn[-1]
ensemble_2_svhn = predictions_svhn[-2:].mean(axis=0)
ensemble_8_svhn = predictions_svhn[-8:].mean(axis=0)
ensemble_20_svhn = predictions_svhn[-20:].mean(axis=0)
single_n = predictions_n[-1]
ensemble_2_n = predictions_n[-2:].mean(axis=0)
ensemble_8_n = predictions_n[-8:].mean(axis=0)
ensemble_20_n = predictions_n[-20:].mean(axis=0)
# CIFAR10
single10 = predictions10[-1]
ensemble10_2 = predictions10[-2:].mean(axis=0)
ensemble10_8 = predictions10[-8:].mean(axis=0)
ensemble10_20 = predictions10[-20:].mean(axis=0)
single10_svhn = predictions10_svhn[-1]
ensemble10_2_svhn = predictions10_svhn[-2:].mean(axis=0)
ensemble10_8_svhn = predictions10_svhn[-8:].mean(axis=0)
ensemble10_20_svhn = predictions10_svhn[-20:].mean(axis=0)
single10_n = predictions10_n[-1]
ensemble10_2_n = predictions10_n[-2:].mean(axis=0)
ensemble10_8_n = predictions10_n[-8:].mean(axis=0)
ensemble10_20_n = predictions10_n[-20:].mean(axis=0)
single_mask = np.argmax(single, axis=1) == t100
ensemble_2_mask = np.argmax(ensemble_2, axis=1) == t100
ensemble_10 = predictions[-10:].mean(axis=0)  # defined here following the pattern above (was missing)
ensemble_10_mask = np.argmax(ensemble_10, axis=1) == t100
ensemble_20_mask = np.argmax(ensemble_20, axis=1) == t100
swag_mask = np.argmax(swag, axis=1) == t100
mask = single_mask & ensemble_20_mask & swag_mask
single10_mask = np.argmax(single10, axis=1) == t10
ensemble10_2_mask = np.argmax(ensemble10_2, axis=1) == t10
ensemble10_10 = predictions10[-10:].mean(axis=0)  # defined here following the pattern above (was missing)
ensemble10_10_mask = np.argmax(ensemble10_10, axis=1) == t10
ensemble10_20_mask = np.argmax(ensemble10_20, axis=1) == t10
swag10_mask = np.argmax(swag10, axis=1) == t10
mask10 = single10_mask & ensemble10_20_mask & swag10_mask  # keep the CIFAR10 mask separate from the CIFAR100 one
###Output
_____no_output_____
###Markdown
plot confidence distributions (for the maximum probability)
###Code
plt.figure(figsize=(7,3))
fig, (ax1,ax2, ax3) = plt.subplots(ncols=3, sharey=True) # frameon=False removes frames
# plt.subplot(1,3,1)
ax1.set_title('single model')
plot_prob_distributions([('CIFAR100', single), ('SVHN (OOD)', single_svhn), ('Gaussian (OOD)', single_n)], ax=ax1)
# plt.ylim((0,26))
# plt.subplot(1,3,2)
ax2.set_title('ensemble k=20')
plot_prob_distributions([('CIFAR100', ensemble_20), ('SVHN (OOD)', ensemble_20_svhn), ('Gaussian (OOD)', ensemble_20_n)], ax=ax2)
# plt.ylim((0,26))
# plt.subplot(1,3,3)
ax3.set_title('swag k=8')
plot_prob_distributions([('CIFAR100', swag),
('SVHN (OOD)', swag_svhn),
('Gaussian (OOD)', swag_n)], ax=ax3)
ax3.legend()
# plt.tight_layout()
plt.subplots_adjust(wspace=.0)
ax1.set_xticks([0,1])
ax1.set_xticks([0.5],True)
ax2.set_xticks([0,1])
ax2.set_xticks([0.5],True)
ax3.set_xticks([0,1])
ax3.set_xticks([0.5],True)
# plt.ylim((0,26))
# plt.yscale('log')
plt.figure(figsize=(10,5))
plt.subplot(1,3,1)
plt.title('single model')
plot_prob_distributions([('CIFAR10', single10), ('SVHN (OOD)', single10_svhn), ('Gaussian (OOD)', single10_n)])
plt.ylim((0,45))
plt.subplot(1,3,2)
plt.title('ensemble k=20')
plot_prob_distributions([('CIFAR10', ensemble10_20), ('SVHN (OOD)', ensemble10_20_svhn), ('Gaussian (OOD)', ensemble10_20_n)])
plt.ylim((0,45))
plt.subplot(1,3,3)
plt.title('swag k=8')
plot_prob_distributions([('CIFAR10', swag10),
('SVHN (OOD)', swag10_svhn),
('Gaussian (OOD)', swag10_n)])
plt.ylim((0,45))
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Entropy for in and out of domain
###Code
from scipy.stats import entropy
print('Entropy CIFAR100 VGG16')
print('\nSingle model')
print('CIFAR100:', entropy(single.T).mean())
# print('CIFAR100:', entropy(single[single_mask].T).mean())
# print('CIFAR100:', entropy(single[~single_mask].T).mean())
print('SVHN:', entropy(single_svhn.T).mean())
print('Gaussian:', entropy(single_n.T).mean())
print('\nEnsemble k=2')
print('CIFAR100:', entropy(ensemble_2.T).mean())
print('SVHN:', entropy(ensemble_2_svhn.T).mean())
print('Gaussian:', entropy(ensemble_2_n.T).mean())
print('\nEnsemble k=20')
print('CIFAR100:', entropy(ensemble_20.T).mean())
print('SVHN:', entropy(ensemble_20_svhn.T).mean())
print('Gaussian:', entropy(ensemble_20_n.T).mean())
print('\nSWAG k=8 (128)')
print('CIFAR100:', entropy(swag.T).mean())
# print('CIFAR100:', entropy(swag[swag_mask].T).mean())
# print('CIFAR100:', entropy(swag[~swag_mask].T).mean())
print('SVHN:', entropy(swag_svhn.T).mean())
print('Gaussian:', entropy(swag_n.T).mean())
print('\nEntropy CIFAR10 VGG16')
print('\nSingle model')
print('CIFAR10:', entropy(single10.T).mean())
print('SVHN:', entropy(single10_svhn.T).mean())
print('Gaussian:', entropy(single10_n.T).mean())
print('\nEnsemble k=2')
print('CIFAR10:', entropy(ensemble10_2.T).mean())
print('SVHN:', entropy(ensemble10_2_svhn.T).mean())
print('Gaussian:', entropy(ensemble10_2_n.T).mean())
print('\nEnsemble k=20')
print('CIFAR10:', entropy(ensemble10_20.T).mean())
print('SVHN:', entropy(ensemble10_20_svhn.T).mean())
print('Gaussian:', entropy(ensemble10_20_n.T).mean())
print('\nSWAG k=8 (128)')
print('CIFAR10:', entropy(swag10.T).mean())
print('SVHN:', entropy(swag10_svhn.T).mean())
print('Gaussian:', entropy(swag10_n.T).mean())
###Output
Entropy CIFAR100 VGG16
Single model
CIFAR100: 0.36590943
CIFAR100: 0.14601204
CIFAR100: 0.8030105
SVHN: 1.0064511
Gaussian: 0.70562893
Ensemble k=2
CIFAR100: 0.44262382
SVHN: 1.1445429
Gaussian: 0.606499
Ensemble k=20
CIFAR100: 0.6436284
SVHN: 1.4936976
Gaussian: 1.1913989
SWAG k=8 (128)
CIFAR100: 0.5235151
CIFAR100: 0.25477436
CIFAR100: 1.0993398
SVHN: 1.2464372
Gaussian: 0.71285266
Entropy CIFAR10 VGG16
Single model
CIFAR10: 0.070742406
SVHN: 0.4867465
Gaussian: 0.14269099
Ensemble k=2
CIFAR10: 0.090693675
SVHN: 0.50139767
Gaussian: 0.19741191
Ensemble k=20
CIFAR10: 0.127225
SVHN: 0.6670243
Gaussian: 0.23204009
SWAG k=8 (128)
CIFAR10: 0.10531652
SVHN: 0.6022755
Gaussian: 0.1957414
###Markdown
OOD detection AUCROC (with max confidence as in-domain score) TODO: only use correctly classified samples?
###Code
from sklearn.metrics import roc_auc_score, roc_curve
def get_ood_aucroc(in_domain, ood):
y = np.concatenate([in_domain, ood])
t = np.concatenate([np.ones_like(in_domain), np.zeros_like(ood)])
return roc_auc_score(t, y)
def get_ood_roc_curve(in_domain, ood):
y = np.concatenate([in_domain, ood])
t = np.concatenate([np.ones_like(in_domain), np.zeros_like(ood)])
return roc_curve(t, y)
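# A quick take on the TODO above: restrict the in-domain scores to correctly
# classified samples only (reusing `single`, `t100` and `single_svhn` from the
# cells above); this is one possible interpretation, not the only one.
correct = single.argmax(axis=1) == t100
print('CIFAR100 (correct only) vs SVHN:',
      get_ood_aucroc(single[correct].max(axis=1), single_svhn.max(axis=1)))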
ensemble10_8_svhn, ensemble10_20_svhn
# print('\nCIFAR100 vs SVHN')
print(f'Single: & {get_ood_aucroc(single.max(axis=1), single_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(single.max(axis=1), single_n.max(axis=1)):.6f}'
f' & {get_ood_aucroc(single10.max(axis=1), single10_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(single10.max(axis=1), single10_n.max(axis=1)):.6f} \\\\')
print(f'E k=8 {get_ood_aucroc(ensemble_8.max(axis=1), ensemble_8_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(ensemble_8.max(axis=1), ensemble_8_n.max(axis=1)):.6f}'
f' & {get_ood_aucroc(ensemble10_8.max(axis=1), ensemble10_8_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(ensemble10_8.max(axis=1), ensemble10_8_n.max(axis=1)):.6f} \\\\')
print(f'E k=20 {get_ood_aucroc(ensemble_20.max(axis=1), ensemble_20_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(ensemble_20.max(axis=1), ensemble_20_n.max(axis=1)):.6f}'
f' & {get_ood_aucroc(ensemble10_20.max(axis=1), ensemble10_20_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(ensemble10_20.max(axis=1), ensemble10_20_n.max(axis=1)):.6f} \\\\')
print(f'SWAG K=8 {get_ood_aucroc(swag.max(axis=1), swag_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(swag.max(axis=1), swag_n.max(axis=1)):.6f}'
f' & {get_ood_aucroc(swag10.max(axis=1), swag10_svhn.max(axis=1)):.6f}'
f' & {get_ood_aucroc(swag10.max(axis=1), swag10_n.max(axis=1)):.6f} \\\\')
# print('E k=2 : ', get_ood_aucroc(ensemble_2.max(axis=1), ensemble_2_svhn.max(axis=1)))
# # print('E k=10: ', get_ood_aucroc(ensemble_10.max(axis=1), ensemble_10_svhn.max(axis=1)))
# # print('\nCIFAR100 vs Gaussian')
# print('E k=2 : ', get_ood_aucroc(ensemble_2.max(axis=1), ensemble_2_n.max(axis=1)))
# # print('E k=10: ', get_ood_aucroc(ensemble_10.max(axis=1), ensemble_10_n.max(axis=1)))
# # print('\nCIFAR10 vs SVHN')
# print('E k=2 : ', get_ood_aucroc(ensemble10_2.max(axis=1), ensemble10_2_svhn.max(axis=1)))
# # print('E k=10: ', get_ood_aucroc(ensemble10_10.max(axis=1), ensemble10_10_svhn.max(axis=1)))
# # print('\nCIFAR10 vs Gaussian')
# print('E k=2 : ', get_ood_aucroc(ensemble10_2.max(axis=1), ensemble10_2_n.max(axis=1)))
# # print('E k=10: ', get_ood_aucroc(ensemble10_10.max(axis=1), ensemble10_10_n.max(axis=1)))
fpr, tpr, thresholds = get_ood_roc_curve(single.max(axis=1), single_svhn.max(axis=1))
plt.plot(fpr, tpr, label='single')
fpr, tpr, thresholds = get_ood_roc_curve(ensemble_20.max(axis=1), ensemble_20_svhn.max(axis=1))
plt.plot(fpr, tpr, label='ensemble')
plt.legend()
fpr, tpr, thresholds = get_ood_roc_curve(single.max(axis=1), single_n.max(axis=1))
plt.plot(fpr, tpr, label='single')
fpr, tpr, thresholds = get_ood_roc_curve(ensemble_20.max(axis=1), ensemble_20_n.max(axis=1))
plt.plot(fpr, tpr, label='ensemble')
fpr, tpr, thresholds = get_ood_roc_curve(swag.max(axis=1), swag_n.max(axis=1))
plt.plot(fpr, tpr, label='swag')
plt.legend()
###Output
_____no_output_____
###Markdown
Weight space visualisations
###Code
predictions10, targets10 = get_preds('1eptvyat') # CIFAR10 interpolate
predictions100, targets100 = get_preds('3ji5gbi5') # CIFAR100 interpolate
n_samples = 16
locations = np.arange(-1/(n_samples-2), 1 + 2/(n_samples-2), 1/(n_samples-2))[:n_samples]
accuracies10 = []
accuracies100 = []
for i in range(n_samples):
accuracies10.append(accuracy_score(targets10, predictions10[-i-1].argmax(axis=1)))
accuracies100.append(accuracy_score(targets100, predictions100[-i-1].argmax(axis=1)))
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
plt.title('CIFAR10')
plt.plot(locations, accuracies10, 'k.:')
plt.plot([0], [accuracies10[1]], 'rx')
plt.plot([1], [accuracies10[-1]], 'rx')
# plt.ylabel('accuracy')
# plt.ylabel('relative location between checkpoints')
plt.subplot(1,2,2)
plt.title('CIFAR100')
plt.plot(locations, accuracies100, 'k.:')
plt.plot([0], [accuracies100[1]], 'rx')
plt.plot([1], [accuracies100[-1]], 'rx')
# plt.savefig()
predictions10.shape
###Output
_____no_output_____
###Markdown
Problem statement

This is a sentiment analysis problem, i.e. a classification task. We were handed a **run.py** script that returns the model's predictions, the **sentiment_pipe.joblib** file, and a **comments_train.csv** dataset. We now need to analyze the model and the data to understand our predecessor's approach and identify possible actions.

Data analysis

The training data consists of restaurant reviews with 2 sentiment classes: **Positive / Negative**.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import joblib
df = pd.read_csv("../data/comments_train.csv")
df.head()
print(f"Il y a au total {df.sentiment.count()} commentaires.")
df.describe()
print(f"Il y a 2 types de sentiments : {df.sentiment.unique()}")
# class distribution of the label
plt.pie(df.sentiment.value_counts(normalize = True),
labels=df.sentiment.unique(),
autopct='%1.1f%%',
shadow=True, startangle=90)
plt.show()
###Output
_____no_output_____
###Markdown
The classes are slightly imbalanced. Possible actions (a stratified-split sketch follows below):
- stratify the train/test split
- add more negative comments
- SMOTE

There are duplicates among the comments: 1617 comments in total but only 1534 unique ones, i.e. 83 duplicated comments.
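A minimal sketch (assuming scikit-learn, with the column names of the dataframe above) of how a stratified split keeps the Positive/Negative proportions identical in both subsets:
###Code
from sklearn.model_selection import train_test_split

# stratify on the label so both splits keep the same class proportions
X_train, X_test, y_train, y_test = train_test_split(
    df["comment"], df["sentiment"],
    test_size=0.2, random_state=42, stratify=df["sentiment"])
###Output
_____no_output_____
###Markdown
SMOTE (e.g. with the imbalanced-learn package) would instead oversample the minority class after vectorization; either option should be validated with cross-validation.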
###Code
duplicated_comments = df.loc[df.comment.duplicated(keep='first')]
duplicated_comments
# class distribution of the label among the duplicated comments
plt.pie(duplicated_comments.sentiment.value_counts(normalize = True),
labels=duplicated_comments.sentiment.unique(),
autopct='%1.1f%%',
shadow=True, startangle=90)
plt.show()
df_clean = df.drop_duplicates()
###Output
_____no_output_____
###Markdown
>95.2% of the duplicated comments are negative, which reinforces the class imbalance.
###Code
df_clean.isna().sum() # no missing values
###Output
_____no_output_____
###Markdown
Observations on the data
- No missing values
- Duplicated comments, 95% of which are negative
- Slightly imbalanced sentiment classes (63 / 37)

Loading the model
###Code
model = joblib.load("../models/sentiment_pipe.joblib")
model
###Output
_____no_output_____
###Markdown
>To avoid compatibility issues, we will use version **0.23.2** of Scikit Learn.
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
We have a Scikit Learn pipeline with 2 steps:
- a TF-IDF vectorizer with its default arguments
- an SVM with C=1000 and gamma=0.001

This model does not use the many Tf-Idf parameters available for cleaning the text strings (tokenizer, stop_words, token_pattern).

> Action to take: add a data-cleaning step (see the sketch below)

Regarding performance, we should collect the model's metrics and compare them against a grid search to check whether this is the best SVM model. We could go further by testing other algorithms, such as ensemble methods.
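As a starting point for those actions, here is a minimal sketch (parameter grid and option values are illustrative, not the predecessor's settings) of a pipeline with explicit text-cleaning options plus a small grid search around the current SVM hyper-parameters:
###Code
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

candidate_pipe = Pipeline([
    # lowercase + accent stripping; a custom French stop-word list could also be passed
    ("tfidf", TfidfVectorizer(lowercase=True, strip_accents="unicode", stop_words=None)),
    ("svm", SVC()),
])
param_grid = {
    "tfidf__ngram_range": [(1, 1), (1, 2)],
    "svm__C": [1, 10, 100, 1000],
    "svm__gamma": [0.001, 0.01, "scale"],
}
grid = GridSearchCV(candidate_pipe, param_grid, cv=5, scoring="f1_weighted")
# grid.fit(X, y)  # X, y as returned by split_data(df_clean), defined later in this notebook
###Output
_____no_output_____
###Markdown
This keeps the original TF-IDF + SVM structure but makes the cleaning choices explicit and lets the grid search confirm (or not) the C=1000 / gamma=0.001 setting.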
###Code
# dimensions of the dataset fed to the SVM
model.steps[0][1].transform(df.comment.values)
###Output
_____no_output_____
###Markdown
Model performance without altering the data or the model

The **comments_train.csv** dataset was used to train the model.

Creating the performance functions
###Code
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, cross_val_score, cross_validate
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_curve, plot_roc_curve, classification_report
def split_data(data):
# split dataset between features and label
target = "sentiment"
X = data.drop(target, axis = 1)
X = [comment[0] for comment in X.values]
y = data[target]
y.loc[y == "Positive"] = 1
y.loc[y == "Negative"] = 0
y = y.values.tolist()
return X, y
# compute metrics without cross validation
def compute_metrics(model, data):
X, y = split_data(data)
predictions = model.predict(X)
# define metrics
    print(classification_report(y, predictions))  # classification_report expects (y_true, y_pred)
# plot AUC-ROC curve
fig, ax = plt.subplots()
mean_fpr = np.linspace(0, 1, 100)
viz = plot_roc_curve(model, X, y,
name='ROC curve',
alpha=0.3, lw=2, ax = ax)
interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr)
ax.plot([0, 1], [0, 1], linestyle='--', lw=1, color='r',
label='Chance', alpha=.8)
ax.legend(loc="lower right")
plt.show()
return None
# evolution : compute metrics with cross validation
def compute_metrics_cv(X, y):
scores = cross_validate(model, X, y, cv=5, scoring=('accuracy', 'precision', 'recall', 'f1_weighted', 'roc_auc'), return_train_score = True)
final_scores = {metric : round(np.mean(metric_scores), 3) for metric, metric_scores in scores.items()}
confidence_intervals = {metric : round(np.std(metric_scores), 2) for metric, metric_scores in scores.items()}
return (final_scores, confidence_intervals)
###Output
_____no_output_____
###Markdown
Predictions on the full dataset
###Code
all_data_metrics = compute_metrics(model, df)
all_data_metrics
X, y = split_data(df)
classification_report(y, model.predict(X), output_dict=True)
###Output
_____no_output_____
###Markdown
Precision indicates that the model detects 68% of the positive comments listed in the training dataset. Recall indicates that among the comments predicted as positive, 95% really are. The F1 score is the harmonic mean of precision and recall. The model is correct in 70% of cases, with an AUC (separability) of 0.76.

>The results are rather average.

Model evaluation with cross-validation

The **compute_metrics_cv** function runs a cross-validation for each metric in order to obtain results that are as robust as possible with respect to the train/test split.
###Code
X, y = split_data(df_clean)
scores, confidences_intervals = compute_metrics_cv(X, y)
scores
confidences_intervals
###Output
_____no_output_____
###Markdown
Interpreting the results

The fit_time is 0.517 seconds and the score_time 0.198 seconds, so both are fairly fast.

Precision indicates that the model detects 99.9% of the positive comments listed in the training set, but only 89.8% on the test set. Recall indicates that among the comments predicted as positive, all of them really are on the training set, while this score drops to 91% on the test set. **Both metrics are reasonably good, so the slight imbalance in comment polarity does not seem to hurt the model's predictive ability.**

The model is correct 99.9% of the time on the training data and 88% on the test data. The separability (AUC) is 1 on the training data: the model perfectly distinguishes positive from negative comments. It drops to 0.94 on the test data.

Given the perfect training-fold results of cross_validate, it seems the model was trained on only part of this dataset. If that hypothesis is correct, the model has also overfitted the training data, as reflected by the substantial gap between its performance on the training and test data.

> HYPOTHESIS: the model is overfitting

Corrective actions

Transform data before saving them into a Mongo DB
###Code
df_clean.to_csv("../data/comments.csv", index = False)
###Output
_____no_output_____
###Markdown
Create a dataset with sentiment values
###Code
df1 = pd.read_csv("data/comments_train.csv")
df1.head()
sentiments = pd.DataFrame([{"Sentiment": "Negative", "Value" : 0}, {"Sentiment" : "Positive", "Value" : 1}])
sentiments.to_csv("mongo/sentiments.csv", index = False)
###Output
_____no_output_____
###Markdown
Stock Analysis
---
Investment Ratios

Price-earnings ratio

Divide a company's share price by its annual earnings per share to calculate the P/E ratio. This ratio shows how much investors are willing to pay for $1 of a company's earnings. "It is probably the best way of comparing assets in different sectors and of finding true bargains," says Steven Jon Kaplan, CEO of True Contrarian Investments. Higher P/E ratios suggest a company's future earnings are expected to grow and may appear overvalued compared with companies with lower P/Es. That said, a high or low P/E doesn't necessarily indicate a good or bad investment; it offers a snapshot that begs additional inquiry. Given the uncertainty of future cash flows, it can be helpful for investors to rely on historical P/Es and use a mix of other ratios to evaluate and pick stocks.

Price-sales ratio

Robert Johnson, professor of finance at Creighton University in Omaha, Nebraska, touts the benefits of calculating a stock's price divided by sales per share, commonly referred to as the price-to-sales ratio. "The price-to-sales ratio is used by analysts who want to eliminate some of the distortions that can result in company earnings," Johnson says. It's a useful ratio to determine whether a company has earnings, cash flow or even positive book value since sales is always a positive number. A lower ratio suggests you've found a bargain, or a value stock. Industry consensus says lower P/S stocks have better value because investors are paying less for every dollar of a company's revenues. P/S ratio values can vary across sectors, so to best assess a company's P/S, compare it with industry peers.

Profit margin ratio

This is the amount of profit a company makes for every unit of sales. Investors calculate this ratio by dividing net profit over revenue. Profit margins are unique to an industry – with grocery chains known for low profit margins, while software companies can claim double-digit ratios. But this information doesn't necessarily mean that it's better to buy a software company than a grocery store stock. A high profit margin means a business can offer products priced higher than its costs, yielding profits through effective pricing strategies. A low profit margin may mean there are inefficient pricing strategies, where a business cannot produce enough profit to cover expenses. Any stock could be a winner with a growing revenue stream and steady profit margins.

Dividend payout ratio

Companies with rising dividend payments are favored by John Robinson, owner and founder of Nest Egg Guru in Hawaii. The dividend payout ratio is the percentage of net income paid to investors in the form of dividends. This ratio tells investors how much earnings are paid out in dividends versus how much is reinvested back into the company. The higher the percentage, the less money remains to reinvest back into growing the company. "Companies that pay out less than 60% of their earnings as dividends tend to have room for further dividend increases and the ability to withstand temporary earnings downturns without having to reduce or eliminate dividend payments," he says.

Price-free cash flow ratio

Tim Parker, a partner at Regency Wealth Management in New York City, reveres free cash flow because that is the amount of money left over after a company reinvests in the business to pay dividends, buy back shares or make acquisitions. To determine price-free cash flow, divide the company's share price by the operating free cash flow per share. The ratio measures how much cash a company earns for each share of stock. Investors want to search for companies with growing free cash flow that are selling at a bargain. Parker favors this ratio since free cash flow is harder to manipulate than earnings. A lower ratio indicates a company may be undervalued, while a higher ratio may signal overvaluation.

Debt-equity ratio

Valuation ratios are important, but so are quality measures, such as debt and liquidity metrics. Divide a company's total liabilities by its shareholder equity to compute the debt-equity ratio. This ratio explains a company's financial leverage, the comparison between borrowed funds and equity or ownership. Think of this ratio like a homeowner's mortgage value versus principal on the home. A greater proportion of debt constrains a company's flexibility to grow as more revenue is directed to pay debt costs. Like most ratios, compare the debt-equity ratio to those of other industry members, as some sectors, such as utilities, have higher typical debt ratios compared with others.

Quick and current ratios

Sameer Samana, global equity and technical strategist at Wells Fargo Investment Institute in St. Louis, recommends examining the quick ratio and current ratio. These liquidity ratios measure if a company has enough working capital to handle potential downturns and financial setbacks. The current ratio divides current assets by current liabilities to measure how much cash a company has on hand to pay short-term obligations within a year. The quick ratio sums cash, marketable securities and accounts receivables and divides this sum by current liabilities. Higher numbers for these ratios suggest greater liquidity, while lower ratios may suggest a company cannot meet short-term obligations.

EBITDA-to-sales ratio

This metric is the company's EBITDA – which is an abbreviation for earnings before interest, taxes, depreciation and amortization – divided by its net sales. This ratio is used to evaluate a company's overall profitability or earnings before expenses, by comparing revenue with earnings. "The stability of EBITDA typically determines investors' appetite for the amount of debt it believes the business should have," says Bryan Lee, chief investment officer at Blue Zone Wealth Advisors in Los Angeles. "A more levered company brings higher volatility for how the equity trades. This volatility can amplify returns on the upside but also to the downside," he says. EBITDA margin offers a transparent view into business operations by eliminating noncash or nonoperating expenses like interest costs, taxes and depreciation that may dim profits, giving a more precise view of a company's profitability.

Libraries
###Code
# !pip install pmdarima
# Libraries
# import yfinance as yf
from matplotlib import pyplot as plt
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
import matplotlib.dates as mdates
%matplotlib inline
import seaborn as sns; sns.set_theme(color_codes=True)
sns.set_theme(style="dark")
import pandas as pd
import numpy as np
import boto3
from io import StringIO
from sagemaker import get_execution_role
import warnings
warnings.filterwarnings("ignore")
import pickle
from scipy import stats
plt.style.use('seaborn-white')
import math
import ipywidgets as widgets
from datetime import datetime
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVR
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
import pmdarima as pm
from pmdarima.model_selection import train_test_split
# print(plt.style.available)
###Output
_____no_output_____
###Markdown
Get data

Retrieve ticker info data from s3
###Code
# Read ticker_info from s3
s3 = boto3.client("s3")
#Read the object stored in key 'myList001'
# object = s3.get_object(Bucket="euronext-stocks", Key="ticker_info")
object = s3.get_object(Bucket="euronext-stocks", Key="ticker_info_cac40")
# object = s3.get_object(Bucket="euronext-stocks", Key="ticker_info_test")
serializedObject = object['Body'].read()
ticker_info = pickle.loads(serializedObject) # Deserialize the retrieved object
# Create dataframe of ticker info
df_ticker_info = pd.DataFrame([(ticker,
ticker_info[ticker][0],
ticker_info[ticker][1],
ticker_info[ticker][2],
) for ticker in ticker_info], columns=["Ticker", "Name", "Sector", "Industry"])
df_ticker_info.set_index("Ticker", inplace=True)
df_ticker_info
df_ticker_info.loc["CHSR.PA"]
# df_ticker_info.loc[df_ticker_info["Sector"]=="Consumer Cyclical"]
# df_ticker_info.loc[df_ticker_info["Industry"]=="Railroads"]
df_ticker_info.loc[df_ticker_info["Industry"]=="Biotechnology"].index.tolist()
df_ticker_info["Sector"].unique()
df_ticker_info.loc[df_ticker_info["Sector"]=="Consumer Defensive"]["Industry"].unique()
df_ticker_info.loc[df_ticker_info["Sector"]=="Consumer Defensive"]
df_ticker_info.loc[df_ticker_info["Industry"]=="Drug Manufacturers—General"]
###Output
_____no_output_____
###Markdown
Retrieve all prices and volumes from s3
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_data = pd.read_csv(data_location, header=[0, 1],)
df_data.set_index(df_data["Unnamed: 0_level_0"]["Unnamed: 0_level_1"], drop=True, inplace=True)
df_data.index.name = "Date"
df_data.drop("Unnamed: 0_level_0", axis=1, inplace=True)
df_data.drop(df_data.index[0], inplace=True)
df_data.index = pd.to_datetime(df_data.index)
df_data.interpolate(method='linear', inplace=True) # use linear interpolation for missing values
df_data.head(3)
# Create various dataframes for different use cases
df_raw_prices = df_data["Adj Close"]
df_daily_returns = df_raw_prices.pct_change()
df_raw_prices.head(3)
df_daily_returns.head(3)
# Create df_prices dataframe with sector and industry columns
# Use ticker metadata in column headings of df_prices dataframe
df_prices = df_data["Adj Close"]
col_tuples = [(ticker_info[ticker][1], ticker_info[ticker][2], ticker) for ticker in ticker_info] # sector and industry
df_prices.columns = col_tuples
df_prices.columns = pd.MultiIndex.from_tuples(df_prices.columns,)
# Tidy up ordering and grouping on column headers
new_columns = df_prices.columns.sort_values(ascending=[True, True, True])
df_prices = df_prices[new_columns]
df_prices.head(3)
###Output
_____no_output_____
###Markdown
Retrieve ratio data from s3

Retrieve YF ratio data
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_ratio_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_ratio_data = pd.read_csv(data_location,)
df_ratio_data.set_index(df_ratio_data["Unnamed: 0"], drop=True, inplace=True)
df_ratio_data.index.name = "Metric"
df_ratio_data.drop("Unnamed: 0", axis=1, inplace=True)
# df_data.drop(df_data.index[0], inplace=True)
# df_data.index = pd.to_datetime(df_data.index)
# df_data.interpolate(method='linear', inplace=True) # use linear interpolation for missing values
df_ratio_data
###Output
_____no_output_____
###Markdown
Retrieve Pre-calculated Trailing PE
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_trailing_pe_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_trailing_pe_data = pd.read_csv(data_location,)
df_trailing_pe_data.set_index(df_trailing_pe_data["Date"], drop=True, inplace=True)
df_trailing_pe_data.index.name = "Date"
df_trailing_pe_data.index = pd.to_datetime(df_trailing_pe_data.index)
df_trailing_pe_data.drop("Date", axis=1, inplace=True)
df_trailing_pe_data.tail(3)
# Most recent
latest_pe = pd.Series(
data = [item[0] for item in df_trailing_pe_data.tail(1).T.values.tolist()],
index = list(df_trailing_pe_data.tail(1).T.index)
)
latest_pe
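# For reference, a minimal sketch of how such a trailing P/E series could be computed
# locally instead of loading the pre-calculated file (assumes a trailing-12-month EPS
# series per ticker, which is NOT part of this notebook's data; values below are made up):
# trailing_eps = pd.Series({"SU.PA": 5.0, "CHSR.PA": 2.0})  # illustrative EPS values only
# df_trailing_pe_alt = df_raw_prices[trailing_eps.index].div(trailing_eps, axis=1)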
###Output
_____no_output_____
###Markdown
Retrieve Pre-calculated Forward PE
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_forward_pe_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_forward_pe_data = pd.read_csv(data_location,)
df_forward_pe_data.set_index(df_forward_pe_data["Date"], drop=True, inplace=True)
df_forward_pe_data.index.name = "Date"
df_forward_pe_data.index = pd.to_datetime(df_forward_pe_data.index)
df_forward_pe_data.drop("Date", axis=1, inplace=True)
df_forward_pe_data.head(3)
###Output
_____no_output_____
###Markdown
Retrieve Pre-calculated Price to Sales
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_p2s_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_p2s_data = pd.read_csv(data_location,)
df_p2s_data.set_index(df_p2s_data["Date"], drop=True, inplace=True)
df_p2s_data.index.name = "Date"
df_p2s_data.index = pd.to_datetime(df_p2s_data.index)
df_p2s_data.drop("Date", axis=1, inplace=True)
df_p2s_data.head(3)
###Output
_____no_output_____
###Markdown
Retrieve Pre-calculated Profit Margins
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_pm_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_pm_data = pd.read_csv(data_location,)
df_pm_data.set_index(df_pm_data["Unnamed: 0"], drop=True, inplace=True)
df_pm_data.index.name = "Date"
df_pm_data.index = pd.to_datetime(df_pm_data.index)
df_pm_data.drop("Unnamed: 0", axis=1, inplace=True)
df_pm_data.head(3)
###Output
_____no_output_____
###Markdown
Retrieve Dividend Payout ratios
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "series_dpr_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_dpr_data = pd.read_csv(data_location,)
df_dpr_data.set_index(df_dpr_data["Unnamed: 0"], drop=True, inplace=True)
df_dpr_data.index.name = "Date"
df_dpr_data.drop("Unnamed: 0", axis=1, inplace=True)
df_dpr_data.columns = ["dpr"]
df_dpr_data.head(3)
###Output
_____no_output_____
###Markdown
Retrieve Pre-calculated Price to Free Cashflow ratios
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_pfcf_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_pfcf_data = pd.read_csv(data_location,)
df_pfcf_data.set_index(df_pfcf_data["Date"], drop=True, inplace=True)
df_pfcf_data.index.name = "Date"
df_pfcf_data.index = pd.to_datetime(df_pfcf_data.index)
df_pfcf_data.drop("Date", axis=1, inplace=True)
df_pfcf_data.head(3)
###Output
_____no_output_____
###Markdown
Retrieve Debt to Equity ratios
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_de_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_de_data = pd.read_csv(data_location,)
df_de_data.set_index(df_de_data["Unnamed: 0"], drop=True, inplace=True)
df_de_data.index.name = "Date"
df_de_data.index = pd.to_datetime(df_de_data.index)
df_de_data.drop("Unnamed: 0", axis=1, inplace=True)
df_de_data.tail()
###Output
_____no_output_____
###Markdown
Retrieve Current ratios
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_cr_data.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_cr_data = pd.read_csv(data_location,)
df_cr_data.set_index(df_cr_data["Unnamed: 0"], drop=True, inplace=True)
df_cr_data.index.name = "Date"
df_cr_data.index = pd.to_datetime(df_cr_data.index)
df_cr_data.drop("Unnamed: 0", axis=1, inplace=True)
df_cr_data.head(3)
###Output
_____no_output_____
###Markdown
Retrieve Covid data

New cases
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_log_new_cases.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_log_new_cases = pd.read_csv(data_location,)
df_log_new_cases.set_index(df_log_new_cases["date"], drop=True, inplace=True)
df_log_new_cases.index.name = "Date"
df_log_new_cases.index = pd.to_datetime(df_log_new_cases.index)
df_log_new_cases.drop("date", axis=1, inplace=True)
df_log_new_cases
###Output
_____no_output_____
###Markdown
New vaccinations
###Code
# Read data from s3
role = get_execution_role()
bucket="euronext-stocks"
data_key = "df_log_new_vaccinations.csv"
data_location = 's3://{}/{}'.format(bucket, data_key)
df_log_new_vaccinations = pd.read_csv(data_location,)
df_log_new_vaccinations.set_index(df_log_new_vaccinations["date"], drop=True, inplace=True)
df_log_new_vaccinations.index.name = "Date"
df_log_new_vaccinations.index = pd.to_datetime(df_log_new_vaccinations.index)
df_log_new_vaccinations.drop("date", axis=1, inplace=True)
df_log_new_vaccinations
###Output
_____no_output_____
###Markdown
Slicing
###Code
# Get all raw prices and volume data for a particular stock
df_data.xs("SU.PA", axis=1, level=1, drop_level=True)
df_data.index.day_name()
# Filter by date
# df_prices.loc['2021-01-01':'2021-01-31']
# Get sectors
sectors = df_prices.columns.get_level_values(0).unique()[2:].to_list()
sectors.sort()
print(sectors) # full dataset
# df_prices.columns.get_level_values(0).unique().to_list() # test dataset
# Get industries
industries = df_prices.columns.get_level_values(1).unique()[2:].to_list()
industries.sort() # full dataset
print(industries)
# df_prices.columns.get_level_values(1).unique().to_list() # test data
# Get latest values for all stocks in a particular sector/industry combination
sector = "Utilities"
industry = "Utilities—Renewable"
# df_prices[sector][industry].iloc[107].sort_values(ascending=False)
# Get data for all stocks in a particular sector
# df_prices.xs("Energy", axis=1, level=0, drop_level=False)
# Get data for all stocks in a particular industry
# df_prices.xs("Utilities—Renewable", axis=1, level=1, drop_level=False)
# df_prices.xs("Airports & Air Services", axis=1, level=1, drop_level=False)
# Get stock tickers in a particular sector
print([item[2] for item in df_prices.xs("Financial Services", axis=1, level=0, drop_level=False).columns.to_list()])
# Get stock tickers in a particular industry
[item[2] for item in df_prices.xs("Aerospace & Defense", axis=1, level=1, drop_level=False).columns.to_list()]
# Get list of dicts of ticker and ticker names for a sector. Can use this to generate tickers_to_plot below
sector = "Financial Services"
[{ticker:ticker_info[ticker][0]} for ticker in [item[2] for item in df_prices.xs(sector, axis=1, level=0, drop_level=False).columns.to_list()]]
# Get list of dicts of ticker and ticker names for an industry. Can use this to generate tickers_to_plot below
industry = "Software—Infrastructure"
[{ticker:ticker_info[ticker][0]} for ticker in [item[2] for item in df_prices.xs(industry, axis=1, level=1, drop_level=False).columns.to_list()]]
###Output
_____no_output_____
###Markdown
Plotting

Reusable components
###Code
# List of tickers in alphabetical order
ticker_list = list(df_raw_prices.columns)
ticker_list.sort()
###Output
_____no_output_____
###Markdown
Plot functions
###Code
# Heatmap displaying correlation of closing price between tickers
def plot_corr_daily_returns(tickers, start_date, end_date, ma_days):
# corr = df_daily_returns.loc[start_date:end_date][tickers].corr()
# corr = df_daily_returns.loc[start_date:end_date][tickers].rolling(window=ma_days, center=False).mean().corr()
corr = df_raw_prices.loc[start_date:end_date][tickers].rolling(window=ma_days, center=False).mean().corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
fig, ax = plt.subplots(figsize=(15,12))
sns.heatmap(corr, cmap="rocket_r", ax=ax, square=True, annot=True, fmt=".2g", annot_kws={"size":8}, mask=mask) # plot heatmap with seaborn
# icefire
plt.title(f"Correlation of {ma_days} day Moving Averages of Close Prices")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Single stock price with moving averages
###Code
# Plot a single stock Close price and several moving averages
ticker_list = list(df_raw_prices.columns)
ticker_list.sort()
@widgets.interact(
ticker=ticker_list,
start_date=widgets.Text(value="2020-01-01", description="start date", continuous_update=False),
end_date=widgets.Text(value="2020-12-31", description="end date", continuous_update=False),
)
def plot_single_ticker(ticker, start_date, end_date):
base = df_raw_prices.loc[start_date:end_date][ticker]
rolling_20 = np.round(base.rolling(window=20, center=False).mean(), 2)
rolling_50 = np.round(base.rolling(window=50, center=False).mean(), 2)
rolling_200 = np.round(base.rolling(window=200, center=False).mean(), 2)
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(base, label="Close", linewidth=4, alpha=.2)
ax.plot(rolling_20, label="20d M.A.", linestyle="dashed", linewidth=2)
ax.plot(rolling_50, label="50d M.A.", linestyle="dashed", linewidth=2)
ax.plot(rolling_200, label="200d M.A.", linestyle="dashed", linewidth=2)
ax.set_ylabel("Close Price")
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))
ax.set_title(f"{ticker}\n{start_date} to {end_date}")
plt.xticks(rotation=45)
plt.legend()
plt.grid()
###Output
_____no_output_____
###Markdown
Trailing P/E ratio
###Code
# Name Schneider Electric S.E.
# Sector Industrials
# Industry Specialty Industrial Machinery
df_ticker_info.loc[df_ticker_info["Industry"]=="Specialty Industrial Machinery"]
# Plot Trailing Price-Earnings Ratio for multiple stocks
# Plot multiple stocks or indices against each other and customize the legend labels
ticker_list = list(df_trailing_pe_data.columns)
ticker_list.sort()
@widgets.interact(
tickers_to_plot=widgets.SelectMultiple(options=ticker_list[:-1], value=["SU.PA"], description="Tickers", disabled=False),
# tickers_to_plot=widgets.SelectMultiple(options=ticker_list[:-1], value=["^FCHI"], description="Tickers", disabled=False),
start_date=widgets.Text(value="2021-01-01", description="start date", continuous_update=False),
end_date=widgets.Text(value="2021-12-31", description="end date", continuous_update=False),
)
def pe_ratio(tickers_to_plot, start_date, end_date):
"""
Function to plot a stock against an index (or other stock) and customize the legend labels.
Input is a list of key:pairs (python dicts) in the form ticker:ticker description
"""
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(111)
# Baseline the data at start date and convert to percentages
df_plot = df_trailing_pe_data.loc[start_date:end_date]
for ticker in tickers_to_plot:
df_plot[ticker].plot(ax=ax, legend=ticker)
lines, labels = ax.get_legend_handles_labels()
# mylabels = [list(ticker.values())[0] for ticker in tickers_to_plot]
# ax.legend(lines, mylabels, loc='best') # legend for first two lines only
ax.legend()
ax.set_xlabel(""); ax.set_ylabel("Trailing P/E Ratio")
ax.set_title(f"Trailing Price-Earnings Ratio comparison\n{start_date} to {end_date}")
plt.grid()
###Output
_____no_output_____
###Markdown
Foward P/E ratio
###Code
# Plot Forward Price-Earnings Ratio for multiple stocks
# Plot multiple stocks or indices against each other and customize the legend labels
ticker_list = list(df_forward_pe_data.columns)
ticker_list.sort()
@widgets.interact(
tickers_to_plot=widgets.SelectMultiple(options=ticker_list[:-1], value=["SU.PA"], description="Tickers", disabled=False),
# tickers_to_plot=widgets.SelectMultiple(options=ticker_list[:-1], value=["^FCHI"], description="Tickers", disabled=False),
start_date=widgets.Text(value="2021-01-01", description="start date", continuous_update=False),
end_date=widgets.Text(value="2021-12-31", description="end date", continuous_update=False),
)
def pe_ratio(tickers_to_plot, start_date, end_date):
"""
Function to plot a stock against an index (or other stock) and customize the legend labels.
Input is a list of key:pairs (python dicts) in the form ticker:ticker description
"""
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(111)
# Baseline the data at start date and convert to percentages
df_plot = df_forward_pe_data.loc[start_date:end_date]
for ticker in tickers_to_plot:
df_plot[ticker].plot(ax=ax, legend=ticker)
lines, labels = ax.get_legend_handles_labels()
# mylabels = [list(ticker.values())[0] for ticker in tickers_to_plot]
# ax.legend(lines, mylabels, loc='best') # legend for first two lines only
ax.legend()
ax.set_xlabel(""); ax.set_ylabel("Foward P/E Ratio")
ax.set_title(f"Forward Price-Earnings Ratio comparison\n{start_date} to {end_date}")
plt.grid()
###Output
_____no_output_____
###Markdown
Return comparisons
###Code
# Plot multiple stocks or indices against each other and customize the legend labels
ticker_list = list(df_raw_prices.columns)
ticker_list.sort()
@widgets.interact(
tickers_to_plot=widgets.SelectMultiple(options=ticker_list[:-1], description="Tickers", disabled=False),
# tickers_to_plot=widgets.SelectMultiple(options=ticker_list[:-1], value=["^FCHI"], description="Tickers", disabled=False),
start_date=widgets.Text(value="2020-01-01", description="start date", continuous_update=False),
end_date=widgets.Text(value="2020-12-31", description="end date", continuous_update=False),
)
def plot_tickers(tickers_to_plot, start_date, end_date):
"""
Function to plot a stock against an index (or other stock) and customize the legend labels.
Input is a list of key:pairs (python dicts) in the form ticker:ticker description
"""
fig = plt.figure(figsize=(15,8))
ax = fig.add_subplot(111)
# Baseline the data at start date and convert to percentages
df_plot = df_raw_prices.loc[start_date:end_date]
df_plot = (df_plot/df_plot.iloc[0] - 1) * 100
for ticker in tickers_to_plot:
df_plot[ticker].plot(ax=ax, legend=ticker)
df_plot["^FCHI"].plot(ax=ax, legend="^FCHI", linewidth=4, alpha=.2, color="gray")
lines, labels = ax.get_legend_handles_labels()
# mylabels = [list(ticker.values())[0] for ticker in tickers_to_plot]
# ax.legend(lines, mylabels, loc='best') # legend for first two lines only
ax.legend()
ax.set_xlabel(""); ax.set_ylabel("Return %")
ax.set_title(f"% Returns comparison\n{start_date} to {end_date}")
plt.grid()
###Output
_____no_output_____
###Markdown
Alpha & Beta
###Code
# Regression plot
@widgets.interact(
ticker1=ticker_list,
ticker2=ticker_list,
start_date=widgets.Text(value="2020-01-01", description="start date", continuous_update=False),
end_date=widgets.Text(value="2020-12-31", description="end date", continuous_update=False),
)
def regression_plot(ticker1, ticker2, start_date, end_date):
# Transform data for alpha, beta and plotting
data = pd.DataFrame(
{
ticker1:[item[0] for item in df_prices.loc[start_date:end_date].xs(ticker1, axis=1, level=2, drop_level=False).values.tolist()],
ticker2:[item[0] for item in df_prices.loc[start_date:end_date].xs(ticker2, axis=1, level=2, drop_level=False).values.tolist()]
}
)
data_returns = (data/data.iloc[0] - 1) * 100
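    # linregress(x, y) returns (slope, intercept, ...): regressing ticker1's
    # rebased returns on ticker2's gives the slope as beta and the intercept
    # as alpha used in the plot title below.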
(beta, alpha) = stats.linregress(data_returns[ticker2],
data_returns[ticker1])[0:2]
# fig, ax = plt.subplots()
fig, ax = plt.subplots(figsize=(12,8))
ax = sns.regplot(x=ticker2, y=ticker1, data=data_returns)
ax.set_title(f"Regression plot for\n{ticker2} vs {ticker1}\n{start_date} to {end_date}\nBeta={round(beta, 4)}\nAlpha={round(alpha,5)}")
ax.xaxis.set_major_locator(MultipleLocator(10))
ax.yaxis.set_major_locator(MultipleLocator(10))
plt.gca().set_aspect("equal")
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Volatility
###Code
# Create dataframe of annualized volatility of log returns, rolling window = 20 days
# https://stackoverflow.com/questions/38828622/calculating-the-stock-price-volatility-from-a-3-columns-csv
df_vols = np.log(1 + df_raw_prices.pct_change()).rolling(window=20).std() * (255**0.5)
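# For reference, the same calculation written out for a single ticker
# (a hypothetical helper, not used elsewhere in this notebook):
def realized_vol(prices, window=20, trading_days=255):
    """Annualized rolling volatility of log returns for one price series."""
    log_returns = np.log(1 + prices.pct_change())
    return log_returns.rolling(window=window).std() * (trading_days ** 0.5)
# e.g. realized_vol(df_raw_prices["SU.PA"]) should match df_vols["SU.PA"]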
# Plot rolling vol comparison
# tickers_to_plot = ["SU.PA", "EUCAR.PA", "^FCHI"]
# tickers_to_plot = ['FAUV.PA', 'FORE.PA', 'GET.PA', 'GIRO.PA']
# tickers_to_plot = df_ticker_info.loc[df_ticker_info["Industry"]=="Railroads"].index.tolist()
# tickers_to_plot = df_ticker_info.loc[df_ticker_info["Industry"]=="Biotechnology"].index.tolist()
tickers_to_plot = df_ticker_info.loc[df_ticker_info["Industry"]=="Drug Manufacturers—General"].index.tolist()
df_vols[tickers_to_plot].plot(figsize=(15,5),
# color=["green", "blue", "gray"],
)
plt.title("Realized Volatility\n(Annualized, rolling window=20)")
plt.grid()
# Plot most volatile stocks in past 20 days
today = "2021-05-21"
top_n_vols = 20
df_vols.loc[today].sort_values(ascending=False)[:top_n_vols].sort_values().plot(kind="barh", figsize=(7,6))
plt.title(f"Top {top_n_vols} most volatile stocks in past 20 days\nas at {today}")
plt.xlabel("Annualized vol of log returns in %")
plt.grid()
# Plot most volatile stocks in past 20 days
industry = "Biotechnology"
today = "2021-05-21"
top_n_vols = 100
df_vols.loc[today][df_ticker_info.loc[df_ticker_info["Industry"]==industry].index.tolist()].sort_values(ascending=False)[:top_n_vols].sort_values().plot(kind="barh", figsize=(7,10))
plt.title(f"Top {top_n_vols} most volatile stocks in {industry} industry over past 20 days\nas at {today}")
plt.xlabel("Annualized vol of log returns in %")
plt.grid()
###Output
_____no_output_____
###Markdown
Correlation of moving average returns
###Code
start_date = '2021-01-01'
end_date = '2021-12-31'
ma_days = 20
# Make list of tickers to plot
tickers = ['EC.PA', 'LFDE.PA', 'MAU.PA', 'CGG.PA', 'CLB.AS', 'FTI.PA', 'FUR.AS', 'GTT.PA', 'SBMO.AS', 'SLB.PA', 'TE.PA', 'VPK.AS', 'RDSA.AS', 'RDSB.AS', 'EURN.BR', 'EXM.BR', 'FLUX.BR', 'DPAM.PA', 'ES.PA']
# tickers = df_raw_prices.columns # DON'T DO THIS FOR ~700 EURONEXT STOCKS!!!
# Plot correlation of closing price between tickers
plot_corr_daily_returns(tickers, start_date, end_date, ma_days)
###Output
_____no_output_____
###Markdown
Compare weekly patterns
###Code
stock_to_compare = "^FCHI"
start_date = "2021-01-01"
end_date = "2021-12-31"
def plot_comparison_of_weeks(stock_to_compare):
df_compare_weeks = pd.DataFrame(df_raw_prices.loc[start_date:end_date][stock_to_compare])
# Get day of week for each date in the index
df_compare_weeks["weekday"] = df_compare_weeks.index.to_series().dt.day_name()
# Get week number for each date in the index
df_compare_weeks["weeknum"] = df_compare_weeks.index.to_series().dt.week
df_compare_weeks = df_compare_weeks.pivot_table(index="weeknum", columns="weekday", values=stock_to_compare).transpose()
df_compare_weeks = df_compare_weeks.reindex(["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"])
df_compare_weeks = (df_compare_weeks/df_compare_weeks.iloc[0] - 1) * 100
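    # Colour-code each week: white when the week ended above where it started
    # (Friday's rebased level above Monday's), red when the week closed lower.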
colors = ["white" if df_compare_weeks[col].iloc[0] < df_compare_weeks[col].iloc[4] else "red" for col in df_compare_weeks.columns]
fig, ax = plt.subplots(figsize=(15,7))
df_compare_weeks.plot(ax=ax, color=colors, alpha=.6, linewidth=4)
ax.get_legend().remove()
ax.set_facecolor("black")
plt.xlabel(None)
return colors
colors = plot_comparison_of_weeks(stock_to_compare)
pd.Series(colors).value_counts()
###Output
_____no_output_____
###Markdown
Trading Strategies

ML-based Trading Strategy

Target to be predicted

1-week forward return classification. One of:
- Loss (> n% loss)
- Even (> Loss < Profit)
- Profit (> n% profit)

e.g. Price on t = 100, price on t + 1 week = 103, (103/100)-1 = 3%, if n% = 2% then class="Profit"

Features to be implemented

- Alpha (performance relative to index), last 10 days, last 5 days
- Beta (volatility/risk relative to index), last 10 days, last 5 days
- Trailing PE --> DONE
- Forward PE --> DONE
- Current realized vol, last 10 days, last 5 days
- Price to Sales --> DONE
- Profit margin --> DONE
- Dividend payout
- Price to Free Cashflow --> DONE
- Debt to Equity --> DONE
- Current ratio --> DONE
- Moving Average crosses, 5d crossing 10d
- Correlation to index, last 10 days, last 5 days
- Index performance, last 10 days, last 5 days --> DONE
- Stock (self) performance, last 10 days, last 5 days --> DONE
- Index performance, last 10 days, last 5 days as 1 (gain) or -1 (loss)
- Stock (self) performance, last 10 days, last 5 days as 1 (gain) or -1 (loss)
- ARIMA forecast: https://pypi.org/project/pmdarima/ and https://github.com/cdignam8304/time-series/tree/master/tutorial

https://github.com/owid/covid-19-data/tree/master/public/data
NB: Data generated using "covid_data.ipynb"
- Covid new cases (smoothed + log transform) --> DONE
- Covid new vaccinations (smoothed + log transform) --> DONE

Parameters
###Code
# days_forward = 5
days_forward = 10
target_return = .05 # 5%
# target_return = .00 # 0%
test_ratio = .2
###Output
_____no_output_____
###Markdown
Create labeled dataset
###Code
# df_raw_prices
# Create dataframe of n days forward returns
df_nday_returns = -df_raw_prices.diff(-days_forward)/df_raw_prices
df_nday_returns.head(3)
# Create dataframe of n days forward log returns
df_nday_log_returns = -np.log(1+df_raw_prices.pct_change(-days_forward))
df_nday_log_returns.head(3)
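# Note on the two forward-return definitions above:
#   -diff(-n)/price           == (P_t+n - P_t) / P_t   (simple n-day forward return)
#   -log(1 + pct_change(-n))  == log(P_t+n / P_t)      (log n-day forward return)
# Both look n = days_forward rows ahead, so the last n rows of each frame are NaN.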
# Stack the actual returns so can look at correlations
df_nday_returns_stacked = df_nday_returns.stack().reset_index()
df_nday_returns_stacked.columns = ["Date", "Ticker", "n_day_return"]
df_nday_returns_stacked.sort_values(["Ticker", "Date", ], inplace=True)
# df_nday_returns_stacked
# Stack the actual returns so can look at correlations
df_nday_log_returns_stacked = df_nday_log_returns.stack().reset_index()
df_nday_log_returns_stacked.columns = ["Date", "Ticker", "n_day_log_return"]
df_nday_log_returns_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_nday_log_returns_stacked
# Convert n days forward returns to target labels
def convert_2_target(x):
try:
# if x >= target_return:
if x > target_return:
return 1
# elif x <= -target_return:
# return -1
else:
return 0
except:
return 0
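# Quick sanity check of the labeling rule (illustrative numbers): a move from
# 100 to 106 over the horizon is a log return of log(1.06) ~= 0.058, which
# exceeds target_return = 0.05, so it is labeled 1; a flat move stays 0.
assert convert_2_target(np.log(1.06)) == 1
assert convert_2_target(0.0) == 0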
# Build the wide target frame (Date index, Ticker columns) by applying the
# labeling rule to the n-day log returns (NaN forward returns fall through to label 0)
df_targets = df_nday_log_returns.applymap(convert_2_target)
df_nday_log_returns_stacked["n_day_log_return"].apply(lambda x: convert_2_target(x)).value_counts()
df_targets
df_targets.shape
# df_targets.stack().unstack() # this is how you get back to original frame
df_targets_stacked = df_targets.stack().reset_index()
df_targets_stacked.columns = ["Date", "Ticker", "Target"]
df_targets_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_targets_stacked
# Add column with previous yr end date so can be used to link to features from financial statements
def get_prev_yr_end(x):
yr = x.year-1
datestring = f"{yr}-12-31"
return pd.to_datetime(datestring)
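# e.g. get_prev_yr_end(pd.Timestamp("2021-05-21")) -> Timestamp("2020-12-31"),
# which lets daily rows join onto the annual financial-statement features below.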
df_targets_stacked["Prev_Yr_End"] = df_targets_stacked["Date"].apply(lambda x: get_prev_yr_end(x))
# Put columns in better order so Target is last column
cols = ['Date', 'Prev_Yr_End', 'Ticker', 'Target']
df_targets_stacked = df_targets_stacked[cols]
df_targets_stacked
###Output
_____no_output_____
###Markdown
Features Trailing PE
###Code
df_trailing_pe_stacked = df_trailing_pe_data.stack().reset_index()
df_trailing_pe_stacked.columns = ["Date", "Ticker", "Trailing_PE"]
df_trailing_pe_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_trailing_pe_stacked
###Output
_____no_output_____
###Markdown
Forward PE
###Code
df_forward_pe_stacked = df_forward_pe_data.stack().reset_index()
df_forward_pe_stacked.columns = ["Date", "Ticker", "Forward_PE"]
df_forward_pe_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_forward_pe_stacked
###Output
_____no_output_____
###Markdown
Price 2 Sales
###Code
df_p2s_stacked = df_p2s_data.stack().reset_index()
df_p2s_stacked.columns = ["Date", "Ticker", "P2S"]
df_p2s_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_p2s_stacked
###Output
_____no_output_____
###Markdown
Profit Margins
###Code
df_pm_stacked = df_pm_data.stack().reset_index()
df_pm_stacked.columns = ["Date", "Ticker", "ProfitMargin"]
df_pm_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_pm_stacked
###Output
_____no_output_____
###Markdown
Dividend Payout ratios
###Code
# COME BACK TO THIS LATER!
###Output
_____no_output_____
###Markdown
Price to Free Cashflow
###Code
df_pfcf_stacked = df_pfcf_data.stack().reset_index()
df_pfcf_stacked.columns = ["Date", "Ticker", "P2FCF"]
df_pfcf_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_pfcf_stacked
###Output
_____no_output_____
###Markdown
Debt Equity ratios
###Code
df_de_stacked = df_de_data.stack().reset_index()
df_de_stacked.columns = ["Date", "Ticker", "DebtEquity"]
df_de_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_de_stacked
###Output
_____no_output_____
###Markdown
Current ratios
###Code
df_cr_stacked = df_cr_data.stack().reset_index()
df_cr_stacked.columns = ["Date", "Ticker", "CurrRatio"]
df_cr_stacked.sort_values(["Ticker", "Date", ], inplace=True)
df_cr_stacked
###Output
_____no_output_____
###Markdown
Covid new cases
###Code
df_covid_cases_stacked = df_log_new_cases.reset_index()
df_covid_cases_stacked.columns = ["Date", "log_new_cases"]
df_covid_cases_stacked
###Output
_____no_output_____
###Markdown
Covid new vaccinations
###Code
df_covid_vaccinations_stacked = df_log_new_vaccinations.reset_index()
df_covid_vaccinations_stacked.columns = ["Date", "log_new_vaccinations"]
df_covid_vaccinations_stacked
###Output
_____no_output_____
###Markdown
Index performance
###Code
# Use this to create binary features
def gain_or_loss(x):
if x > 0:
return 1
return -1
periods = 5
index_chg_5 = (df_raw_prices["^N100"].diff(periods=periods)/df_raw_prices["^N100"].shift(periods=periods))
index_chg_5 = index_chg_5.reset_index()
index_chg_5.columns = ["Date", "euronext_5d_chg"]
index_chg_5.fillna(value=0, inplace=True)
periods = 10
index_chg_10 = (df_raw_prices["^N100"].diff(periods=periods)/df_raw_prices["^N100"].shift(periods=periods))
index_chg_10 = index_chg_10.reset_index()
index_chg_10.columns = ["Date", "euronext_10d_chg"]
index_chg_10.fillna(value=0, inplace=True)
index_chg_5_binary = index_chg_5[["Date"]].copy()  # copy to avoid SettingWithCopyWarning
index_chg_5_binary["euronext_5d_chg_binary"] = index_chg_5["euronext_5d_chg"].apply(lambda x: gain_or_loss(x))
index_chg_10_binary = index_chg_10[["Date"]].copy()
index_chg_10_binary["euronext_10d_chg_binary"] = index_chg_10["euronext_10d_chg"].apply(lambda x: gain_or_loss(x))
###Output
_____no_output_____
###Markdown
Stock (self) performance
###Code
periods = 5
stock_chg_5 = (df_raw_prices.diff(periods=periods)/df_raw_prices.shift(periods=periods)).stack().reset_index()
stock_chg_5.columns = ["Date", "Ticker", "stock_chg_5"]
stock_chg_5.sort_values(["Ticker", "Date", ], inplace=True)
periods = 10
stock_chg_10 = (df_raw_prices.diff(periods=periods)/df_raw_prices.shift(periods=periods)).stack().reset_index()
stock_chg_10.columns = ["Date", "Ticker", "stock_chg_10"]
stock_chg_10.sort_values(["Ticker", "Date", ], inplace=True)
stock_chg_5.head()
stock_chg_10.head()
stock_chg_5_binary = stock_chg_5[["Date", "Ticker"]].copy()  # copy to avoid SettingWithCopyWarning
stock_chg_5_binary["stock_chg_5_binary"] = stock_chg_5["stock_chg_5"].apply(lambda x: gain_or_loss(x))
stock_chg_10_binary = stock_chg_10[["Date", "Ticker"]].copy()
stock_chg_10_binary["stock_chg_10_binary"] = stock_chg_10["stock_chg_10"].apply(lambda x: gain_or_loss(x))
###Output
_____no_output_____
###Markdown
Regression models: SVR and RandomForestRegressor
###Code
df_raw_prices["MC.PA"].plot()
to_predict_svr = df_raw_prices.loc["2020-04-01":"2021-12-31"][["MC.PA"]].rolling(window=10, center=False).mean()
to_predict_svr.reset_index(inplace=True)
to_predict_svr["MC.PA"].plot()
# define function for create N lags
def create_lags(df, N):
for i in range(N):
df['Lag' + str(i+1)] = df["MC.PA"].shift(i+1)
return df
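# Each call adds columns Lag1..LagN, i.e. the smoothed close shifted back by
# 1..N rows, so every row's features are the N previous observations.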
# create 100 lags
num_lags = 100
to_predict_svr = create_lags(to_predict_svr, num_lags)
to_predict_svr.dropna(inplace=True)
to_predict_svr.head()
to_predict_svr["MC.PA"].plot()
plt.grid()
y = to_predict_svr["MC.PA"].values
X = to_predict_svr.iloc[:, 2:].values
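# (column 0 of to_predict_svr is the Date from reset_index and column 1 is the
#  current smoothed close, so X above keeps only the Lag1..Lag100 columns)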
y.shape
train_idx = int(len(to_predict_svr)*.9)
train_idx
X_train, y_train, X_test, y_test = X[:train_idx], y[:train_idx], X[train_idx:], y[train_idx:]
# fit and predict
# clf = SVR(
# C=1,
# )
clf = RandomForestRegressor(n_estimators=500)
clf.fit(X_train, y_train)
preds_test = clf.predict(X_test)
plt.plot(y_train)
plt.plot(y_test)
plt.plot(preds_test)
###Output
_____no_output_____
###Markdown
ARIMA
###Code
# ?pm.auto_arima
model = pm.auto_arima(
y_train,
seasonal=False,
)
# make your forecasts
forecasts = model.predict(y_test.shape[0]) # predict N steps into the future
# Visualize the forecasts (blue=train, green=forecasts)
x = np.arange(y.shape[0])
plt.plot(x[:train_idx], y_train, c='blue')
plt.plot(x[train_idx:], forecasts, c='green')
plt.show()
sns.jointplot(
x=y_test,
y=forecasts,
kind="resid",
# color="#4CB391",
)
###Output
_____no_output_____
###Markdown
Combine features with target
###Code
# Merge each feature into single dataframe, along with target labels...
# NB: features from financial statements and static ratios require special treatment due to year end dates
# Trailing PE
model_data = pd.merge(
left=df_trailing_pe_stacked,
right=df_targets_stacked,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# Forward PE
model_data = pd.merge(
left= df_forward_pe_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# Price 2 Sales
model_data = pd.merge(
left= df_p2s_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# Profit Margins
model_data = pd.merge(
left= df_pm_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Prev_Yr_End", "Ticker"],
sort=False,
# validate="one_to_one",
)
model_data.drop(["Date_x"], axis=1, inplace=True) # remove superfluous column
model_data.rename(columns={'Date_y':'Date'}, inplace=True) # fix column name
# Dividend Payout ratios - COME BACK TO THIS LATER!
# Price 2 Free Cashflow
model_data = pd.merge(
left= df_pfcf_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# Debt Equity ratios
model_data = pd.merge(
left= df_de_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Prev_Yr_End", "Ticker"],
sort=False,
# validate="one_to_one",
)
model_data.drop(["Date_x"], axis=1, inplace=True) # remove superfluous column
model_data.rename(columns={'Date_y':'Date'}, inplace=True) # fix column name
# Current ratios
model_data = pd.merge(
left= df_cr_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Prev_Yr_End", "Ticker"],
sort=False,
# validate="one_to_one",
)
model_data.drop(["Date_x"], axis=1, inplace=True) # remove superfluous column
model_data.rename(columns={'Date_y':'Date'}, inplace=True) # fix column name
# Covid new cases
model_data = pd.merge(
left= df_covid_cases_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date"],
right_on=["Date"],
sort=False,
# validate="one_to_one",
)
# Covid new vaccinations
model_data = pd.merge(
left= df_covid_vaccinations_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date"],
right_on=["Date"],
sort=False,
# validate="one_to_one",
)
# 5-day change in euronext index
model_data = pd.merge(
left= index_chg_5,
right = model_data,
# how="inner",
how="right",
left_on=["Date"],
right_on=["Date"],
sort=False,
# validate="one_to_one",
)
# 10-day change in euronext index
model_data = pd.merge(
left= index_chg_10,
right = model_data,
# how="inner",
how="right",
left_on=["Date"],
right_on=["Date"],
sort=False,
# validate="one_to_one",
)
# 5-day change in index as binary
model_data = pd.merge(
left= index_chg_5_binary,
right = model_data,
# how="inner",
how="right",
left_on=["Date"],
right_on=["Date"],
sort=False,
# validate="one_to_one",
)
# 10-day change in index as binary
model_data = pd.merge(
left= index_chg_10_binary,
right = model_data,
# how="inner",
how="right",
left_on=["Date"],
right_on=["Date"],
sort=False,
# validate="one_to_one",
)
# 5-day change in stock price
model_data = pd.merge(
left= stock_chg_5,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# 10-day change in stock price
model_data = pd.merge(
left= stock_chg_10,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# 5-day change in stock price as binary
model_data = pd.merge(
left= stock_chg_5_binary,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# 10-day change in stock price as binary
model_data = pd.merge(
left= stock_chg_10_binary,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
# df_nday_returns (this is value used to create target labels)
model_data = pd.merge(
left= df_nday_returns_stacked,
right = model_data,
# how="inner",
how="right",
left_on=["Date", "Ticker"],
right_on=["Date", "Ticker"],
sort=False,
validate="one_to_one",
)
model_data.shape
# (362884, 11) before adding Covid data
model_data.sample(10)
model_data.describe(include="all")
###Output
_____no_output_____
###Markdown
Target class counts
###Code
# Plot target value counts
model_data["Target"].value_counts().sort_values(ascending=True).plot.barh(figsize=(8, 2))
plt.grid()
plt.title("Target counts in full dataset")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Correlation of features with target
###Code
model_data.columns
# Plot correlation matrix
# Calc correlations
corr = model_data[[
'stock_chg_10_binary',
'stock_chg_5_binary',
'euronext_10d_chg_binary',
'euronext_5d_chg_binary',
'n_day_return',
'Target',
]].corr()
# Generate a mask for the upper triangle
mask = np.triu(np.ones_like(corr, dtype=bool))
# Set up the matplotlib figure
fig, ax = plt.subplots(figsize=(11, 9))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(230, 20, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
plt.title("Pearson Correlation between Features")
plt.tight_layout()
# Is past performance and indicator of the future?
# kind : { "scatter" | "kde" | "hist" | "hex" | "reg" | "resid" }
ticker = "MC.PA"
start_date = "2019-01-01"
end_date = "2021-12-31"
plot_data = model_data.loc[(model_data["Date"] > start_date) & (model_data["Date"] < end_date)]
historical_data = [
"stock_chg_5",
"stock_chg_10",
"euronext_5d_chg",
"euronext_10d_chg",
"Trailing_PE",
"Forward_PE",
"log_new_cases",
]
for hd in historical_data:
n = historical_data.index(hd)
y = plot_data.loc[model_data["Ticker"]==ticker]["n_day_return"]
x = plot_data.loc[model_data["Ticker"]==ticker][hd]
# ax.scatter(x, y, alpha=.5)
sns.jointplot(
x=x,
y=y,
kind="hex",
# color="#4CB391",
)
# plt.ylim(-.3, .3); plt.xlim(-.3, .3)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Data quality issues
###Code
# model_data.loc[model_data["Forward_PE"]==np.inf] # These need to be fixed!
# model_data.loc[model_data["P2S"]==np.inf] # These need to be fixed!
# model_data["P2FCF"].hist(bins=100, log=True) # Need to deal with outliers?
# model_data["CurrRatio"].hist(bins=100, log=True) # Need to deal with outliers?
###Output
_____no_output_____
###Markdown
Create modeling dataset
###Code
columns_to_drop = [
"Ticker",
"Date",
"Prev_Yr_End",
"Target",
]
df_featureset = model_data.drop(columns_to_drop, axis=1)
df_featureset.shape
# Replace np.nan and np.inf with 0s (not sure this is best treatment!)
df_featureset.replace(to_replace=np.nan, value=0, inplace=True)
df_featureset.replace(to_replace=np.inf, value=0, inplace=True)
df_featureset.replace(to_replace=-np.inf, value=0, inplace=True)
df_featureset.describe()
df_featureset.sample(5)
###Output
_____no_output_____
###Markdown
Train / Test split
###Code
traintest_cutoff = math.floor(len(df_featureset.index)*(1-test_ratio))
traintest_cutoff
df_train_features = df_featureset.iloc[:traintest_cutoff]
features_train = df_train_features.values
df_test_features = df_featureset.iloc[traintest_cutoff:]
features_test = df_test_features.values
target_train = model_data["Target"].iloc[:traintest_cutoff]
target_test = model_data["Target"].iloc[traintest_cutoff:]
target_train.value_counts(normalize=True)
target_test.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Build classifier
###Code
# ?RandomForestClassifier
clf = RandomForestClassifier(
n_estimators=100,
# class_weight="balanced", # slightly worse performance when balanced
)
# clf = SVC()
# clf = LogisticRegression(class_weight="balanced")
clf.fit(features_train, target_train)
clf.score(features_train, target_train)
###Output
_____no_output_____
###Markdown
Evaluate against test set
###Code
predictions_test = clf.predict(features_test)
pd.Series(predictions_test).value_counts()
print(confusion_matrix(target_test, predictions_test, normalize="all"))
print(classification_report(target_test, predictions_test))
###Output
_____no_output_____
###Markdown
Previous results

Without Covid features:

              precision    recall  f1-score   support
          -1       0.14      0.06      0.08      6925
           0       0.80      0.84      0.82     56760
           1       0.16      0.18      0.17      8892
    accuracy                           0.69     72577
   macro avg       0.37      0.36      0.36     72577
weighted avg       0.66      0.69      0.67     72577

With Covid features:

              precision    recall  f1-score   support
          -1       0.16      0.12      0.14      5283
           0       0.83      0.91      0.87     58911
           1       0.19      0.11      0.14      8383
    accuracy                           0.76     72577
   macro avg       0.40      0.38      0.38     72577
weighted avg       0.71      0.76      0.73     72577

Moving Average Strategy
###Code
# Create trade recommendations
# Create moving averages
rolling_20 = np.round(df_raw_prices.rolling(window=20, center=False).mean(), 2)
rolling_50 = np.round(df_raw_prices.rolling(window=50, center=False).mean(), 2)
# Creates df where if M.A.20 < M.A.50 then True else False
rolling_diff = pd.DataFrame(np.where(rolling_20<rolling_50, True, False), columns=df_raw_prices.columns, index=df_raw_prices.index)
# Apply following to rolling_diff dataframe:
# If yesterday == True and today == False
# Then "Sell"
# Elif yesterday == False and today == True
# Then "Buy"
# Else None
sell_opps = rolling_diff < rolling_diff.shift(periods=-1)
buy_opps = rolling_diff > rolling_diff.shift(periods=-1)
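# How the boolean comparison encodes a cross (illustrative values):
#   rolling_diff (MA20 < MA50):  ... False False True True ...
#   rolling_diff.shift(-1):      ... False True  True NaN  ...
# Since False < True evaluates to True, the "<" comparison flags the bar just
# before MA20 drops below MA50 (a Sell signal) and ">" flags the bar just
# before it rises back above (a Buy signal).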
# Replace boolean values with "Buy"/"Sell" strings, then merge into single dataframe
sell_opps.replace(to_replace=True, value="Sell", inplace=True)
sell_opps.replace(to_replace=False, value=np.nan, inplace=True)
buy_opps.replace(to_replace=True, value="Buy", inplace=True)
buy_opps.replace(to_replace=False, value=np.nan, inplace=True)
all_opps = sell_opps.mask(buy_opps=="Buy", buy_opps) # Merge buy_opps with sell_opps
# Get all historic recommendations for a given ticker
# ticker = "SU.PA"
# all_opps[ticker][~all_opps[ticker].isnull()]
# Get historic price on particular date for particular ticker
# df_raw_prices[ticker]["2020-03-11"]
# Total counts
# buy_opps
# sell_opps
# print(f"There are {all_opps[all_opps=='Sell'].count().sum()} sell opportunities.")
# print(f"There are {all_opps[all_opps=='Buy'].count().sum()} buy opportunities.")
# all_opps.count().sum() # Counts values that are neither NA or np.nan
# all_opps[all_opps=='Sell'].loc["2021-04-26"].count().sum()
# all_opps[all_opps=='Buy'].loc["2021-04-26"].count().sum()
opportunity_date = "2021-05-21"
# Generate Buy and Sell recommendations
recommend_buy = list(all_opps.loc[opportunity_date][all_opps.loc[opportunity_date]=="Buy"].index)
recommend_sell = list(all_opps.loc[opportunity_date][all_opps.loc[opportunity_date]=="Sell"].index)
# Function to plot recommendations
def plot_recommendation(ticker, start_date, end_date, ax):
base = df_raw_prices.loc[start_date:end_date][ticker]
rolling_20 = np.round(base.rolling(window=20, center=False).mean(), 2)
rolling_50 = np.round(base.rolling(window=50, center=False).mean(), 2)
rolling_200 = np.round(base.rolling(window=200, center=False).mean(), 2)
ax = ax
ax.plot(base, label="Close", linewidth=4, alpha=.2)
ax.plot(rolling_20, label="20d M.A.", linestyle="dashed", linewidth=2)
ax.plot(rolling_50, label="50d M.A.", linestyle="dashed", linewidth=2)
ax.plot(rolling_200, label="200d M.A.", linestyle="dashed", linewidth=2)
ax.set_ylabel("Close Price")
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=1))
ax.set_title(f"{ticker}")
plt.xticks(rotation=45)
plt.legend()
plt.grid()
# Plot Buy recommendations
if math.ceil(len(recommend_buy)/3) == 1:
    height = (math.ceil(len(recommend_buy)/3)*4)+3
else:
    height = (math.ceil(len(recommend_buy)/3)*4)
fig = plt.figure(figsize=(15, height))
for buy in recommend_buy:
n = recommend_buy.index(buy)+1
# print(n)
ax = fig.add_subplot(math.ceil(len(recommend_buy)/3), 3, n)
plot_recommendation(buy, "2021-01-01", opportunity_date, ax=ax)
plt.suptitle(f"BUY RECOMMENDATIONS {opportunity_date}\n")
plt.tight_layout()
# Plot Sell recommendations
if math.ceil(len(recommend_sell)/3) == 1:
    height = (math.ceil(len(recommend_sell)/3)*4)+3
else:
    height = (math.ceil(len(recommend_sell)/3)*4)
fig = plt.figure(figsize=(15,height))
for sell in recommend_sell:
n = recommend_sell.index(sell)+1
# print(n)
ax = fig.add_subplot(math.ceil(len(recommend_sell)/3), 3, n)
plot_recommendation(sell, "2021-01-01", opportunity_date, ax=ax)
plt.suptitle(f"SELL RECOMMENDATIONS {opportunity_date}\n")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Back-test Moving Average Strategy
###Code
# Function to run Back-testing
def run_backtest(start_date, end_date):
# Generate Buy trades
index = all_opps.index
buy_trades = {}
for ticker in all_opps.columns:
condition = all_opps[ticker] == "Buy"
buy_indices = index[condition]
buy_indices_list = buy_indices.tolist()
buy_ind_dates = [str(ts.year) + "-" + str(ts.month) + "-" + str(ts.day) for ts in buy_indices_list]
df_raw_prices.loc[buy_ind_dates][ticker]
buy_prices = list(df_raw_prices.loc[buy_ind_dates][ticker].values)
buy_trades[ticker] = list(zip(buy_ind_dates, buy_prices))
# Generate Sell trades
index = all_opps.index
sell_trades = {}
for ticker in all_opps.columns:
condition = all_opps[ticker] == "Sell"
sell_indices = index[condition]
sell_indices_list = sell_indices.tolist()
sell_ind_dates = [str(ts.year) + "-" + str(ts.month) + "-" + str(ts.day) for ts in sell_indices_list]
df_raw_prices.loc[sell_ind_dates][ticker]
sell_prices = list(df_raw_prices.loc[sell_ind_dates][ticker].values)
sell_trades[ticker] = list(zip(sell_ind_dates, sell_prices))
# Apply strategy and calculate p&l per ticker:
profits_losses = []
for ticker in df_raw_prices.columns.to_list()[:-2]: # -2 to exclude the indices CAC40 and Euronext
# for ticker in df_raw_prices.columns.to_list()[650:655]: # for testing use the first (tickers)
# print("=" * 72)
# print(ticker)
buy_dates = [datetime.strptime(trade[0], "%Y-%m-%d").date() for trade in buy_trades[ticker]]
sell_dates = [datetime.strptime(trade[0], "%Y-%m-%d").date() for trade in sell_trades[ticker]]
all_dates = buy_dates + sell_dates
# print(type(all_dates[0]))
direction = [-1 for date in buy_dates] + [1 for date in sell_dates]
buy_close = [trade[1] for trade in buy_trades[ticker]]
sell_close = [trade[1] for trade in sell_trades[ticker]]
all_close = buy_close + sell_close
df_executions = pd.DataFrame({
"Direction": direction,
"Close": all_close,
}, index=all_dates)
df_executions.sort_index(inplace=True)
df_executions["Cashflow"] = df_executions["Direction"] * df_executions["Close"]
# Keep executions only within the prescribed date range we are running the backtest for:
s = pd.to_datetime(start_date).date()
e = pd.to_datetime(end_date).date()
df_executions = df_executions[s:e]
# try:
# print("First", df_executions.iloc[0]["Direction"])
# except:
# pass
# try:
# print("Last,", df_executions.iloc[len(df_executions)-1]["Direction"])
# except:
# pass
# Execution Rules
# ---------------
# Always start with a Buy and always end with a Sell. Avoid short selling and unrealised p&l.
try:
# Drop first row if its a Sell
if df_executions.iloc[0]["Direction"] == 1:
df_executions.drop(df_executions.index[0], inplace=True, axis=0)
except:
pass
try:
# Drop last row if its a Buy
if df_executions.iloc[len(df_executions)-1]["Direction"] == -1:
df_executions.drop(df_executions.index[len(df_executions)-1], inplace=True, axis=0)
except:
pass
# print(df_executions)
ticker_pnl = df_executions["Cashflow"].sum()
profits_losses.append(ticker_pnl)
df_Backtest_Results = pd.DataFrame({
"Back-test PnL":profits_losses,
}, index=df_raw_prices.columns[:-2])
return df_Backtest_Results, df_executions
# Generate backtest results
# start_date = "2019-01-01"
# end_date = "2021-12-31"
# df_Backtest_Results, df_executions = run_backtest(start_date, end_date)
# df_executions
# df_Backtest_Results.sort_values(by="Back-test PnL", ascending=False)
# df_Backtest_Results.describe()
# df_Backtest_Results.sum()
# df_Backtest_Results.hist(bins=100, log=True, figsize=(10, 6), alpha=.5)
# df_Backtest_Results.loc["VLTSA.PA"]
###Output
_____no_output_____
###Markdown
Test widgets

Test 1: single widgets
###Code
# def print_value(myvalue):
# print(f"The current value is: {myvalue}")
# widgets.interact(print_value, myvalue=[1, 2, 3, 4, 5]) # use list to create dropdown
# widgets.interact(print_value, myvalue=(0, 10, 1)) # use tuple to create number slider
# widgets.interact(print_value, myvalue=(0, 10, .5)) # for decimal values
# widgets.interact(print_value, myvalue=True) # for boolean
# Multiselect dropdown. Use shift or ctrl to select multiple values
# w = widgets.SelectMultiple(
# options=['Apples', 'Oranges', 'Pears'],
# value=['Oranges'],
# #rows=10,
# description='Fruits',
# disabled=False
# )
# w
# print(w.value)
###Output
_____no_output_____
###Markdown
Test 2: multiple widgets
###Code
# def three_variables(x, y, z):
# return (x, y, z)
# _ = widgets.interact(
# three_variables,
# x=["Blue", "Green", "Black"],
# y=(1, 5, 1),
# z=True,
# )
###Output
_____no_output_____
###Markdown
Fixed variables
###Code
# _ = widgets.interact(
# three_variables,
# x=["Blue", "Green", "Black"],
# y=(1, 5, 1),
# z=widgets.fixed("I am fixed"),
# )
###Output
_____no_output_____
###Markdown
Create widgets with decorators
###Code
# @widgets.interact(x=(0, 10, 1), y=["a", "b", "c"])
# def print_slider_val(x, y):
# print(f"Slider says: {x}. Dropdown says: {y}")
###Output
_____no_output_____
###Markdown
Create one of my charts
###Code
# ticker_list = list(df_raw_prices.columns)
# ticker_list.sort()
# @widgets.interact(
# ticker=ticker_list,
# start_date=widgets.Text(value="2020-01-01", description="start date", continuous_update=False),
# end_date=widgets.Text(value="2020-12-31", description="end date", continuous_update=False),
# )
# def plot_ticker(ticker, start_date, end_date):
# df_raw_prices.loc[start_date:end_date][ticker].plot()
# plt.title(ticker)
###Output
_____no_output_____
###Markdown
Basic query
###Code
df = pd.read_sql("""
select
e.nom_ent,
m.nom_mun,
m.cve_ent,
m.cve_mun,
d.year,
d.fosas,
d.cuerpos,
d.cuerpos_identificados,
d.restos,
d.restos_identificados
from mapasdata d
join areas_geoestadisticas_municipales m
on d.cve_ent = m.cve_ent
and d.cve_mun = m.cve_mun
join areas_geoestadisticas_estatales e
on d.cve_ent = e.cve_ent
""", con)
df.head()
grouped = df.groupby(['nom_ent', 'nom_mun', 'year']).sum()
grouped.to_dict(orient='index')
grouped.to_dict(orient="index")
grouped.to_dict(orient="index")
###Output
_____no_output_____
###Markdown
Hyperspectral Image Analysis
###Code
%%capture
!python -m pip install --upgrade git+git://github.com/abraia/abraia-multiple.git
import os
if not os.getenv('ABRAIA_KEY'):
#@markdown <a href="https://abraia.me/console/settings" target="_blank">Get your ABRAIA_KEY</a>
abraia_key = '' #@param {type: "string"}
%env ABRAIA_KEY=$abraia_key
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12, 8]
plt.rcParams['figure.dpi'] = 100
from multiple import *
multiple = Multiple()
###Output
_____no_output_____
###Markdown
Load an hyperspectral dataset
###Code
%%capture
#@markdown <a href="https://abraia.me/console/gallery" target="_blank">Upload and manage your hyperspectral data</a>
data.load_dataset('PU')
multiple.upload('datasets/PaviaU.mat')
multiple.upload('datasets/PaviaU_gt.mat')
img = multiple.load_image('PaviaU.mat')
gt = multiple.load_image('PaviaU_gt.mat')
img.shape, gt.shape
###Output
_____no_output_____
###Markdown
Basic hyperspectral visualization
###Code
# Get some random bands from HSI cube
imgs, indexes = hsi.random(img)
# View the bands
fig, ax = plt.subplots(2, 3)
ax = ax.reshape(-1)
for i, im in enumerate(imgs):
ax[i].imshow(im, cmap='jet')
ax[i].axis('off')
###Output
_____no_output_____
###Markdown
Dimensionality reduction (PCA) and visualization
###Code
pc_img = hsi.principal_components(img)
plt.title('Principal components')
plt.imshow(pc_img)
plt.axis('off')
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Setting up the positional datasets

Here I'm going to load up a few datasets and then separate them out by position.
###Code
from phantasyfootballer.common import Stats, PLAYER_NAME
SCORING_MODE = 'ppr'
FP = Stats.FANTASY_POINTS
df_all = catalog.load(f'scoring.{SCORING_MODE}')
df_all = catalog.load(f'stats.weekly')
df_qb = df_all.query('position=="QB"')
df_wr = df_all.query('position=="WR"')
df_rb = df_all.query('position=="RB"')
df_te = df_all.query('position=="TE"')
df_qb.columns
df_annual = catalog.load('stats.season')
df = df_all.reset_index()
df.position.unique()
df_annual.position.unique()
df_all.query('position == "CB/RS"')
###Output
_____no_output_____
###Markdown
Defining plotting functions

Here it is helpful to put the plotting calls into a function, so I don't have to repeat everything in order to plot the graphs for each position.
###Code
def plot_4(plot, data, title='', **kwargs):
'''
Max a 2x2 plot for each skill position.
Parameters
----------
plot : callable
The plotting function to call
data : list[data]
a list with the four dataframes, QB, RB, WR, TE
title: str
The title of the plot
top_player: bool
If True, limit the output to only the top players,
else use them all
**kwargs : dict
Any arguments required to make the plot correct
'''
data = [df_qb, df_rb, df_wr, df_te] if data is None else data
#fig, ((ax1, ax2),(ax3, ax4)) = plt.subplots(2,2, figsize=(20,12))
fig, ax = plt.subplots(2,2, figsize=(20,12))
plot(data=data[0], **kwargs, ax=ax[0][0] ).set_title(f'{title} QB');
plot(data=data[1], **kwargs, ax=ax[0][1]).set_title(f'{title} RB');
plot(data=data[2], **kwargs, ax=ax[1][0]).set_title(f'{title} WR');
plot(data=data[3], **kwargs, ax=ax[1][1]).set_title(f'{title} TE');
def plot_4_xonly(plot, data, stat, title, **kwargs):
data = [df_qb, df_rb, df_wr, df_te] if data is None else data
fig, ax = plt.subplots(2,2, figsize=(20,12))
plot(data[0][stat], **kwargs, ax=ax[0][0] ).set_title(f'{title} QB');
plot(data[1][stat], **kwargs, ax=ax[0][1]).set_title(f'{title} RB');
plot(data[2][stat], **kwargs, ax=ax[1][0]).set_title(f'{title} WR');
plot(data[3][stat], **kwargs, ax=ax[1][1]).set_title(f'{title} TE');
###Output
_____no_output_____
###Markdown
Now, just getting a sense of how many players are better than average (the average player will have a value of 1)
###Code
plot_4_xonly(sns.distplot, None, title='Distribution of `%` avg position',stat=Stats.PCT_MEAN_OVR);
plot_4(sns.boxplot, None, 'Distribution of `%` avg position',x=Stats.TOP_PLAYER, y=Stats.PCT_MEAN_OVR);
###Output
_____no_output_____
###Markdown
Looking at all the players, ranking them by their overall draft value based on the number of points expected over the average player, and the median player.

Player Value

Alright, let's assume a player's value is based on the value they bring over the worst player in the position that we are willing to consider (so this is the TOP_PLAYER filter). Let's look at all the players and consider value based on that player.

Draft Impact

So let's consider what happens if I pass on this player: how much value is left in the position after I pick up or pass on the player? In other words, if RB1 is projected to score 300 points, and the rest of the running backs in the league (below him in draft order) are going to score 700 pts, then this guy is worth 0.3 `positional_value` (`Stats.POS_VALUE`) and the `Stats.POS_VALUE_REM` will be 0.7. This lets me compare the impact of picking or passing on this guy. If I'm looking at the board and trying to figure out whether I'm going to pick an RB or a WR, I'll compare the `Stats.POS_VALUE_REM` and go with the lower value (it means that the pickings are getting slimmer in that position).
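A tiny numeric sketch of that idea (hypothetical projections, not taken from the dataset):

```python
rb_points = [300, 250, 200, 150, 100]        # projected points, best RB first
pos_value = rb_points[0] / sum(rb_points)    # 0.30, share captured by drafting RB1
pos_value_remaining = 1 - pos_value          # 0.70, value left at RB if I pass
```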
###Code
df_all[[POSITION, Stats.POS_VALUE, Stats.POS_VALUE_REM]].sort_values(Stats.POS_VALUE_REM, ascending = False)[:200]
df_pivot = df_all.pivot_table(index=[POSITION, PLAYER_NAME], values=[Stats.POS_VALUE, Stats.POS_VALUE_REM])
from IPython.display import HTML
def highlight_gaps(value):
return 'background-color: yellow' if value < -0.16 else ''
qb_pivot = df_pivot.xs('QB').sort_values(Stats.POS_VALUE_REM, ascending=False)
qb_pivot['pct_change'] = qb_pivot[Stats.POS_VALUE_REM].pct_change()
pd.qcut(qb_pivot['pct_change'],10)
qb_pivot.style.applymap(highlight_gaps, subset=['pct_change'])
###Output
_____no_output_____
###Markdown
Data Loading
###Code
import json
import pydeck
import pickle
import numpy as np
import plotly.express as px
import matplotlib.pyplot as plt
import pandas as pd
import os
import shapefile
data_dir = "./data"
# load location index
with open(os.path.join(data_dir, "akl_loc_idx.pkl"), 'rb') as f:
loc_idx = pickle.load(f) # datazone to point index
idx_loc = {v:k for k, v in loc_idx.items()} # point index to datazone
print(f" -- loaded location index with dimension {len(loc_idx)}")
# load time index
with open(os.path.join(data_dir, "akl_t_idx.pkl"), 'rb') as f:
t_idx = pickle.load(f) # datetime to time index
idx_t = {v:k for k, v in t_idx.items()} # time index to datetime
print(f" -- loaded time index with dimension {len(t_idx)}")
# load precomputed odt
with open(os.path.join(data_dir, "akl_odt.npy"), 'rb') as f:
odt = np.load(f)
print(f" -- loaded odt cube with dimensions {odt.shape}")
# show odt time range
times = list(t_idx.keys())
print(min(times), "-", max(times))
# load polygon data
with open(os.path.join(data_dir, "akl_polygons_id.geojson")) as f:
polys = json.load(f)
# load shapefile of points (data zone population centroids)
sf_path = os.path.join(data_dir, "akl_points.shp")
sf = shapefile.Reader(sf_path)
records = sf.records()
coords = {}
for i, r in enumerate(records):
coords[r[0]] = sf.shape(i).points[0]
sf.close()
# load IMD
imd = pd.read_csv(os.path.join(data_dir, "akl_imd.csv"), index_col="DZ2018")
imd.head()
# load vdr
vdr = pd.read_csv(os.path.join(data_dir, "vdr_values.csv"), index_col="lzuid").dropna()
vdr.index = vdr.index.astype(np.int32)
#vdr = pd.read_csv("../data/vdr_values.csv").dropna()
# replace 'S' suppressed values with 0
# vdr["count_vdr"] = vdr["count_vdr"].replace('S', 0)
# vdr["pop"] = vdr["pop"].replace('S', 0)
#print(vdr[vdr.count_vdr == 'S'].shape[0]/vdr.shape[0])
# drop rows with suppressed values
#vdr = vdr.drop(vdr[vdr.count_vdr == 'S'].index)
# filter for valid counts
vdr = vdr[vdr.count_vdr != 'S']
# set types
#vdr = vdr.astype({"lzuid":np.int32, "mpoMaoriPacific":str, "ageband":str, "pop":np.int32, "count_vdr":np.int32})
vdr = vdr.astype({"mpoMaoriPacific":str, "ageband":str, "pop":np.int32, "count_vdr":np.int32})
vdr.head()
# clinics
# these are currently manually set to the nearest data zone location (population centroid)
clinics = pd.read_csv(os.path.join(data_dir, "akl_clinics.csv"), index_col="DZ2018")
clinics.head()
###Output
_____no_output_____
###Markdown
Some basic plotting with pydeckCan use this to select a location ID
###Code
# deckgl show polygons and location ids
view_state = pydeck.ViewState(
longitude=174.7633,
latitude=-36.8485,
zoom=11,
max_zoom=16,
pitch=0,
bearing=0
)
# default view
geojson = pydeck.Layer(
"GeoJsonLayer",
polys, # needs to be wgs84
opacity=0.2,
stroked=True,
line_width_min_pixels=1,
filled=True,
pickable=True,
auto_highlight=True,
get_fill_color=[128, 128, 128],
get_line_color=[255, 255, 255],
)
r = pydeck.Deck(
layers=[geojson],
initial_view_state=view_state,
map_style='mapbox://styles/mapbox/light-v9',
tooltip = {
"text": "Location: {id}"
}
)
#r.to_html("geojson_layer.html", iframe_width="100%")
r.show()
###Output
/home/smas036/anaconda3/envs/transit/lib/python3.7/site-packages/pydeck/bindings/deck.py:88: UserWarning: Mapbox API key is not set. This may impact available features of pydeck.
UserWarning,
###Markdown
One source to all destinations with one-way journey threshold
###Code
loc = 7600522 # seaview terrace, mt albert
lon, lat = coords[loc]
origin = loc_idx[loc] # get odt index from location id
dt = odt[origin, :, :] # get destination-time matrix for this origin
print(dt.shape)
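# odt is indexed as [origin, destination, departure time], so slicing one
# origin leaves an (n_destinations, n_departure_times) matrix of travel
# times in minutes.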
# view dt matrix
fig, ax = plt.subplots(figsize=(15, 15))
ax.imshow(np.transpose(dt))
ax.set_xlabel("location index")
ax.set_ylabel("time index")
# compute mean, std travel time
mean_tt = np.nanmean(dt, axis=1).reshape(-1, 1)
std_tt = np.nanstd(dt, axis=1).reshape(-1, 1)
# create dataframe
ids = np.array(list(idx_loc.values())).reshape(-1, 1)
d = np.concatenate((ids, mean_tt, std_tt), axis=1)
df = pd.DataFrame(d, columns=["id", "mean_tt", "std_tt"])
df = df.astype({'id': 'int32'})
df = df.dropna()
# join with imd
df = df.join(imd, on="id")
# threshold by mean_tt
threshold = 60 # minutes
df = df[df["mean_tt"] < threshold]
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
df,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="mean_tt",
color_continuous_scale="Viridis",
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
df,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="std_tt",
color_continuous_scale="Viridis",
opacity=1,
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
###Output
_____no_output_____
###Markdown
All locations to one destination with one-way journey threshold
###Code
print(clinics["name"])
# get clinic location by name
clinic_name = "Rehab Plus"
clinic_loc = clinics.index[clinics["name"] == clinic_name].tolist()[0]
print(clinic_name, clinic_loc)
loc = clinic_loc # or just set any location id
lon, lat = coords[loc]
destination = loc_idx[loc] # get odt index from location id
ot = odt[:, destination, :] # get origin-time matrix for this destination
print(ot.shape)
# view ot matrix
fig, ax = plt.subplots(figsize=(15, 15))
ax.imshow(np.transpose(ot))
ax.set_xlabel("location index")
ax.set_ylabel("time index")
# compute mean, std travel time
mean_tt = np.nanmean(ot, axis=1).reshape(-1, 1)
std_tt = np.nanstd(ot, axis=1).reshape(-1, 1)
# create dataframe
ids = np.array(list(idx_loc.values())).reshape(-1, 1)
d = np.concatenate((ids, mean_tt, std_tt), axis=1)
df = pd.DataFrame(d, columns=["id", "mean_tt", "std_tt"])
df = df.astype({'id': 'int32'})
df = df.dropna()
# join with imd
df = df.join(imd, on="id")
# threshold by mean_tt
threshold = 60 # minutes
df = df[df["mean_tt"] < threshold]
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
df,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="mean_tt",
color_continuous_scale="Viridis",
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
df,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="std_tt",
color_continuous_scale="Viridis",
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
###Output
_____no_output_____
###Markdown
Point to point travel times to investigate variability throughout the day
###Code
from_loc = 7600316 # high stdev
to_loc = clinic_loc
origin_idx = loc_idx[from_loc] # get odt index from location id
dest_idx = loc_idx[to_loc]
dt = odt[origin_idx, dest_idx, :] # get destination travel time series for this origin
dt = pd.Series(dt, index=pd.DatetimeIndex(list(t_idx.keys())))
print(dt.shape)
dt.plot(figsize=(15, 5), xlabel="Departure Time", ylabel="ETA (minutes)")
from_loc = 7600870 # low stdev
to_loc = clinic_loc
origin_idx = loc_idx[from_loc] # get odt index from location id
dest_idx = loc_idx[to_loc]
dt = odt[origin_idx, dest_idx, :] # get destination travel time series for this origin
dt = pd.Series(dt, index=pd.DatetimeIndex(list(t_idx.keys())))
dt.plot(figsize=(15, 5), xlabel="Departure Time", ylabel="ETA (minutes)")
###Output
_____no_output_____
###Markdown
Find the travel time from every location to every clinic
###Code
# get clinic odt indexs
clinic_locs = clinics.index.tolist()
clinic_idxs = [loc_idx[l] for l in clinic_locs]
# get clinic odt
odt_clinic = odt[:, clinic_idxs, :]
print(odt_clinic.shape)
# compute mean, std travel time
mean_tt = np.nanmean(odt_clinic, axis=-1)
std_tt = np.nanstd(odt_clinic, axis=-1)
print(mean_tt.shape)
# find minimum mean time and its stdev
min_tt = np.zeros((mean_tt.shape[0], 1))
min_tt_std = np.zeros((mean_tt.shape[0], 1))
for i in range(mean_tt.shape[0]):
min_t = mean_tt[i, 0]
min_j = 0
for j in range(1, mean_tt.shape[1]):
if np.isnan(min_t) and not np.isnan(mean_tt[i, j]):
min_t = mean_tt[i, j]
min_j = j
elif np.isnan(mean_tt[i, j]):
pass
elif mean_tt[i, j] < min_t:
min_t = mean_tt[i, j]
min_j = j
min_tt[i] = min_t
min_tt_std[i] = std_tt[i, min_j]
#print(i, min_t, min_j, std_tt[i, min_j])
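# A vectorized equivalent of the loop above (a sketch; unlike the loop it
# would raise on rows where every clinic time is NaN, so it is left commented out):
# min_j = np.nanargmin(mean_tt, axis=1)
# min_tt = mean_tt[np.arange(mean_tt.shape[0]), min_j].reshape(-1, 1)
# min_tt_std = std_tt[np.arange(std_tt.shape[0]), min_j].reshape(-1, 1)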
plt.figure(figsize=(10, 10))
plt.scatter(min_tt, min_tt_std)
plt.xlabel("Mean travel time to nearest clinic")
plt.ylabel("stdev travel time to nearest clinic")
# use log travel travel times to plot on map more easily
log_min_tt = np.log10(min_tt)
log_min_tt_std = np.log10(min_tt_std)
# replace destinations with 0 travel time
log_min_tt[np.isneginf(log_min_tt)] = 0
log_min_tt_std[np.isneginf(log_min_tt_std)] = 0
# create dataframe
ids = np.array(list(idx_loc.values())).reshape(-1, 1)
d = np.concatenate((ids, min_tt, min_tt_std, log_min_tt, log_min_tt_std), axis=1)
df = pd.DataFrame(d, columns=["id", "min_tt", "min_tt_std", "log_min_tt", "log_min_tt_std"])
df = df.astype({'id': 'int32'})
df = df.dropna()
# join with imd
df = df.join(imd, on="id")
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
df,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="log_min_tt",
color_continuous_scale="Viridis",
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
df,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="log_min_tt_std",
color_continuous_scale="Viridis",
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
loc = 7601177
idx = loc_idx[loc]
min_tt_std[idx]
###Output
_____no_output_____
###Markdown
Compared with deprivation
###Code
df.columns
# does accessibility to nearest clinic predict IMD Health Index?
x = "log_min_tt"
xlabel = "Travel time to nearest clinic"
y = "Health"
ylabel = f"{y} Index"
plt.figure(figsize=(10, 10))
plt.scatter(df[x], df[y])
plt.xlabel(xlabel)
plt.ylabel(ylabel)
# loop through them all
Y = ['Census18_P', 'IMD18', 'Employment', 'Income', 'Crime', 'Housing', 'Health', 'Education', 'Access']
x = "log_min_tt"
for y in Y:
plt.figure(figsize=(10, 10))
plt.scatter(df[x], df[y])
plt.xlabel(x)
plt.ylabel(y)
# rename id column and save
df.rename(columns={"id":"DZ2018"}).to_csv("imd-with-travel-time.csv", index=False)
# join with vdr
df_vdr = df.join(vdr, on="id")
df_vdr.head()
# select ethnicity and age combo
df_vdr[(df_vdr.mpoMaoriPacific == "MaoriPacific") & (df_vdr.ageband == "20-44")].head()
x = "log_min_tt"
plt.figure(figsize=(10, 10))
for eth in ["MaoriPacific"]:
for ageband in ["20-44", "45-64", "65+"]:
sample = df_vdr[(df_vdr.mpoMaoriPacific == eth) & (df_vdr.ageband == ageband)].dropna()
plt.scatter(sample[x], 100 * sample["count_vdr"]/sample["pop"], label=f"{eth}, {ageband}", alpha=0.5)
plt.xlabel("travel time (log10 minutes) to nearest clinic")
plt.ylabel("vdr as % of pop")
plt.ylim(0, 100)
plt.title(f"{eth} VDR vs travel time")
plt.legend()
x = "log_min_tt"
plt.figure(figsize=(10, 10))
for eth in ["nMnP"]:
for ageband in ["20-44", "45-64", "65+"]:
sample = df_vdr[(df_vdr.mpoMaoriPacific == eth) & (df_vdr.ageband == ageband)].dropna()
plt.scatter(sample[x], 100 * sample["count_vdr"]/sample["pop"], label=f"{eth}, {ageband}", alpha=0.5)
plt.xlabel("travel time (log10 minutes) to nearest clinic")
plt.ylabel("vdr as % of pop")
plt.ylim(0, 100)
plt.title(f"{eth} VDR vs travel time")
plt.legend()
# maps
eth = "MaoriPacific"
#eth = "nMnP"
ageband = "45-64"
print(f"VDR as % of pop. for {eth}, aged {ageband}")
sample = df_vdr[(df_vdr.mpoMaoriPacific == eth) & (df_vdr.ageband == ageband)].dropna()
sample["vdr_perc"] = 100 * sample["count_vdr"] / sample["pop"]
# plot - note geojson data needs to be wgs84
fig = px.choropleth_mapbox(
sample,
geojson=polys,
featureidkey="id",
locations="id",
center = {"lat": lat, "lon": lon},
mapbox_style="carto-positron",
color="vdr_perc",
color_continuous_scale="Viridis",
zoom=12)
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.show()
np.unique(df.id.values).shape[0]
np.unique(df_vdr.dropna().id.values).shape[0]
###Output
_____no_output_____
###Markdown
Let's start with Univariate analysis
###Code
# for numeric columns
#[feat for feat in df.columns if df[feat].dtypes != 'O']
numeric_cols = [feat for feat in df.select_dtypes(include=np.number)]
i,j=0,0
PLOTS_PER_ROW =2
fig, ax = plt.subplots(math.ceil(len(numeric_cols)/PLOTS_PER_ROW),PLOTS_PER_ROW, figsize=(18,20))
for feat in numeric_cols:
ax[i][j].hist(df[feat],bins=30)
ax[i][j].set_xlabel(feat)
ax[i][j].set_ylabel("Counts")
j+=1
if j%PLOTS_PER_ROW ==0:
i+=1
j=0
plt.show()
# QQ plot
i,j=0,0
PLOTS_PER_ROW = 2
fig, ax = plt.subplots(math.ceil(len(numeric_cols)/PLOTS_PER_ROW),PLOTS_PER_ROW,figsize=(18,20))
for feat in numeric_cols:
stats.probplot(df[feat],plot = ax[i][j] )
ax[i][j].set_ylabel(feat)
j += 1
if j%PLOTS_PER_ROW == 0:
j=0
i+=1
plt.show()
# area/bhk
area_per_bhk = df['area']/df['bhk']
price_per_sqft = df['price']/df['area']
fig,ax = plt.subplots(1,2, figsize=(18,6))
ax[0].hist(area_per_bhk,bins=30)
ax[0].set_xlabel("area per bhk")
ax[1].hist(price_per_sqft,bins=30)
ax[1].set_xlabel("price per sqft")
plt.show()
df['area_per_bhk'] = df['area']/df['bhk']
df['price_per_sqft'] = df['price']/df['area']
df['price_per_sqft'] = df['price_per_sqft']*100000
df.describe().T
# make a copy of df as df_main
df_main = df.copy()
# Let's draw scatter plot again price with each feature
i,j=0,0
PLOTS_PER_ROW =2
fig,ax = plt.subplots(math.ceil(len(numeric_cols)/PLOTS_PER_ROW),PLOTS_PER_ROW,figsize = (18,20))
for feat in numeric_cols:
ax[i,j].scatter(df[feat],df['price'])
ax[i,j].set_ylabel("Price")
ax[i,j].set_xlabel(feat)
j+=1
if j%PLOTS_PER_ROW==0:
i+=1
j=0
plt.show()
# categorical features
loc_list = []
price_list = []
for key,subdf in df.groupby("location"):
loc_list.append(key)
price_list.append(subdf['price'].mean())
df_loc_price = pd.DataFrame({'location':loc_list, 'price':price_list})
plt.figure(figsize=(18,6))
plt.scatter(df_loc_price['location'],df_loc_price['price'])
plt.xlabel("location")
plt.ylabel("price")
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Outlier detection process
###Code
def remove_outliers_price_sqft(df_local):
df_out = pd.DataFrame()
for key, subdf in df_local.groupby('location'):
        s = np.std(subdf['price_per_sqft'])
        m = np.mean(subdf['price_per_sqft'])
reduce_df = subdf[(subdf['price_per_sqft'] > ( m - s) ) & (subdf['price_per_sqft'] <= ( m + s)) ]
df_out = pd.concat([df_out, reduce_df],ignore_index=True)
return df_out
df1 = remove_outliers_price_sqft(df)
#df1
df_feat=df1
df
###Output
_____no_output_____
###Markdown
Ignore the cells below till Feature Engg
###Code
# get the upper and lower limits to remove the outliers
apb_IQR = df['area_per_bhk'].quantile(0.75) - df['area_per_bhk'].quantile(0.25)
pps_IQR = df['price_per_sqft'].quantile(0.75) - df['price_per_sqft'].quantile(0.25)
lower_apb_limit = df['area_per_bhk'].quantile(0.25) - 1.5*apb_IQR
upper_apb_limit = df['area_per_bhk'].quantile(0.75) + 1.5*apb_IQR
lower_pps_limit = df['price_per_sqft'].quantile(0.25) - 1.5*pps_IQR
upper_pps_limit = df['price_per_sqft'].quantile(0.75) + 1.5*pps_IQR
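# Standard Tukey fences: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are
# treated as outliers, applied here to both area_per_bhk and price_per_sqft.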
lower_apb_limit, upper_apb_limit, lower_pps_limit, upper_pps_limit
df_price_sqft_rmv = df[(df['price_per_sqft'] > lower_pps_limit) & (df['price_per_sqft'] <= upper_pps_limit)]
df_area_bhk_rmv = df_price_sqft_rmv[(df_price_sqft_rmv['area_per_bhk'] > lower_apb_limit) & (df_price_sqft_rmv['area_per_bhk'] <= upper_apb_limit)]
df_feat = df_area_bhk_rmv
# lets try 3rd std to remove
# Let's have a look at high area and price records before removing outliers
#df[df.area > 2000][['price','area']]
df['area'].describe()
df[(df.bhk == 2) & (df.area < 600)][['area','price','bhk']]
# outlier detection
std_price = df.price.std()
mean_price = df.price.mean()
df.price.describe()
#get 3rd min and max std
#min_std_val =
df.price.quantile(1-0.997)
df.price.quantile()
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
df_feat.shape
# let's handle categorical feature
#df['builder'].unique()
builder_stats = df_feat['builder'].value_counts(ascending=False)
builder_stats
builder_stats_less_then_10 = builder_stats[builder_stats <=10]
builder_stats_less_then_10
df_feat['builder'] = df_feat['builder'].apply(lambda x: 'other' if x in builder_stats_less_then_10 else x)
df_feat['builder']
#df['location'].unique()
location_stats = df_feat['location'].value_counts(ascending=False)
location_stats
location_stats_less_then_10 = location_stats[location_stats <= 10]
location_stats_less_then_10
df_feat['location'] = df_feat['location'].apply(lambda x: 'other' if x in location_stats_less_then_10 else x)
df['status'].unique()
#drop additional columns
df_new = df_feat.drop(['area_per_bhk','price_per_sqft'],axis=1)
df_final = pd.get_dummies(df_new,columns=['status','location','builder'])
Y = df_final['price']
X = df_final.drop(['price'],axis=1)
Y
###Output
_____no_output_____
###Markdown
Split the dataset into train and test datasets
###Code
X_train,X_test,y_train,y_test = train_test_split(X,Y,test_size=0.2,random_state=0)
X_train.shape,X_test.shape
y_train.shape,y_test.shape
lr = LinearRegression()
lr.fit(X_train,y_train)
y_pred = lr.predict(X_test)
r2_score(y_test,y_pred)
# Let's try Decision tree regressor
lr.coef_
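# A minimal follow-up to the "Decision tree regressor" note above; this is an
# illustrative sketch (sklearn's DecisionTreeRegressor with arbitrary hyperparameters),
# not a tuned model from the original analysis.
from sklearn.tree import DecisionTreeRegressor
dt = DecisionTreeRegressor(max_depth=10, random_state=0)
dt.fit(X_train, y_train)
r2_score(y_test, dt.predict(X_test))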
###Output
_____no_output_____
###Markdown
Analysis of 2018 fake news URLs and comparison with 2016 and 2017 fake news sites
###Code
import pandas as pd
from datetime import datetime
from urllib.parse import urlparse
import calendar
import re
%matplotlib inline
pd.set_option('max_colwidth', 200)
###Output
_____no_output_____
###Markdown
Load the lists of sites by year
###Code
sites_2018 = pd.read_csv("../data/sites_2018.csv")
sites_2017 = pd.read_csv("../data/sites_2017.csv")
sites_2016 = pd.read_csv("../data/sites_2016.csv")
sites_2018.head()
len(sites_2018)
###Output
_____no_output_____
###Markdown
Check for duplicates
###Code
assert sites_2018["domain"].value_counts().max() == 1
assert sites_2017["domain"].value_counts().max() == 1
assert sites_2016["domain"].value_counts().max() == 1
###Output
_____no_output_____
###Markdown
**Sites in the 2018 list that are not in the 2017 list**
###Code
difference = len(
set(sites_2018["domain"]) - set(sites_2017["domain"])
)
difference
###Output
_____no_output_____
###Markdown
By percent
###Code
total = len(sites_2018)
round(difference / total, 4)
###Output
_____no_output_____
###Markdown
**Sites in the 2018 list that are not in the 2016 list**
###Code
difference = len(
set(sites_2018["domain"]) - set(sites_2016["domain"])
)
difference
###Output
_____no_output_____
###Markdown
By percentage
###Code
round(difference / total, 4)
###Output
_____no_output_____
###Markdown
**Sites that survived from 2016 to 2018**
###Code
sites_2016_to_2018 = set(sites_2018["domain"]).intersection(set(sites_2016["domain"]))
for site in sorted(sites_2016_to_2018):
print(site)
len(sites_2016_to_2018)
###Output
_____no_output_____
###Markdown
**Sites that survived from 2017 to 2018**
###Code
sites_2017_to_2018 = set(sites_2018["domain"]).\
intersection(set(sites_2017["domain"]))
for site in sorted(sites_2017_to_2018):
print(site)
len(sites_2017_to_2018)
###Output
_____no_output_____
###Markdown
Facebook engagement **Top stories**
###Code
# load 2018 list and parse dates
top_2018 = pd.read_csv(
"../data/top_2018.csv",
thousands = ',',
dtype = {"fb_engagement": int},
parse_dates = ['published_date'])\
.dropna(axis = "index", subset = ['url']) \
.sort_values('fb_engagement', ascending = False)[0:50]
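# the chained operations above drop rows without a URL and keep the 50 stories
# with the highest Facebook engagement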
top_2018.head(10)[['title', 'fb_engagement']]
###Output
_____no_output_____
###Markdown
Total engagement
###Code
top_2018['fb_engagement'].sum()
###Output
_____no_output_____
###Markdown
Top domains in 2018
###Code
# pull the domain out of the url
top_2018['domain'] = top_2018['url'].apply(
lambda each: re.sub(r"^www\.", "", urlparse(each).netloc)
)
# group and count
top_2018['domain'].value_counts()\
.to_frame("count")\
.loc[lambda frame: frame['count'] > 1]
###Output
_____no_output_____
###Markdown
What types of stories did well in 2018?
###Code
# Note: Counts nulls as "Other"
top_cats_2018 = top_2018['category']\
.fillna("Other")\
.value_counts()\
.to_frame("count")
top_cats_2018
###Output
_____no_output_____
###Markdown
Categories by proportion of whole
###Code
top_2018['category']\
.fillna("Other")\
.value_counts(normalize = True)\
.to_frame("proportion")
###Output
_____no_output_____
###Markdown
Posts over time
###Code
# re-index by date
top_2018.index = top_2018['published_date']
# group by month and convert integer to named month
top_2018_bymonth = top_2018.groupby(pd.Grouper(freq='M')) \
.count()
top_2018_bymonth.index = top_2018_bymonth.index.map(
lambda each: calendar.month_name[each.month]
)
top_2018_bymonth['title'].to_frame('count')
###Output
_____no_output_____
###Markdown
Charted
###Code
top_2018_bymonth['title'].plot(
kind = "bar",
color = "steelblue",
figsize = (9, 6),
);
###Output
_____no_output_____
###Markdown
LRUD
###Code
freq_df = plot_freqs(lrud_generators)
pd.concat([english_freqs,freq_df], axis=1)
###Output
_____no_output_____
###Markdown
UDLR
###Code
plot_freqs(udlr_generators)
###Output
_____no_output_____
###Markdown
Data Source: https://www.uci.org/mountain-bike/results
###Code
import json
DICIPLINE_ID_MOUNTAIN_BIKE = '7'
RACE_TYPE_ID_DOWNHILL = '19'
RACE_TYPE_ID_ENDURO = '122'
RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC = '92'
SEASON_ID_YEAR_MAP = {
2020: '129',
2019: '128',
2018: '123',
2017: '22',
2016: '12',
2015: '4',
2014: '102',
2013: '103',
2012: '104',
2011: '105',
2010: '106',
2009: '107',
}
COMPETITION_CLASS_CODE_WORLD_CHAMPS = 'CM'
COMPETITION_CLASS_CODE_WORLD_CUP = 'CDM'
COMPETITION_CLASS_CODE_ENDURO_WORLD_SERIES = '3'
CATEGORY_CODE_MEN_ELITE = 'Men Elite'
CATEGORY_CODE_WOMEN_ELITE = 'Women Elite'
RACE_TYPE_CODE_DHI = 'DHI'
RACE_TYPE_CODE_ENDURO = 'END'
RACE_TYPE_CODE_XCO = 'XCO'
RACE_TYPE_ID_TO_CODE_MAP = {
RACE_TYPE_ID_DOWNHILL: RACE_TYPE_CODE_DHI,
RACE_TYPE_ID_ENDURO: RACE_TYPE_CODE_ENDURO,
RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC: RACE_TYPE_CODE_XCO
}
with open('../data/competitions_with_races_and_events_and_results.json') as f:
competitions_with_races_and_events_and_results = json.load(f)
total_age_dh_mens = 0
number_of_items_in_total_age_dh_mens = 0
total_age_enduro_mens = 0
number_of_items_in_total_age_enduro_mens = 0
total_age_xc_mens = 0
number_of_items_in_total_age_xc_mens = 0
total_age_dh_womens = 0
number_of_items_in_total_age_dh_womens = 0
total_age_enduro_womens = 0
number_of_items_in_total_age_enduro_womens = 0
total_age_xc_womens = 0
number_of_items_in_total_age_xc_womens = 0
for race_type in competitions_with_races_and_events_and_results:
for year in competitions_with_races_and_events_and_results[race_type]:
for competition in competitions_with_races_and_events_and_results[race_type][year]:
for race in competition['races']:
for category_code in [CATEGORY_CODE_MEN_ELITE, CATEGORY_CODE_WOMEN_ELITE]:
if category_code in race['events']:
age = int(race['events'][category_code]['results'][0]['Age'])
if race_type == RACE_TYPE_ID_DOWNHILL:
if category_code == CATEGORY_CODE_MEN_ELITE:
total_age_dh_mens += age
number_of_items_in_total_age_dh_mens += 1
elif category_code == CATEGORY_CODE_WOMEN_ELITE:
total_age_dh_womens += age
number_of_items_in_total_age_dh_womens += 1
elif race_type == RACE_TYPE_ID_ENDURO:
if category_code == CATEGORY_CODE_MEN_ELITE:
total_age_enduro_mens += age
number_of_items_in_total_age_enduro_mens += 1
elif category_code == CATEGORY_CODE_WOMEN_ELITE:
total_age_enduro_womens += age
number_of_items_in_total_age_enduro_womens += 1
elif race_type == RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC:
if category_code == CATEGORY_CODE_MEN_ELITE:
total_age_xc_mens += age
number_of_items_in_total_age_xc_mens += 1
elif category_code == CATEGORY_CODE_WOMEN_ELITE:
total_age_xc_womens += age
number_of_items_in_total_age_xc_womens += 1
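# Illustrative alternative (not used by the code below): the same running totals can be
# kept in one dict keyed by (race type, category) instead of twelve separate counters.
from collections import defaultdict
age_totals = defaultdict(lambda: [0, 0])  # (race_type_id, category_code) -> [sum of winner ages, event count]
for race_type in competitions_with_races_and_events_and_results:
    for year in competitions_with_races_and_events_and_results[race_type]:
        for competition in competitions_with_races_and_events_and_results[race_type][year]:
            for race in competition['races']:
                for category_code in [CATEGORY_CODE_MEN_ELITE, CATEGORY_CODE_WOMEN_ELITE]:
                    if category_code in race['events']:
                        age = int(race['events'][category_code]['results'][0]['Age'])
                        age_totals[(race_type, category_code)][0] += age
                        age_totals[(race_type, category_code)][1] += 1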
def round_age(age):
return round(age, 2)
print('Average Age of Event Winners (years)')
print(f"DH Men's: {round_age(total_age_dh_mens / number_of_items_in_total_age_dh_mens)}")
print(f"DH Women's: {round_age(total_age_dh_womens / number_of_items_in_total_age_dh_womens)}")
print(f"Enduro Men's: {round_age(total_age_enduro_mens / number_of_items_in_total_age_enduro_mens)}")
print(f"Enduro Women's: {round_age(total_age_enduro_womens / number_of_items_in_total_age_enduro_womens)}")
print(f"XCO Men's: {round_age(total_age_xc_mens / number_of_items_in_total_age_xc_mens)}")
print(f"XCO Women's: {round_age(total_age_xc_womens / number_of_items_in_total_age_xc_womens)}")
###Output
Average Age of Event Winners (years)
DH Men's: 26.53
DH Women's: 26.64
Enduro Men's: 26.83
Enduro Women's: 27.72
XCO Men's: 29.11
XCO Women's: 28.0
###Markdown
Analysis of Oscar-nominated Films
###Code
import re
import numpy as np
import pandas as pd
import scipy.stats as stats
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sb
sb.set(color_codes=True)
sb.set_palette("muted")
np.random.seed(sum(map(ord, "regression")))
awards = pd.read_csv('../data/nominations.csv')
oscars = pd.read_csv('../data/analysis.csv')
###Output
_____no_output_____
###Markdown
Descriptive Analysis
This section explores general trends in the data. It is a work in progress. *Last updated on: February 26, 2017.*
Seasonality
It is well known that movies gunning for an Academy Award aim to be released between December and February, two months before the award ceremony. This is pretty evident looking at a distribution of film release months:
###Code
sb.countplot(x="release_month", data=oscars)
###Output
_____no_output_____
###Markdown
This can be more or less confirmed by calculating the Pearson correlation coefficient, which measures the linear dependence between two variables:
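For reference, for two samples $x$ and $y$, Pearson's coefficient is $r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\sqrt{\sum_i (y_i - \bar{y})^2}}$, which ranges from $-1$ (perfect negative linear relationship) to $+1$ (perfect positive linear relationship).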
###Code
def print_pearsonr(data, dependent, independent):
for field in independent:
coeff = stats.pearsonr(data[dependent], data[field])
        print("{0} | coeff: {1} | p-value: {2}".format(field, coeff[0], coeff[1]))
print_pearsonr(oscars, 'Oscar', ['q1_release', 'q2_release', 'q3_release', 'q4_release'])
###Output
q1_release | coeff: 0.0516477112662 | p-value: 0.0457924227928
q2_release | coeff: -0.0315055870144 | p-value: 0.223276479778
q3_release | coeff: 0.00108129557868 | p-value: 0.966668039253
q4_release | coeff: -0.0317501964736 | p-value: 0.219701562299
###Markdown
Q1 is the only quarter with a positive, statistically significant coefficient (p < 0.05), while the others hover around zero, so that points in the right direction... This won't really help us determine **who** will win the actual Oscar, but at least we know that if we want a shot, we need to be releasing in late Q4 and early Q1.
Profitability
How do the financial details contribute to Oscar success?
###Code
# In case we want to examine the data based on the release decade...
oscars['decade'] = oscars['year'].apply(lambda y: str(y)[2] + "0")
# Adding some fields to slice and dice...
profit = oscars[~oscars['budget'].isnull()]
profit = profit[~profit['box_office'].isnull()]
profit['profit'] = profit['box_office'] - profit['budget']
profit['margin'] = profit['profit'] / profit['box_office']
###Output
_____no_output_____
###Markdown
Profitability by Award Category
Since 1980, the profitability of films that won an Oscar has on average been higher than that of all films nominated that year.
###Code
avg_margin_for_all = profit.groupby(['category'])['margin'].mean()
avg_margin_for_win = profit[profit['Oscar'] == 1].groupby(['category'])['margin'].mean()
fig, ax = plt.subplots()
index = np.arange(len(profit['category'].unique()))
rects1 = plt.bar(index, avg_margin_for_win, 0.45, color='r', label='Won')
rects2 = plt.bar(index, avg_margin_for_all, 0.45, color='b', label='All')
plt.xlabel('Award Category')
ax.set_xticklabels(profit['category'].unique(), rotation='vertical')
plt.ylabel('Profit Margin (%)')
plt.title('Average Profit Margin by Award Category')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The biggest losers... that won?
This is just a fun fact. There were 5 awards since 1980 that were given to films that actually *lost* money.
###Code
fields = ['year', 'film', 'category', 'name', 'budget', 'box_office', 'profit', 'margin']
profit[(profit['profit'] < 0) & (profit['Oscar'] == 1)][fields]
###Output
_____no_output_____
###Markdown
Other Awards
Do the BAFTAs, Golden Globes, Screen Actors Guild Awards, etc. forecast who is going to win the Oscars? Let's find out...
###Code
winning_awards = oscars[['category', 'Oscar', 'BAFTA', 'Golden Globe', 'Guild']]
winning_awards.head()
acting_categories = ['Actor', 'Actress', 'Supporting Actor', 'Supporting Actress']
y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'].isin(acting_categories))]
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)
plt.title('Count Plot of Wins by Award')
sb.countplot(x="BAFTA", data=y, ax=ax1)
sb.countplot(x="Golden Globe", data=y, ax=ax2)
sb.countplot(x="Guild", data=y, ax=ax3)
print("Pearson correlation for acting categories\n")
print_pearsonr(oscars[oscars['category'].isin(acting_categories)], 'Oscar', ['BAFTA', 'Golden Globe', 'Guild'])
###Output
Pearson correlation for acting categories
BAFTA | coeff: 0.343106753782 | p-value: 1.53555701801e-21
Golden Globe | coeff: 0.471264631568 | p-value: 1.59689509599e-41
Guild | coeff: 0.467345698872 | p-value: 8.92371589745e-41
###Markdown
It looks like the Golden Globes and Screen Actors Guild awards are better indicators of Oscar success than the BAFTAs. Let's take a look at the same analysis, but for Best Picture. The "Guild" award we use is the [Screen Actors Guild Award for Outstanding Performance by a Cast in a Motion Picture](https://en.wikipedia.org/wiki/Screen_Actors_Guild_Award_for_Outstanding_Performance_by_a_Cast_in_a_Motion_Picture).
###Code
y = winning_awards[(winning_awards['Oscar'] == 1)&(winning_awards['category'] == 'Picture')]
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, sharey=True)
plt.title('Count Plot of Wins by Award')
sb.countplot(x="BAFTA", data=y, ax=ax1)
sb.countplot(x="Golden Globe", data=y, ax=ax2)
sb.countplot(x="Guild", data=y, ax=ax3)
print("Pearson correlation for acting categories\n")
print_pearsonr(oscars[oscars['category'] == 'Picture'], 'Oscar', ['BAFTA', 'Golden Globe', 'Guild'])
###Output
Pearson correlation for acting categories
BAFTA | coeff: 0.303661894517 | p-value: 5.22830069622e-06
Golden Globe | coeff: 0.367860033304 | p-value: 2.34135839185e-08
Guild | coeff: 0.286173459405 | p-value: 1.8594940365e-05
###Markdown
Check class balance in test data
###Code
data = Utils().test_data('data/sst/sst_test.txt')
ax = data['truth'].value_counts(sort=False).plot(kind='barh');
ax.set_xlabel("Number of Samples in Test Set");
ax.set_ylabel("Label");
###Output
_____no_output_____
###Markdown
TextBlob
###Code
tb = textblob('data/sst/sst_test.txt', lower_case=False)
plot_confusion_matrix(tb['truth'], tb['textblob_pred'], normalize=True);
###Output
Accuracy: 28.3710407239819
Macro F1-score: 0.2468141571266554
Normalized confusion matrix
###Markdown
Vader
###Code
va = vader('data/sst/sst_test.txt', lower_case=False)
plot_confusion_matrix(va['truth'], va['vader_pred'], normalize=True);
###Output
Accuracy: 31.538461538461537
Macro F1-score: 0.31297326018199634
Normalized confusion matrix
###Markdown
FastText
###Code
ft = fasttext('data/sst/sst_test.txt',
model='models/fasttext/sst.bin',
lower_case=False)
plot_confusion_matrix(ft['truth'], ft['fasttext_pred'], normalize=True);
###Output
Accuracy: 41.40271493212669
Macro F1-score: 0.3866337724462768
Normalized confusion matrix
###Markdown
Array API Comparison Notebook dependencies and initial setup...
###Code
import os
import pandas
import numpy as np
import matplotlib.pyplot as plt
# Adjust the default figure size:
plt.rcParams["figure.figsize"] = (20,10)
###Output
_____no_output_____
###Markdown
Find the root project directory...
###Code
# Determine the current working directory:
dir = os.getcwd()
# Walk the parent directories looking for a `package.json` file located in the root directory...
child = ''
while (child != dir):
spath = os.path.join(dir, 'package.json')
if (os.path.exists(spath)):
root_dir = dir
break
child = dir
dir = os.path.dirname(dir)
###Output
_____no_output_____
###Markdown
Resolve the directory containing data files...
###Code
data_dir = os.path.join(root_dir, 'data')
###Output
_____no_output_____
###Markdown
* * *
Overview
The following array libraries were initially analyzed:
- [**NumPy**][numpy]: serves as the reference API against which all other array libraries are compared.
- [**CuPy**][cupy]
- [**Dask.array**][dask-array]
- [**JAX**][jax]
- [**MXNet**][mxnet]
- [**PyTorch**][pytorch]
- [**rnumpy**][rnumpy]: an opinionated curation of NumPy APIs, serving as an exercise in evaluating what is most "essential" (i.e., the smallest set of building block functionality on which most array functionality can be built).
- [**PyData/Sparse**][pydata-sparse]
- [**TensorFlow**][tensorflow]
The data from this analysis can be found in the "join" dataset below.
From the initial array library list, the following array libraries were subsequently analyzed in order to determine relatively common APIs:
- [**NumPy**][numpy]
- [**CuPy**][cupy]
- [**Dask.array**][dask-array]
- [**JAX**][jax]
- [**MXNet**][mxnet]
- [**PyTorch**][pytorch]
- [**TensorFlow**][tensorflow]
[**PyData/Sparse**][pydata-sparse] was omitted due to insufficient and relatively nascent API coverage. [**rnumpy**][rnumpy] was omitted due to its nature as an intellectual exercise exploring what a minimal API could look like, rather than a ubiquitous library having widespread usage.
In order to understand array API usage by downstream libraries, the following downstream libraries were analyzed (for additional information, see the [Python API Record][python-api-record] tooling repository):
- [**Dask.array**][dask-array]
- [**Matplotlib**][matplotlib]
- [**pandas**][pandas]
- [**scikit-image**][scikit-image] (alias: `skimage`)
- [**xarray**][xarray]
[cupy]: https://docs-cupy.chainer.org/en/stable/reference/comparison.html
[dask-array]: https://docs.dask.org/en/latest/array-api.html
[jax]: https://jax.readthedocs.io/en/latest/
[mxnet]: https://numpy.mxnet.io/api/deepnumpy
[numpy]: https://docs.scipy.org/doc/numpy
[pydata-sparse]: https://github.com/pydata/sparse
[pytorch]: https://pytorch.org/docs/stable/
[rnumpy]: https://github.com/Quansight-Labs/rnumpy
[tensorflow]: https://www.tensorflow.org/api_docs/python
[matplotlib]: https://matplotlib.org/
[pandas]: https://pandas.pydata.org/
[scikit-image]: https://scikit-image.org/
[xarray]: https://xarray.pydata.org/en/latest/
[python-api-record]: https://github.com/data-apis/python-api-record
* * *
Datasets
This notebook contains the following datasets...
Categories
Load a table mapping NumPy APIs to a usage "category"...
###Code
CATEGORIES = pandas.read_csv(os.path.join(data_dir, 'raw', 'numpy_categories.csv')).fillna(value='(other)')
###Output
_____no_output_____
###Markdown
Compute the number of rows, which will inform us as to the number of NumPy APIs...
###Code
NUM_APIS = len(CATEGORIES.index)
NUM_APIS
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
CATEGORIES.head()
###Output
_____no_output_____
###Markdown
In the above table, the first column corresponds to the NumPy API (arranged in alphabetical order). The second column corresponds to a high-level category (as inspired by categorization found in [**rnumpy**][rnumpy]). The third column corresponds to a subcategory of the respective value in the second column. The categories are as follows:
- `binary_ops`: APIs for performing bitwise operations
- `creation`: APIs for array creation
- `datetime`: APIs for manipulating dates and times
- `finance`: APIs for computing financial quantities
- `indexing`: APIs for array indexing
- `io`: APIs for loading and writing data
- `linalg`: APIs for performing linear algebra operations (e.g., dot product, matrix multiplication, etc.)
- `logical`: APIs for logical operations (e.g., element-wise comparisons)
- `manipulation`: APIs for array manipulation (e.g., reshaping and joining arrays)
- `math`: APIs for basic mathematical functions (e.g., element-wise elementary functions)
- `polynomials`: APIs for evaluating polynomials
- `random`: APIs for pseudorandom number generation
- `sets`: APIs for performing set operations (e.g., union, intersection, complement, etc.)
- `signal_processing`: APIs for performing signal processing (e.g., FFTs)
- `sorting`: APIs for sorting array elements
- `statistics`: APIs for computing statistics (e.g., reductions such as computing the mean, variance, and standard deviation)
- `string`: APIs for operating on strings
- `utilities`: general utilities (e.g., displaying an element's binary representation)
- `(other)`: APIs not categorized (or subcategorized)
API categorization was manually compiled based on personal judgment and is undoubtedly imperfect.
[rnumpy]: https://github.com/Quansight-Labs/rnumpy
###Code
CATEGORY_NAMES = [
'(other)',
'binary_ops',
'creation',
'datetime',
'finance',
'indexing',
'io',
'linalg',
'logical',
'manipulation',
'math',
'polynomials',
'random',
'sets',
'signal_processing',
'sorting',
'statistics',
'string',
'utilities'
]
###Output
_____no_output_____
###Markdown
Of the list of category names, we can define a subset of "core" categories (again, based on personal judgment)...
###Code
CORE_CATEGORY_NAMES = [
'creation',
'indexing',
'linalg',
'logical',
'manipulation',
'math',
'signal_processing', # mainly because of FFT
'sorting',
'statistics'
]
NON_CORE_CATEGORY_NAMES = np.setdiff1d(CATEGORY_NAMES, CORE_CATEGORY_NAMES).tolist()
###Output
_____no_output_____
###Markdown
From the category data above, we can determine the relative composition of the NumPy API...
###Code
category_breakdown = CATEGORIES.groupby(by=['category', 'subcategory']).count()
category_breakdown
###Output
_____no_output_____
###Markdown
We can visualize the relative composition for top-level categories as follows
###Code
category_count = CATEGORIES.loc[:,['name','category']].groupby(by='category').count().sort_values(by='name', ascending=True)
category_count.plot.barh()
###Output
_____no_output_____
###Markdown
If we omit functions which are not in "core" categories, we arrive at the following API frequency distribution...
###Code
# Compute the total number of non-"core" NumPy APIs:
non_core_categories_num_apis = category_count.loc[NON_CORE_CATEGORY_NAMES,:].sum()
# Create a DataFrame containing only NumPy APIs considered "core" and compute the empirical frequency distribution:
core_category_distribution = category_count.drop(index=NON_CORE_CATEGORY_NAMES) / (NUM_APIS-non_core_categories_num_apis)
core_category_distribution.sort_values(by='name', ascending=False)
core_category_distribution.sort_values(by='name', ascending=True).plot.barh()
###Output
_____no_output_____
###Markdown
NumPy Methods Load a table mapping NumPy `ndarray` methods to equivalent top-level NumPy APIs...
###Code
METHODS_TO_FUNCTIONS = pandas.read_csv(os.path.join(data_dir, 'raw', 'numpy_methods_to_functions.csv')).fillna(value='-')
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(METHODS_TO_FUNCTIONS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
METHODS_TO_FUNCTIONS.head(10)
METHODS_TO_FUNCTIONS.tail(10)
###Output
_____no_output_____
###Markdown
Join Load API data for each array library as a single table, using NumPy as the reference API...
###Code
JOIN = pandas.read_csv(os.path.join(data_dir, 'join.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(JOIN.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
JOIN.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`. Intersection Load a table containing the API intersection (i.e., APIs implemented in **all** compared array libraries)...
###Code
INTERSECTION = pandas.read_csv(os.path.join(data_dir, 'intersection.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(INTERSECTION.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
INTERSECTION.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library.Using the API categorization data above, we can associate each NumPy API in the intersection with its respective category...
###Code
intersection_categories = pandas.merge(
INTERSECTION[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
intersection_categories.drop('numpy', axis=1, inplace=True)
intersection_categories.head()
###Output
_____no_output_____
###Markdown
From the previous table, we can compute the category composition of the intersection, which is as follows:
###Code
intersection_category_count = intersection_categories.loc[:,['name', 'category']].fillna(value='(other)').groupby(by='category').count().sort_values(by='name', ascending=False)
intersection_category_count
###Output
_____no_output_____
###Markdown
From which we can compute the empirical distribution...
###Code
intersection_category_distribution = intersection_category_count / intersection_category_count.sum()
intersection_category_distribution
###Output
_____no_output_____
###Markdown
whereby
- `~50%` are basic element-wise mathematical functions, such as arithmetic and trigonometric functions
- `~25%` are array creation and manipulation functions
- `~15%` are linear algebra functions
- `~10%` are indexing and statistics
###Code
intersection_category_count.sort_values(by='name', ascending=True).plot.barh()
###Output
_____no_output_____
###Markdown
Summary
Array libraries find the most agreement in providing APIs for (1) array creation and manipulation, (2) element-wise operations for evaluating elementary mathematical functions, (3) basic summary statistics, and (4) linear algebra operations.
Complement (intersection)
Load a table containing the API complement (i.e., APIs **not** included in the intersection above)...
###Code
COMPLEMENT = pandas.read_csv(os.path.join(data_dir, 'complement.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMPLEMENT.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMPLEMENT.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`.Using the API categorization data above, we can associate each NumPy API in the complement with its respective category...
###Code
complement_categories = pandas.merge(
COMPLEMENT[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
complement_categories.drop('numpy', axis=1, inplace=True)
complement_categories.head()
###Output
_____no_output_____
###Markdown
Common APIs Load a table containing (relatively) common APIs (where "common" is defined as existing in **at least** `5` of the `7` compared array libraries; this dataset may be considered a weaker and more inclusive intersection)...
###Code
COMMON_APIS = pandas.read_csv(os.path.join(data_dir, 'common_apis.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMMON_APIS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMMON_APIS.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`.Using the API categorization data above, we can associate each NumPy API in the list of common APIs with its respective category...
###Code
common_apis_categories = pandas.merge(
COMMON_APIS[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
common_apis_categories.drop('numpy', axis=1, inplace=True)
common_apis_categories.head()
###Output
_____no_output_____
###Markdown
From the previous table, we can compute the category composition of the list of common APIs, which is as follows:
###Code
common_apis_category_count = common_apis_categories.loc[:,['name', 'category']].fillna(value='(other)').groupby(by='category').count().sort_values(by='name', ascending=False)
common_apis_category_count
###Output
_____no_output_____
###Markdown
From which we can compute the empirical distribution...
###Code
common_apis_category_distribution = common_apis_category_count / common_apis_category_count.sum()
common_apis_category_distribution
common_apis_category_count.sort_values(by='name', ascending=True).plot.barh()
###Output
_____no_output_____
###Markdown
Summary
In addition to the categories discussed above in the `Intersection` section, array libraries find general agreement in providing APIs for (1) logical operations, (2) signal processing, and (3) indexing.
Complement (common APIs)
Load a table containing the complement of the above common APIs...
###Code
COMMON_COMPLEMENT = pandas.read_csv(os.path.join(data_dir, 'common_complement.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMMON_COMPLEMENT)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMMON_COMPLEMENT.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`.Using the API categorization data above, we can associate each NumPy API in the complement with its respective category...
###Code
common_complement_categories = pandas.merge(
COMMON_COMPLEMENT[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
common_complement_categories.drop('numpy', axis=1, inplace=True)
common_complement_categories.head()
###Output
_____no_output_____
###Markdown
Downstream Library Usage Downstream library usage was measured by running test suites for each respective downstream library and recording NumPy API calls. For further details, see the API record tooling [repository](https://github.com/data-apis/python-api-record).Load a table containing API usage data...
###Code
API_RECORD = pandas.read_csv(os.path.join(data_dir, 'vendor', 'record.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(API_RECORD.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
API_RECORD.head(10)
###Output
_____no_output_____
###Markdown
In the above table, the first column corresponds to the NumPy API (arranged in descending order according to line count), and the second column corresponds to the name of the downstream library.
* * *
Analysis
Ranking (intersection)
From the API record data, we can rank each API in the API intersection according to its relative usage and based on the following algorithm:
- For each downstream library, compute the relative invocation frequency for each NumPy API based on the total number of NumPy API invocations for that library.
- For each downstream library, rank NumPy APIs by invocation frequency in descending order (i.e., an API with a greater invocation frequency should have a higher rank).
- For each NumPy API, use a [positional voting system](https://en.wikipedia.org/wiki/Borda_count) to tally library preferences. Here, we use a [Borda count](https://en.wikipedia.org/wiki/Borda_count) called the Dowdall system to assign points via a fractional weight scheme forming a harmonic progression. Note that this particular voting system favors APIs which have more first preferences. The assumption here is that lower relative ranks are more "noisy" and should contribute less weight to an API's ranking. Note that this can lead to scenarios where an API is used heavily by a single downstream library (and thus has a high ranking for that downstream library), but is rarely used (if at all) by other downstream libraries. In which case, that API may be ranked higher than other APIs which are used by all (or many) downstream libraries, but not heavily enough to garner enough points to rank higher. In practice, this situation does not appear common. APIs used heavily by one library are typically used heavily by several other libraries. In which case, the risk of assigning too much weight to a domain-specific use case should be minimal.
The ranking data is available as a precomputed table.
###Code
INTERSECTION_RANKS = pandas.read_csv(os.path.join(data_dir, 'intersection_ranks.csv'))
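# Illustrative sketch only (not the script that produced the precomputed table above):
# a Dowdall-style positional tally assigns harmonic weights 1, 1/2, 1/3, ... to each
# library's 1st, 2nd, 3rd, ... most-used API and sums the points per API. The toy input
# format (dict of library -> APIs ordered by usage) is an assumption for demonstration.
def dowdall_tally(ordered_apis_by_library):
    points = {}
    for ordered_apis in ordered_apis_by_library.values():
        for position, api in enumerate(ordered_apis, start=1):
            points[api] = points.get(api, 0.0) + 1.0 / position
    return sorted(points.items(), key=lambda item: item[1], reverse=True)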
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(INTERSECTION_RANKS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
INTERSECTION_RANKS.head(10)
INTERSECTION_RANKS.tail(10)
###Output
_____no_output_____
###Markdown
Summary
Based on the record data, the most commonly used NumPy APIs which are shared among **all** analyzed array libraries are those for (1) array creation (e.g., `zeros`, `ones`, etc.), (2) array manipulation (e.g., `reshape`), (3) element-wise evaluation of elementary mathematical functions (e.g., `sin`, `cos`, etc.), and (4) statistical reductions (e.g., `mean`, `var`, `std`, etc.).
Ranking (common APIs)
Similar to ranking the APIs found in the intersection, as done above, we can rank each API in the list of common APIs according to relative usage. The ranking data is available as a precomputed table.
###Code
COMMON_APIS_RANKS = pandas.read_csv(os.path.join(data_dir, 'common_apis_ranks.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMMON_APIS_RANKS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMMON_APIS_RANKS.head(10)
COMMON_APIS_RANKS.tail(10)
###Output
_____no_output_____
###Markdown
Summary
Based on the record data, the most commonly used NumPy APIs which are common among analyzed array libraries are those for (1) array creation (e.g., `zeros`, `ones`, etc.), (2) array manipulation (e.g., `reshape`), (3) element-wise evaluation of elementary mathematical functions (e.g., `sin`, `cos`, etc.), and (4) statistical reductions (e.g., `amax`, `amin`, `mean`, `var`, `std`, etc.).
Downstream API Usage Categories
Load a precomputed table containing the API usage categories for the top `100` NumPy array APIs for each downstream library...
###Code
LIB_TOP_100_CATEGORY_STATS = pandas.read_csv(os.path.join(data_dir, 'lib_top_100_category_stats.csv'), index_col='category')
###Output
_____no_output_____
###Markdown
View table contents...
###Code
LIB_TOP_100_CATEGORY_STATS
groups = LIB_TOP_100_CATEGORY_STATS.index.values
fig, ax = plt.subplots()
index = np.arange(len(groups))
bar_width = 0.15
rects1 = plt.bar(index-(1*bar_width), LIB_TOP_100_CATEGORY_STATS['dask.array'], bar_width, label='dask.array')
rects2 = plt.bar(index-(0*bar_width), LIB_TOP_100_CATEGORY_STATS['matplotlib'], bar_width, label='matplotlib')
rects3 = plt.bar(index+(1*bar_width), LIB_TOP_100_CATEGORY_STATS['pandas'], bar_width, label='pandas')
rects4 = plt.bar(index+(2*bar_width), LIB_TOP_100_CATEGORY_STATS['skimage'], bar_width, label='skimage')
rects5 = plt.bar(index+(3*bar_width), LIB_TOP_100_CATEGORY_STATS['xarray'], bar_width, label='xarray')
plt.title('Array API Categories')
plt.xlabel('Categories')
plt.ylabel('Count')
plt.xticks(index + bar_width, groups)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Load data Load dataframe from csv: merged spotify and rolling stone data
###Code
base_path = project_dir + '/data/processed/'
file_name = 'df_spot_rs_merge_basic.csv'
print(base_path + file_name)
df_spot_rs_merge = pd.read_csv(base_path + file_name)
df_spot_rs_merge = df_spot_rs_merge.drop(['Unnamed: 0'], axis=1)
###Output
/Users/erik/metis/spotipy_hits/data/processed/df_spot_rs_merge_basic.csv
###Markdown
Analysis: Distributions inspect df
###Code
df_spot_rs_merge
###Output
_____no_output_____
###Markdown
Distribution of year_rank_score
Look at the distribution of year_rank_score
###Code
x = df_spot_rs_merge.year_rank_score
plt.hist(x, bins=15)
plt.xlabel('year_rank_score')
plt.ylabel('frequency')
plt.title('album chart success histogram')
file_path = figures_folder + 'Album_chart_success_histogram_1.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
There's a wide spread, but most have a comparatively low year_rank_score.
Distribution of log(year_rank_score)
Try viewing the log of year_rank_score
###Code
def format_title(s):
return s.replace(' ', '_').lower()
x = np.log10(df_spot_rs_merge.year_rank_score)
plt.hist(x, bins=15)
plt.xlabel('log year_rank_score')
plt.ylabel('frequency')
title = 'selected albums chart success histogram'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
That's better. So: most have around 200 points, while about 150 have close to 10,000 points.
Add log_score to df
###Code
df_spot_rs_merge['log_score'] = np.log10(df_spot_rs_merge['year_rank_score'])
###Output
_____no_output_____
###Markdown
And rearrange columns
###Code
df_spot_rs_merge = df_spot_rs_merge[['year_rank_score', 'log_score', 'year_album_units', 'avg_vol', 'total_tracks',
'duration', 'release_date', 'record_label', 'title_cleaned',
'caption_artist', 'album_id']]
###Output
_____no_output_____
###Markdown
Inspect df
###Code
df_spot_rs_merge
###Output
_____no_output_____
###Markdown
Distribution of average album volume look at distribution of average album volume.
###Code
x = df_spot_rs_merge.avg_vol
plt.hist(x, bins=15)
plt.xlabel('average album volume')
plt.ylabel('frequency')
plt.title('')
title = 'Histogram of Average Volume'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
Distribution of total number of tracks Look at distribution of number of tracks
###Code
x = df_spot_rs_merge.total_tracks
plt.hist(x, bins=15)
plt.xlabel('total tracks')
plt.ylabel('frequency')
title = 'histogram_of_total_tracks'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
For now, drop albums with total_tracks above 30, so we can see most of them in more detail
###Code
df = df_spot_rs_merge.loc[df_spot_rs_merge.total_tracks <= 30]
df.total_tracks
plt.hist(df.total_tracks, bins = 30)
plt.ylabel('frequency')
plt.xlabel('total tracks in album')
title = 'Histogram of Total Tracks for Selected Albums'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
Many albums have 14 tracks, and most seem to be between 10 and 20.
Distribution of album duration
Look at the distribution of album duration
###Code
x = df_spot_rs_merge.duration/1000 / 60
plt.hist(x, bins=15)
plt.xlabel('album duration (minutes)')
plt.ylabel('frequency')
title = 'Histogram of Album Duration'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
Most albums are around 50 minutes. Let's take a closer look at those between 20 and 80 minutes
###Code
duration = df_spot_rs_merge.duration/1000 / 60
x = duration[(duration >= 20) & (duration <= 80)]
plt.hist(x, bins=15)
plt.xlabel('album duration (minutes)')
plt.ylabel('frequency')
title = 'Histogram of Album Duration for Selected Albums'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
So most are between 35 and 60 minutes.
Distribution of release date
Let's look at the distribution of release dates
###Code
x = pd.to_datetime(df_spot_rs_merge['release_date'], unit='s')
plt.hist(x, bins=15)
plt.xlabel('album release date')
plt.ylabel('frequency')
title = 'Histogram of Album Release Dates'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
/Users/erik/anaconda3/envs/metis/lib/python3.7/site-packages/pandas/plotting/_matplotlib/converter.py:103: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
That's nearly all between 2015 and 2020. Let's look at the decade to 2020
###Code
ser = pd.to_datetime(df_spot_rs_merge['release_date'], unit='s')
x = ser[ser > pd.to_datetime(2010, format='%Y')]
plt.hist(x, bins=10)
plt.xlabel('album release date')
plt.ylabel('frequency')
title = 'Histogram of Release Date for Selected Albums'
plt.title(title)
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
It's still nearly all in 2019. Let's look just at year 2019 onwards
###Code
ser = pd.to_datetime(df_spot_rs_merge['release_date'], unit='s')
x = ser[(ser > pd.to_datetime(2019, format='%Y'))] # & (ser < pd.to_datetime(2020, format='%Y'))]
plt.hist(x, bins=11)
plt.xlabel('album release date')
plt.ylabel('frequency')
title = 'Histogram of Release Date for Selected Albums'
plt.title(title)
title += '_2'
file_path = figures_folder + format_title(title) + '.svg'
plt.savefig(file_path, format='svg')
pass
###Output
_____no_output_____
###Markdown
It appears that albums released later in the year have more success. Analysis: Pair Plot Let's draw a pair plot for selected data in df_spot_rs_merge
###Code
df_spot_rs_merge.columns
import seaborn as sns
df = df_spot_rs_merge[['avg_vol', 'total_tracks','duration', 'release_date', 'log_score']]
# sns.pairplot(df)
sns_plot = sns.pairplot(df)
sns_plot.savefig(figures_folder + 'pair_plot_1.svg')
###Output
_____no_output_____
###Markdown
Try it with only the tracks released from 2019 onwards
###Code
df = df_spot_rs_merge[['avg_vol', 'total_tracks','duration', 'release_date', 'log_score']].copy()
df['release_date_datetime'] = (pd.to_datetime(df['release_date'], unit='s'))
df = df[df['release_date_datetime'] > pd.to_datetime(2019, format='%Y')]
sns.pairplot(df)
# df.columns
###Output
_____no_output_____
###Markdown
Analysis: alpha for density of scatter plot
Look at the scatter plot for total tracks in album and log score. Set alpha rather low, to see if we can get some insight from the density of points
###Code
plt.scatter(df_spot_rs_merge.total_tracks, np.log(df_spot_rs_merge.year_rank_score), alpha = .2)
plt.xlabel('total tracks in album')
plt.ylabel('log year rank score')
pass
###Output
_____no_output_____
###Markdown
Doesn't look like much here, but let's focus on ones with fewer than 30 tracks
###Code
df = df_spot_rs_merge.loc[df_spot_rs_merge.total_tracks <= 30]
plt.scatter(df.total_tracks, np.log(df.year_rank_score), alpha = .2)
plt.xlabel('total tracks in album')
plt.ylabel('log year rank score')
pass
###Output
_____no_output_____
###Markdown
Looks neat, but doesn't look linear. Looks like a blob. Analysis: value counts of record label Let's look at the record label column, and see how many albums ap9pear for each record label
###Code
df_spot_rs_merge.record_label.value_counts()
###Output
_____no_output_____
###Markdown
oops, these data have already had record label converted to 'other' for everything other than the major 6. I'll pull from a pickle to get the data with record labels intact
###Code
file_path = project_dir + '/data/interim/' + 'merge_of_rs_and_spotify_with_all_record_labels'
df_record_labels = load_pickle(file_path)
df_record_labels.record_label.value_counts(dropna=False).head(10)
df_record_labels.record_label.value_counts(dropna=False).tail(10)
###Output
_____no_output_____
###Markdown
A relatively small number of record labels seem to have most of the albums. Let's look at this
###Code
df_record_labels.record_label.value_counts(dropna=False).value_counts().sort_index()
###Output
_____no_output_____
###Markdown
This notebook shows my solution to answering the question: Should the King of Norway allow COVID vaccinated Americans into Norway? My null hypothesis for this analysis is: No relationship exists between vaccines and new cases. No relationship exists between vaccines and hospitalizations. load packages
###Code
import pandas as pd
from scipy.stats import pearsonr
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
load datasets into dataframes
###Code
data_loc = '../data/KingHaraldsCovidQuest'
vaccines_df = pd.read_csv(f'{data_loc}/Vaccination.csv')
day_cases_df = pd.read_csv(f'{data_loc}/ReportedCovidCasesByDay.csv')
hospitals_df = pd.read_csv(f'{data_loc}/Hospitalizations.csv')
print(f'vaccines_df.shape: {vaccines_df.shape}')
print(f'day_cases_df.shape: {day_cases_df.shape}')
print(f'hospitals_df.shape: {hospitals_df.shape}')
###Output
vaccines_df.shape: (212, 5)
day_cases_df.shape: (497, 3)
hospitals_df.shape: (497, 3)
###Markdown
view sample of each dataset vaccines dataset
###Code
vaccines_df.head()
###Output
_____no_output_____
###Markdown
cases by day dataset
###Code
day_cases_df.head()
###Output
_____no_output_____
###Markdown
hospitalizations dataset
###Code
hospitals_df.head()
###Output
_____no_output_____
###Markdown
clean up columns for the datasets
###Code
# rename Dato to date (Norwegian spelling conversion)
vaccines_df.rename({'Dato':'date'}, axis=1, inplace=True)
day_cases_df.rename({'Dato':'date'}, axis=1, inplace=True)
hospitals_df.rename({'Dato':'date'}, axis=1, inplace=True)
# lowercase all columns for easier coding
vaccines_df.columns = [col.lower() for col in vaccines_df.columns]
day_cases_df.columns = [col.lower() for col in day_cases_df.columns]
hospitals_df.columns = [col.lower() for col in hospitals_df.columns]
# convert date field to datetime
vaccines_df['date'] = pd.to_datetime(vaccines_df.date)
# these datasets had date formatted with the day value first
day_cases_df['date'] = pd.to_datetime(day_cases_df.date, dayfirst=True)
hospitals_df['date'] = pd.to_datetime(hospitals_df.date, dayfirst=True)
###Output
_____no_output_____
###Markdown
view some simple graphs of the data: raw values over time and cumulative values over time
###Code
def plot_date_based_data(
df_in:pd.DataFrame,
new_col_name:str,
cum_col_name:str,
str_name:str='NO STR_NAME PROVIDED'):
plt.figure(figsize=(8,6))
plt.title(f'new {str_name} over time')
df_in.set_index('date')[new_col_name].plot()
plt.grid()
    plt.ylabel(f'new {str_name}')
plt.show()
plt.figure(figsize=(8,6))
plt.title(f'cumulative {str_name} over time')
df_in.set_index('date')[cum_col_name].plot()
plt.grid()
    plt.ylabel(f'cumulative {str_name}')
plt.show()
###Output
_____no_output_____
###Markdown
vaccines plots
###Code
plot_date_based_data(df_in=vaccines_df,
new_col_name='nbrpersonsdose2',
cum_col_name='cumnbrdose2',
str_name='vaccines')
###Output
_____no_output_____
###Markdown
cases by day plots
###Code
plot_date_based_data(df_in=day_cases_df,
new_col_name='new',
cum_col_name='cumulative',
str_name='day cases')
###Output
_____no_output_____
###Markdown
hospitalization plots
###Code
plot_date_based_data(df_in=hospitals_df,
new_col_name='new',
cum_col_name='cumulative',
str_name='hospitalizations')
###Output
_____no_output_____
###Markdown
aggregate by 7-day rolling average to remove weekly cycles in the data
###Code
vaccines_rolling_df = vaccines_df.set_index('date').rolling('7D').mean().reset_index()
day_cases_rolling_df = day_cases_df.set_index('date').rolling('7D').mean().reset_index()
hospitals_rolling_df = hospitals_df.set_index('date').rolling('7D').mean().reset_index()
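# note: rolling('7D') on the date index averages over a trailing 7-calendar-day window, smoothing out weekly reporting cycles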
###Output
_____no_output_____
###Markdown
view visualizations of each 7-day average dataset
vaccines dataset
###Code
plot_date_based_data(df_in=vaccines_rolling_df,
new_col_name='nbrpersonsdose2',
cum_col_name='cumnbrdose2',
str_name='vaccines 7-day average')
###Output
_____no_output_____
###Markdown
cases by day dataset
###Code
plot_date_based_data(df_in=day_cases_rolling_df,
new_col_name='new',
cum_col_name='cumulative',
str_name='cases by day 7-day average')
###Output
_____no_output_____
###Markdown
hospitalizations dataset
###Code
plot_date_based_data(df_in=hospitals_rolling_df,
new_col_name='new',
cum_col_name='cumulative',
str_name='hospitalizations 7-day average')
###Output
_____no_output_____
###Markdown
shift the vaccine data by 2 weeks to account for the time after the second dose when it becomes fully effective - call this cum_innoculation
###Code
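# shift the cumulative dose-2 series down by 14 rows (assumption: one row per day, so this approximates a 14-day lag)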
vaccines_rolling_df['cum_innoculation'] = vaccines_df.cumnbrdose2.shift(14)
###Output
_____no_output_____
###Markdown
join vaccine data with cases and hospitalizations by date
###Code
joined_df = vaccines_rolling_df.set_index('date').join(
day_cases_rolling_df.rename({
'new':'new_cases',
'cumulative':'cum_cases'}, axis=1).set_index('date'))
joined_df = joined_df.join(
hospitals_rolling_df.rename({
'new':'new_hospitalizations',
'cumulative':'cum_hospitalizations'}, axis=1).set_index('date'))
joined_df.head()
###Output
_____no_output_____
###Markdown
plot rolling 7-day vaccine and new cases data
###Code
fig, ax1 = plt.subplots(figsize=(12,10))
ax2 = ax1.twinx()
plt.title(f'7-day rolling average data over time')
joined_df.cum_innoculation.plot(marker='o',
ax=ax1,
c='green',
label='cumulative innoculation')
joined_df.new_cases.plot(marker='o',
ax=ax2,
c='orange',
label='new cases')
plt.xlabel('date')
plt.grid()
lines_1, labels_1 = ax1.get_legend_handles_labels()
lines_2, labels_2 = ax2.get_legend_handles_labels()
lines = lines_1 + lines_2
labels = labels_1 + labels_2
ax1.legend(lines, labels, loc=0)
ax1.set_ylabel('cumulative innoculations')
ax2.set_ylabel('new cases')
plt.show()
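# The same twin-axis pattern repeats in the cells below; a reusable helper along these
# lines (function and argument names are illustrative, not part of the original analysis)
# could reduce the duplication:
def plot_dual_axis(df_in, left_col, right_col, left_label, right_label, title):
    fig, ax1 = plt.subplots(figsize=(12, 10))
    ax2 = ax1.twinx()
    plt.title(title)
    df_in[left_col].plot(marker='o', ax=ax1, c='green', label=left_label)
    df_in[right_col].plot(marker='o', ax=ax2, c='orange', label=right_label)
    plt.grid()
    lines_1, labels_1 = ax1.get_legend_handles_labels()
    lines_2, labels_2 = ax2.get_legend_handles_labels()
    ax1.legend(lines_1 + lines_2, labels_1 + labels_2, loc=0)
    ax1.set_ylabel(left_label)
    ax2.set_ylabel(right_label)
    plt.show()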
###Output
_____no_output_____
###Markdown
plot rolling 7-day vaccine and new hospitalizations data
###Code
fig, ax1 = plt.subplots(figsize=(12,10))
ax2 = ax1.twinx()
plt.title(f'7-day rolling average data over time')
joined_df.cum_innoculation.plot(marker='o',
ax=ax1,
c='green',
label='cumulative innoculation')
joined_df.new_hospitalizations.plot(marker='o',
ax=ax2,
c='orange',
label='new hospitalizations')
plt.xlabel('date')
plt.grid()
lines_1, labels_1 = ax1.get_legend_handles_labels()
lines_2, labels_2 = ax2.get_legend_handles_labels()
lines = lines_1 + lines_2
labels = labels_1 + labels_2
ax1.legend(lines, labels, loc=0)
ax1.set_ylabel('cumulative innoculations')
ax2.set_ylabel('new hospitalizations')
plt.show()
###Output
_____no_output_____
###Markdown
limit the dataset to dates from the first innoculated person onward
###Code
first_innoc_date = joined_df[joined_df.cum_innoculation > 0].index.min()
analysis_df = joined_df[joined_df.index >= first_innoc_date]
print(f'analysis_df.shape: {analysis_df.shape}')
###Output
analysis_df.shape: (152, 9)
###Markdown
plot rolling 7-day vaccine and new cases data
###Code
fig, ax1 = plt.subplots(figsize=(12,10))
ax2 = ax1.twinx()
plt.title(f'7-day rolling average data over time - at least 1 innoculated person')
analysis_df.cum_innoculation.plot(marker='o',
ax=ax1,
c='green',
label='cumulative innoculation')
analysis_df.new_cases.plot(marker='o',
ax=ax2,
c='orange',
label='new cases')
plt.xlabel('date')
plt.grid()
lines_1, labels_1 = ax1.get_legend_handles_labels()
lines_2, labels_2 = ax2.get_legend_handles_labels()
lines = lines_1 + lines_2
labels = labels_1 + labels_2
ax1.legend(lines, labels, loc=0)
ax1.set_ylabel('cumulative innoculations')
ax2.set_ylabel('new cases')
plt.show()
###Output
_____no_output_____
###Markdown
plot rolling 7-day vaccine and new hospitalizations data
###Code
fig, ax1 = plt.subplots(figsize=(12,10))
ax2 = ax1.twinx()
plt.title(f'7-day rolling average data over time - at least 1 innoculated person')
analysis_df.cum_innoculation.plot(marker='o',
ax=ax1,
c='green',
label='cumulative innoculation')
analysis_df.new_hospitalizations.plot(marker='o',
ax=ax2,
c='orange',
label='new hospitalizations')
plt.xlabel('date')
plt.grid()
lines_1, labels_1 = ax1.get_legend_handles_labels()
lines_2, labels_2 = ax2.get_legend_handles_labels()
lines = lines_1 + lines_2
labels = labels_1 + labels_2
ax1.legend(lines, labels, loc=0)
ax1.set_ylabel('cumulative innoculations')
ax2.set_ylabel('new hospitalizations')
plt.show()
###Output
_____no_output_____
###Markdown
calculate the correlation between cumulative innoculation, new cases, and new hospitalizations
###Code
corr_df = analysis_df[['cum_innoculation','new_hospitalizations','new_cases']].corr()
corr_df
###Output
_____no_output_____
###Markdown
view the Pearson correlations and their p-values to assess the strength of the relationship between cumulative innoculation and new cases/hospitalizations
###Code
hospitals_pearson = pearsonr(analysis_df.cum_innoculation, analysis_df.new_hospitalizations)
cases_pearson = pearsonr(analysis_df.cum_innoculation, analysis_df.new_cases)
print('cumulative innoculation and new hospitalization pearson correlation details:')
print(f'\tcorrelation: {np.round(hospitals_pearson[0], 5)}')
print(f'\tcorrelation p-value: {np.round(hospitals_pearson[1], 12)}')
print()
print('cumulative innoculation and new cases pearson correlation details:')
print(f'\tcorrelation: {np.round(cases_pearson[0], 5)}')
print(f'\tcorrelation p-value: {np.round(cases_pearson[1], 12)}')
###Output
cumulative innoculation and new hospitalization pearson correlation details:
correlation: -0.41787
correlation p-value: 8.4901e-08
cumulative innoculation and new cases pearson correlation details:
correlation: -0.46584
correlation p-value: 1.47e-09
###Markdown
Analysis of trained models and training logs
This notebook shows how to load, process, and analyze logs that are automatically generated during training. It also demonstrates how to make plots to examine performance of a single model or compare performance of multiple models.
Prerequisites:
- To run this example live, you must train at least two models to generate the trained log directories and set the paths below.
Each log directory contains the following:
- args.txt: the arguments fed into regression.py to train the model
- split: the train-tune-test split used to train the model
- final_evaluation.txt: final evaluation metrics (MSE, Pearson's r, Spearman's r, and r^2) on each of the split sets
- predictions: the model's score predictions for every variant in each of the split sets
- the trained model itself: see the inference notebook for more information on how to use this
This codebase provides several convenient functions for loading this log data.
###Code
# reload modules before executing code in order to make development and debugging easier
%load_ext autoreload
%autoreload 2
# this jupyter notebook is running inside of the "notebooks" directory
# for relative paths to work properly, we need to set the current working directory to the root of the project
# for imports to work properly, we need to add the code folder to the system path
import os
from os.path import abspath, join, isdir, basename
import sys
if not isdir("notebooks"):
# if there's a "notebooks" directory in the cwd, we've already set the cwd so no need to do it again
os.chdir("..")
module_path = abspath("code")
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
import pandas as pd
import sklearn.metrics as skm
from scipy.stats import pearsonr, spearmanr
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import analysis as an
###Output
_____no_output_____
###Markdown
Define the log directories
To run this script live, you must train at least two models. As an example, we are using the avGFP linear regression and fully connected models, trained using the arguments in `pub/regression_args/avgfp_main_lr.txt` and `pub/regression_args/avgfp_main_fc.txt`. You can use these or train your own models. For comparing performance of many trained models, you must write your own function to collect the log directory names. Using them with this example is then relatively straightforward.
###Code
log_dir_lr = "output/training_logs/log_local_local_2020-09-22_22-02-33_avgfp_lr_lr0.0001_bs128_DKPQxV5s"
log_dir_fc = "output/training_logs/log_local_local_2020-09-22_22-02-36_avgfp_fc-3xh100_lr0.0001_bs32_RbLfpQvW"
log_dirs = [log_dir_lr, log_dir_fc]
###Output
_____no_output_____
###Markdown
Loading score predictions (single model)
The utility function uses the dataset tsv as a base and adds columns for the set name (train, tune, test, etc) and the predicted score.
###Code
ds_lr = an.load_predictions(log_dir_lr)
ds_lr.head()
###Output
_____no_output_____
###Markdown
Loading evaluation metrics (single model)
###Code
metrics_lr = an.load_metrics(log_dir_lr)
metrics_lr
###Output
_____no_output_____
###Markdown
Sometimes it is convenient to have access to other aspects of the model, such as the learning rate and batch size. You can load the regression arguments as a dictionary using `an.load_args()`. Or, you can use `an.load_metrics_and_args` to load both the metrics and arguments in a single dataframe. The combined dataframe is set up so that each row can be a different model, which helps with comparisons between models.
###Code
met_args_lr = an.load_metrics_and_args(log_dir_lr)
met_args_lr
###Output
_____no_output_____
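###Markdown
If you only need the training arguments themselves, `an.load_args()` (mentioned above) returns them as a dictionary. A minimal sketch, assuming it takes the log directory like the other loaders:
###Code
# load just the regression arguments for the linear regression run
args_dict_lr = an.load_args(log_dir_lr)
args_dict_lr
###Output
_____no_output_____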
###Markdown
Evaluating a single model
The dataframe contains variants from all sets (train, tune, test, etc), so if you are interested in a single set, you must select just those variants.
###Code
# before creating the testset-only dataframe, add a column with mean absolute error, used below
ds_lr["abs_err"] = np.abs(ds_lr["score"] - ds_lr["prediction"])
# create a subset view of the dataframe containing only test set variants
ds_lr_stest = ds_lr.loc[ds_lr.set_name == "stest"]
###Output
_____no_output_____
###Markdown
Scatterplot of predicted vs. true score
###Code
fig, ax = plt.subplots(1)
sns.scatterplot(x="score", y="prediction", data=ds_lr_stest, ax=ax)
# draw a line of equivalence
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
lims = [max(x0, y0), min(x1, y1)]
ax.plot(lims, lims, ':k')
ax.set(ylabel="Predicted score", xlabel="True score", title="Predicted score vs. score (Linear regression)")
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
Mean absolute error by number of mutations
###Code
# plot the mean absolute error vs. number of mutations
# can do this more easily with pandas groupby, apply
grouped_mean = ds_lr_stest.groupby("num_mutations", as_index=False).mean()
fig, ax = plt.subplots(1)
sns.stripplot(x="num_mutations", y="abs_err", data=grouped_mean[grouped_mean.num_mutations < 13], ax=ax)
ax.set(ylabel="Mean absolute error", xlabel="Number of mutations", title="Mean absolute error by number of mutations")
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
Additional evaluation metrics
The regression training script automatically computes a few metrics, but you can also use the true and predicted scores to compute your own. Here, let's recompute Pearson's correlation coefficient and compare it to the same metric computed during training.
###Code
my_pearsonr = pearsonr(ds_lr_stest["score"], ds_lr_stest["prediction"])[0]
my_pearsonr
# the pearsonr from the metrics dataframe
met_args_lr.loc[0, "stest_pearsonr"]
###Output
_____no_output_____
###Markdown
There's a small amount of floating point imprecision, but otherwise the values are identical.
###Code
np.isclose(my_pearsonr, met_args_lr.loc[0, "stest_pearsonr"])
###Output
_____no_output_____
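###Markdown
The same true and predicted score columns can be fed into any other metric. For example (a small sketch using the scikit-learn and scipy imports above):
###Code
# mean squared error and Spearman correlation on the test-set variants
my_mse = skm.mean_squared_error(ds_lr_stest["score"], ds_lr_stest["prediction"])
my_spearmanr = spearmanr(ds_lr_stest["score"], ds_lr_stest["prediction"])[0]
my_mse, my_spearmanr
###Output
_____no_output_____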
###Markdown
Loading score predictions and metrics (multiple models)
The functions used above also accept lists of log directories. For loading predictions, you can optionally specify column names, otherwise the column names will be automatically labeled by number.
###Code
ds = an.load_predictions(log_dirs, col_names=["lr", "fc"])
ds.head()
###Output
_____no_output_____
###Markdown
Loading metrics is also straightforward. Note that `an.load_metrics()` does not support multiple log dirs, only `an.load_metrics_and_args()`.
###Code
metrics = an.load_metrics_and_args(log_dirs)
metrics
###Output
_____no_output_____
###Markdown
Comparing multiple models Make multiple scatterplots for different models. Note again, we must subset the dataframe to select our desired train/tune/test set.
###Code
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
for i, pred_col in enumerate(["lr", "fc"]):
ax = sns.scatterplot(x="score", y=pred_col, data=ds[ds.set_name == "stest"], ax=axes[i])
# draw a line of equivalence
x0, x1 = ax.get_xlim()
y0, y1 = ax.get_ylim()
lims = [max(x0, y0), min(x1, y1)]
ax.plot(lims, lims, ':k')
ax.set(ylabel="Predicted score", xlabel="True score", title="Predicted score vs. score ({})".format(pred_col))
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
Compare performance metrics between the trained models.
###Code
metrics["parsed_net_file"] = metrics["net_file"].apply(lambda nf: basename(nf).split(".")[0])
fig, ax = plt.subplots(1)
ax = sns.stripplot(x="parsed_net_file", y="stest_pearsonr", data=metrics)
ax.set(xlabel="Network", ylabel="Pearson's r", title="Performance (test set)")
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
Run one of the cells below to load the dataset you want to run the test for, then move on to the next section
###Code
best_mit = '../logs/graphembed/mitstates/base/mit.yml'
load_args(best_mit,args)
args.graph_init = '../'+args.graph_init
args.load = best_mit[:-7] + 'ckpt_best_auc.t7'
best_ut = '../logs/graphembed/utzappos/base/utzappos.yml'
load_args(best_ut,args)
args.graph_init = '../'+args.graph_init
args.load = best_ut[:-12] + 'ckpt_best_auc.t7'
###Output
_____no_output_____
###Markdown
Loading arguments and dataset
###Code
args.data_dir = '../'+args.data_dir
args.test_set = 'test'
testset = dset.CompositionDataset(
root= args.data_dir,
phase=args.test_set,
split=args.splitname,
model =args.image_extractor,
subset=args.subset,
return_images = True,
update_features = args.update_features,
clean_only = args.clean_only
)
testloader = torch.utils.data.DataLoader(
testset,
batch_size=args.test_batch_size,
shuffle=True,
num_workers=args.workers)
print('Objs ', len(testset.objs), ' Attrs ', len(testset.attrs))
image_extractor, model, optimizer = configure_model(args, testset)
evaluator = Evaluator(testset, model)
if args.load is not None:
checkpoint = torch.load(args.load)
if image_extractor:
try:
image_extractor.load_state_dict(checkpoint['image_extractor'])
image_extractor.eval()
except:
print('No Image extractor in checkpoint')
model.load_state_dict(checkpoint['net'])
model.eval()
print('Loaded model from ', args.load)
print('Best AUC: ', checkpoint['AUC'])
def print_results(scores, exp):
print(exp)
result = scores[exp]
attr = [evaluator.dset.attrs[result[0][idx,a]] for a in range(topk)]
obj = [evaluator.dset.objs[result[1][idx,a]] for a in range(topk)]
attr_gt, obj_gt = evaluator.dset.attrs[data[1][idx]], evaluator.dset.objs[data[2][idx]]
print(f'Ground truth: {attr_gt} {obj_gt}')
prediction = ''
for a,o in zip(attr, obj):
prediction += a + ' ' + o + '| '
print('Predictions: ', prediction)
print('__'*50)
###Output
_____no_output_____
###Markdown
An example of predictions
closed -> biased for unseen classes
unbiased -> biased against unseen classes
###Code
data = next(iter(testloader))
images = data[-1]
data = [d.to(device) for d in data[:-1]]
if image_extractor:
data[0] = image_extractor(data[0])
_, predictions = model(data)
data = [d.to('cpu') for d in data]
topk = 5
results = evaluator.score_model(predictions, data[2], bias = 1000, topk=topk)
for idx in range(len(images)):
seen = bool(evaluator.seen_mask[data[3][idx]])
if seen:
continue
image = Image.open(ospj( args.data_dir,'images', images[idx]))
plt.figure(dpi=300)
plt.imshow(image)
plt.axis('off')
plt.show()
print(f'GT pair seen: {seen}')
print_results(results, 'closed')
print_results(results, 'unbiased_closed')
###Output
_____no_output_____
###Markdown
Run Evaluation
###Code
model.eval()
args.bias = 1e3
accuracies, all_attr_gt, all_obj_gt, all_pair_gt, all_pred = [], [], [], [], []
for idx, data in tqdm(enumerate(testloader), total=len(testloader), desc = 'Testing'):
data.pop()
data = [d.to(device) for d in data]
if image_extractor:
data[0] = image_extractor(data[0])
_, predictions = model(data) # todo: Unify outputs across models
attr_truth, obj_truth, pair_truth = data[1], data[2], data[3]
all_pred.append(predictions)
all_attr_gt.append(attr_truth)
all_obj_gt.append(obj_truth)
all_pair_gt.append(pair_truth)
all_attr_gt, all_obj_gt, all_pair_gt = torch.cat(all_attr_gt), torch.cat(all_obj_gt), torch.cat(all_pair_gt)
all_pred_dict = {}
# Gather values as dict of (attr, obj) as key and list of predictions as values
for k in all_pred[0].keys():
all_pred_dict[k] = torch.cat(
[all_pred[i][k] for i in range(len(all_pred))])
# Calculate best unseen accuracy
attr_truth, obj_truth = all_attr_gt.to('cpu'), all_obj_gt.to('cpu')
pairs = list(
zip(list(attr_truth.numpy()), list(obj_truth.numpy())))
topk = 1 ### For topk results
our_results = evaluator.score_model(all_pred_dict, all_obj_gt, bias = 1e3, topk = topk)
stats = evaluator.evaluate_predictions(our_results, all_attr_gt, all_obj_gt, all_pair_gt, all_pred_dict, topk = topk)
for k, v in stats.items():
print(k, v)
###Output
_____no_output_____
###Markdown
Get the data files and save them in the data directory:
! curl -o ../data/blah.bin https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/blah.bin
Compute rating function
The NFL passer rating formula is scored on a scale from 0 to 158.3 and is based on completion percentage, yards per attempt, touchdowns per attempt, and interceptions per attempt. Input your values below to calculate a rating:
###Code
# Reference JavaScript implementation of the passer rating formula
# (kept as documentation, commented out so the cell runs as Python):
# else {
#     var a = ((pass_cmp/pass_att) - 0.3) * 5;
#     a = (a>2.375)?2.375:a;
#     a = (a<0)?0:a;
#     var b = ((pass_yds/pass_att) - 3) * .25;
#     b = (b>2.375)?2.375:b;
#     b = (b<0)?0:b;
#     var c = (pass_td/pass_att) * 20;
#     c = (c>2.375)?2.375:c;
#     c = (c<0)?0:c;
#     var d = 2.375 - ((pass_int/pass_att) * 25);
#     d = (d>2.375)?2.375:d;
#     d = (d<0)?0:d;
#     var rating = (a+b+c+d)/6 * 100;
#     output = Math.round(rating*100, 2)/100;
# }
# document.getElementById('rating').innerHTML = output;
# return false;
# }
def comp_passer_rating(pass_att, pass_cmp, pass_int, pass_tds, pass_yds):
'''
Function to compute passer rating.
    INPUTS: pass_att, pass_cmp, pass_int, pass_tds, pass_yds (attempts, completions, interceptions, touchdowns, yards)
OUTPUT: passer rating = real number 0 to 158.3
'''
# validate boundarires
if pass_cmp > pass_att:
print('Passes complete {} exceed passing attempts {}'.format(pass_cmp, pass_att))
return 0
if pass_tds > pass_att:
print('Passing TDs {} exceed passing attempts {}'.format(pass_tds, pass_att))
return 0
if pass_tds > pass_cmp:
print('Passing TDs {} exceed pass completions {}'.format(pass_tds, pass_cmp))
return 0
if pass_int > pass_att:
print('Interceptions {} exceed passing attempts {}'.format(pass_int, pass_att))
return 0
    if pass_att == 0:
        print('Passing attempts must be greater than zero')
        return 0
    # passer rating formula, ported from the reference JavaScript above;
    # each component is clamped to the range [0, 2.375]
    a = min(max(((pass_cmp / pass_att) - 0.3) * 5, 0), 2.375)
    b = min(max(((pass_yds / pass_att) - 3) * 0.25, 0), 2.375)
    c = min(max((pass_tds / pass_att) * 20, 0), 2.375)
    d = min(max(2.375 - ((pass_int / pass_att) * 25), 0), 2.375)
    rating = (a + b + c + d) / 6 * 100
    return round(rating, 2)
###Output
_____no_output_____
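###Markdown
A quick sanity check of the rating function (the stat line below is an arbitrary example; the expected value follows from the formula above):
###Code
# 20-of-30 for 250 yards, 2 TD, 1 INT should come out to roughly 100.7
comp_passer_rating(pass_att=30, pass_cmp=20, pass_int=1, pass_tds=2, pass_yds=250)
###Output
_____no_output_____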
###Markdown
Preprocessing Loading and Sorting Images
###Code
! pwd
# # Renaming files, as the data has the same filenames for each subject. Prepending the folder name to the filename, as the folder name is the subject id.
# # ONLY RUN ONCE!!!!!!!
# for root, dirs, files in os.walk('../data/TD_RGB_E'):
# if not files:
# continue
# prefix = os.path.basename(root)
# for f in files:
# os.rename(os.path.join(root, f), os.path.join(root, "{}_{}".format(prefix, f)))
# ONLY RUN ONCE
# sort_images('../data/base')
file_name = '' # image name
file_path = '' # path to file
# Process Images Directory for Feeding into the Model
image_list = build_image_list('../data/TD_RGB_E')
print(len(image_list), image_list[0])
# # copy files to a singular folder
# dest_base = '../data/base'
# for ig in image_list:
# shutil.copy(ig, dest_base)
# from PIL import Image
# im = Image.open('temp.png')
# data = np.array(im)
# flattened = data.flatten()
# print data.shape
# print flattened.shape
# (612, 812, 4)
# (1987776,)
# Alternately, instead of calling data.flatten(), you could call data.reshape(-1). -1 is used as a placeholder for "figure out what the given dimension should be".
# flattened = data.T.flatten()
y_train.head()
###Output
_____no_output_____
###Markdown
Crop Images with OpenCV
###Code
# for image in images:  # stray, incomplete loop header; detect_face is applied per image in the Main cell below
def detect_face(img_rgb, classifier):
'''
1st classifier: HAAR classifier for face detection
read image from img_orig column
process face
if faces detected, update face column from False to True
draw box around face
write modified image to img_proc column
call detect_eyes
if no face detected, exit loop
'''
classifier_full = CascadeClassifier(classifier)
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
bounding_boxes = classifier_full.detectMultiScale(img_gray, scaleFactor=1.3, minNeighbors=7, minSize=(224, 224))
box_list = []
for box in bounding_boxes:
box_list.append(box)
if len(box_list) < 1:
return 0
else:
# update face to True
draw_boxes(img_gray, box_list)
draw_boxes(img_rgb, box_list)
        img_rgb = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2RGB)
        # crop the region of interest (first detected face) from the grayscale image
        x, y, w, h = box_list[0]
        roi = img_gray[y:y + h, x:x + w]
        # write img to img_proc and/or save to os
        return img_gray, img_rgb, roi
X_test.head()
###Output
_____no_output_____
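###Markdown
`detect_face` relies on a `draw_boxes` helper that is not defined in this notebook. A minimal sketch of what such a helper might look like (an assumption, not the project's actual implementation; it reuses the cv2 import from above):
###Code
def draw_boxes(img, box_list):
    '''Sketch of a box-drawing helper: draw a rectangle on img for each (x, y, w, h) bounding box.'''
    for (x, y, w, h) in box_list:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
###Output
_____no_output_____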
###Markdown
Main
###Code
# set classifier
classifier = '../src/haarcascade_frontalface_default.xml'
# read the image
img_rgb = X_train['img_orig']
# detect face - with HAAR cascade
img_gray, img_rgb, roi = detect_face(img_rgb, classifier)
# scale down
# return roi
# call model for image - call fisherfaces
# write processed image to dataframe
# update flag in dataframe
# choose next image
# once complete, push changes to database
###Output
_____no_output_____
###Markdown
Emotion Detection with VGG16 Pretrained CNN Convert the data into encoded labels and convert each image into a 224x224x3 numpy array
###Code
emotions = ['commecicomm','happy','ugh','shocked','sunglasses']
!pwd
train_data_dir = '../data/train'
img_height = 224
img_width = 224
batch_size = 20
train_datagen = ImageDataGenerator(rescale=1./255,
validation_split=0.2) # set validation split
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical',
shuffle=False,
subset='training') # set as training data
validation_generator = train_datagen.flow_from_directory(
train_data_dir, # same directory as training data
target_size=(img_height, img_width),
batch_size=batch_size,
class_mode='categorical',
shuffle=False,
subset='validation') # set as validation data
train_generator.classes
#train_generator.classes = keras.utils.to_categorical(train_generator.classes, num_classes=5, dtype='int32')
# model.fit_generator(
# train_generator,
# steps_per_epoch = train_generator.samples // batch_size,
# validation_data = validation_generator,
# validation_steps = validation_generator.samples // batch_size,
# epochs = 10)
X, y = next(train_generator)
print('Input features shape', X.shape)
print('Actual labels shape', y.shape)
y
base_model = VGG16(include_top=False, weights='imagenet', input_shape=(224,224,3))
base_model.summary()
base_model.trainable = False
model = Sequential([
base_model,
Flatten(),
Dense(units=4096,activation="relu"),
Dense(units=4096,activation="relu"),
Dense(units=5, activation="softmax")
]) # GlobalAveragePooling2D(),
from keras.optimizers import Adam
opt = Adam(lr=0.0001)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy']) # sparse_categorical_crossentropy throws an error if used with one-hot encoded labels; see the TensorFlow documentation.
model.summary()
epochs = 10
steps_per_epoch = train_generator.n // batch_size
validation_steps = validation_generator.n // batch_size
history = model.fit_generator(train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
workers=4,
validation_data=validation_generator,
validation_steps=validation_steps)
scores = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
scores
# save model and architecture to single file
model.save("model.h5")
print("Saved model to disk")
###Output
_____no_output_____
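###Markdown
The saved file can later be reloaded for inference without retraining (a minimal sketch using the standard Keras API):
###Code
# reload the trained network from disk
from keras.models import load_model
reloaded_model = load_model("model.h5")
reloaded_model.summary()
###Output
_____no_output_____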
###Markdown
Cleanup
###Code
close_database(con)
###Output
_____no_output_____
###Markdown
Confusion Matrix
###Code
cm = confusion_matrix(y_test, y_pred_class)
# note: this tn/fp/fn/tp breakdown assumes a binary confusion matrix
# (sklearn's convention: rows are true labels, columns are predictions)
tn = cm[0,0]
fp = cm[0,1]
fn = cm[1,0]
tp = cm[1,1]
accuracy = (tp + tn)/(tn+tp+fn+fp)
precision = tp / (tp+fp)
recall = tp / (tp + fn)
f1_score = 2*precision*recall/(precision+recall)
print('My model metrics were: Accuracy: {}, Precision: {}, Recall: {}, and F1: {}'.format(accuracy,precision,recall,f1_score))
plt.matshow(cm)
plt.colorbar()
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show();
###Output
_____no_output_____
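###Markdown
As a cross-check on the hand-computed metrics (a sketch; assumes `y_test` and `y_pred_class` are the same label arrays used above):
###Code
# scikit-learn can report per-class precision, recall and F1 directly
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred_class))
###Output
_____no_output_____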
###Markdown
MVP Add webcam support
###Code
cap = cv2.VideoCapture(0)
while True:
ret, img = cap.read()
'''
lots of code here
'''
cv2.imshow('img', img)
#Display camera feed until ESC key is pressed
k = cv2.waitKey(30) & 0xff
if k == 27:
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Precision Evaluation
###Code
def iter_entity_decodes(path):
with gzip.open(path, 'rt', encoding='utf-8') as f:
for line in tqdm(f):
yield json.loads(line)['decode']
metrics = evaluate_decodes(iter_entity_decodes(os.path.join(data_path, 'output.jsonl.gz')))
from collections import defaultdict
import numpy
def print_latex_evaluation(metrics):
systems = ['base', 'sys@1', 'sys@5']
micro_scores = defaultdict(list)
macro_scores = defaultdict(list)
print('\\hline\\hline')
for r, system_scores in sorted(metrics.items(), key=lambda kv: relation_order.index(kv[0])):
r_fmt = r.replace('<', '').replace('>', '').replace('_', ' ')
r_fmt = relation_name_map.get(r_fmt, r_fmt)
r_fmt = '\\texttt{' + r_fmt + '}'
report = r_fmt.rjust(25)
N = len(system_scores[systems[0]])
report += ' & ' + "{:,}".format(N).rjust(6)
for s, scores in [(s, system_scores[s]) for s in systems]:
score = numpy.mean(scores)
micro_scores[s].extend(scores)
macro_scores[s].append(score)
report += ' & ' + ('%.1f' % (100 * score)).rjust(4)
print(report + ' \\\\')
print('\\hline\\hline')
print('\\multicolumn{2}{r}{\\texttt{Micro Avg.}}'.rjust(25+9) + ' & ' + ' & '.join(('%.1f' % (100 * numpy.mean(micro_scores[s]))).ljust(4) for s in systems) + ' \\\\')
print('\\multicolumn{2}{r}{\\texttt{Macro Avg.}}'.rjust(25+9) + ' & ' + ' & '.join(('%.1f' % (100 * numpy.mean(macro_scores[s]))).ljust(4) for s in systems) + ' \\\\')
print('\\hline')
print_latex_evaluation(metrics)
###Output
\hline\hline
\texttt{sex or gender} & 139,272 & 83.5 & 94.2 & 99.0 \\
\texttt{date of birth} & 118,414 & 0.2 & 75.4 & 80.5 \\
\texttt{occupation} & 111,462 & 11.8 & 69.8 & 88.1 \\
\texttt{given name} & 110,770 & 3.4 & 88.0 & 94.1 \\
\texttt{citizenship} & 102,246 & 28.1 & 89.2 & 94.7 \\
\texttt{place of birth} & 81,324 & 1.5 & 25.7 & 36.9 \\
\texttt{date of death} & 55,610 & 0.1 & 68.3 & 75.4 \\
\texttt{place of death} & 27,618 & 3.8 & 27.8 & 39.2 \\
\texttt{educated at} & 25,633 & 3.7 & 16.3 & 33.0 \\
\texttt{sport} & 23,067 & 56.9 & 87.1 & 98.1 \\
\texttt{sports team} & 21,841 & 0.5 & 17.0 & 31.3 \\
\texttt{position held} & 13,953 & 6.3 & 63.0 & 78.8 \\
\texttt{award received} & 12,196 & 4.6 & 38.8 & 56.6 \\
\texttt{family name} & 11,368 & 4.4 & 61.5 & 70.4 \\
\texttt{participant of} & 11,054 & 6.3 & 44.5 & 81.1 \\
\texttt{political party} & 10,409 & 18.3 & 60.6 & 83.8 \\
\hline\hline
\multicolumn{2}{r}{\texttt{Micro Avg.}} & 20.9 & 70.0 & 79.5 \\
\multicolumn{2}{r}{\texttt{Macro Avg.}} & 14.6 & 58.0 & 71.3 \\
\hline
###Markdown
Inspecting model output
###Code
instances = {}
with gzip.open(os.path.join(data_path, 'output.jsonl.gz'), 'rt', encoding='utf-8') as f:
for line in tqdm(islice(f, 5000)):
obj = json.loads(line)
instances[obj['instance_id']] = obj['decode']
instance = next(iter(instances.values()))
instance = dict(instance)
instance['decodes'] = {r: v[:1] for r, v in instance['decodes'].items()}
print(json.dumps(instance, indent=2))
###Output
{
"sources": [
"his highest score came in 1939 when he made an innings of 217 against northumberland , he shared in a minor counties record first-wicket partnership of 323 with | harold theobald | ."
],
"targets": {
"<date_of_death>": "1982 07 20",
"<date_of_birth>": "1896 03 18",
"<place_of_birth>": "norwich",
"<sex_or_gender>": "male",
"<country_of_citizenship>": "united kingdom",
"<place_of_death>": "norwich",
"<given_name>": "harold",
"<occupation>": "cricketer"
},
"decodes": {
"<given_name>": [
{
"score": -0.0072908401,
"tokens": [
"harold",
"</s>"
],
"source_id": 0,
"decoded": "harold"
}
],
"<family_name>": [
{
"score": -0.0005378723,
"tokens": [
"<unk>",
"</s>"
],
"source_id": 0,
"decoded": "<unk>"
}
],
"<sex_or_gender>": [
{
"score": 0.0,
"tokens": [
"male",
"</s>"
],
"source_id": 0,
"decoded": "male"
}
],
"<date_of_birth>": [
{
"score": -3.923664093,
"tokens": [
"1912",
"01",
"01",
"</s>"
],
"source_id": 0,
"decoded": "1912 01 01"
}
],
"<occupation>": [
{
"score": -0.0610628128,
"tokens": [
"cricketer",
"</s>"
],
"source_id": 0,
"decoded": "cricketer"
}
],
"<country_of_citizenship>": [
{
"score": -0.3711585999,
"tokens": [
"united",
"kingdom",
"</s>"
],
"source_id": 0,
"decoded": "united kingdom"
}
],
"<sport>": [
{
"score": -0.1953792572,
"tokens": [
"athletics",
"</s>"
],
"source_id": 0,
"decoded": "athletics"
}
],
"<date_of_death>": [
{
"score": -4.4913721085,
"tokens": [
"1960",
"01",
"01",
"</s>"
],
"source_id": 0,
"decoded": "1960 01 01"
}
],
"<place_of_birth>": [
{
"score": -4.8144302368,
"tokens": [
"chicago",
"</s>"
],
"source_id": 0,
"decoded": "chicago"
}
],
"<educated_at>": [
{
"score": -2.8010969162,
"tokens": [
"eton",
"college",
"</s>"
],
"source_id": 0,
"decoded": "eton college"
}
],
"<member_of_sports_team>": [
{
"score": -3.5370111465,
"tokens": [
"kolkata",
"knight",
"riders",
"</s>"
],
"source_id": 0,
"decoded": "kolkata knight riders"
}
],
"<place_of_death>": [
{
"score": -2.4424953461,
"tokens": [
"london",
"</s>"
],
"source_id": 0,
"decoded": "london"
}
],
"<position_held>": [
{
"score": -0.9836673737,
"tokens": [
"member",
"of",
"parliament",
"in",
"the",
"united",
"kingdom",
"</s>"
],
"source_id": 0,
"decoded": "member of parliament in the united kingdom"
}
],
"<participant_of>": [
{
"score": -0.7339658737,
"tokens": [
"1948",
"summer",
"olympics",
"</s>"
],
"source_id": 0,
"decoded": "1948 summer olympics"
}
],
"<member_of_political_party>": [
{
"score": -0.8230638504,
"tokens": [
"republican",
"party",
"</s>"
],
"source_id": 0,
"decoded": "republican party"
}
],
"<award_received>": [
{
"score": -0.4960975647,
"tokens": [
"wisden",
"cricketer",
"of",
"the",
"year",
"</s>"
],
"source_id": 0,
"decoded": "wisden cricketer of the year"
}
]
}
}
###Markdown
Sampling Generated Output
###Code
def format_source(source):
if source.count('|') != 2 or len(source) < 30:
return None
tokens = source.split()
span_left = tokens.index('|')
span_right = len(tokens) - tokens[::-1].index('|')
span = tokens[span_left:span_right]
left = tokens[:span_left]
right = tokens[span_right:]
max_window_sz = 10
output = '{\\small ' + ' '.join(left[-max_window_sz:]) + ' '
output += ''+ ' '.join(span) + ' '
output += '' + ' '.join(right[:max_window_sz]) + '}'
return output
print(format_source(instance['sources'][0]))
#for left, span, right in random.sample(elon['mentions'], 5):
# print '{\\small ...', ' '.join(left[-5:]), '} &', '\\textttt{'+ ' '.join(span) + '}', '& {\\small', ' '.join(right[:5]), '...} \\\\'
rels = [
'<given_name>',
'<occupation>',
'<country_of_citizenship>',
'<sex_or_gender>',
'<date_of_birth>'
]
from collections import Counter
exclude = {
'Ron Finley (American football)',
'Roger Sherman',
'Alison Smale',
'Abigail Mejia',
'Chris Bingham',
'Eudoxia Lopukhina',
'Mary Dickens',
'Thomas W. Lawson (businessman)',
'Rosalind Brewer',
'Savanna Samson',
'Alex Hicks',
'Austin Codrington',
'Horatio Earle'
}
count = 0
male_count = 0
gender_counts = Counter()
citizenship_counts = Counter()
for entity_id, instance in islice(instances.items(), 200):
sources = instance['sources']
target_relations = instance['targets']
decodes_by_rsi = {
r:{
si: [d for d in instance['decodes'][r] if d['source_id'] == si]
for si in range(len(sources))
} for r in rels
}
decode_ranks_by_rsi = {r:{
si: instance['decodes'][r].index(decodes_by_rsi[r][si][0])
for si in range(len(sources))
} for r in rels
}
si_ranks_by_r = {
r:
sorted(range(len(sources)), key=lambda si: instance['decodes'][r].index(decodes_by_rsi[r][si][0]))
for r in rels
}
male = decodes_by_rsi['<sex_or_gender>'][0][0]['tokens'][0] == 'male'
if entity_id in exclude or any(format_source(s) is None for s in sources):
continue
if gender_counts['male'] >= 3 and male:
continue
if citizenship_counts[target_relations.get('<country_of_citizenship>')] == 1 and target_relations['<country_of_citizenship>'] == 'united states of america':
continue
if count >= 4:
break
count += 1
gender_counts[target_relations['<sex_or_gender>']] += 1
citizenship_counts[target_relations.get('<country_of_citizenship>')] += 1
print('\\hline')
print('\\textbf{'+ entity_id + '}')
row = ''
for r in rels:
row += ' & ' + target_relations.get(r, '\\texttt{nil}')
row += ' \\\\'
print(row)
print('\\hline')
print('\\hline')
for si, s in enumerate(sources):
result = format_source(s)
for r in rels:
#target = target_relations.get(relation)=
decodes = decodes_by_rsi[r][si]
rank = si_ranks_by_r[r].index(si)
system = ' '.join(takewhile(lambda t: t != '</s>', decodes[0]['tokens']))
if rank != 0:
system = '\\textcolor[HTML]{777777}{' + system + '}'
result += ' & ' + system # + ':' + str(si_ranks_by_r[r].index(si))
result = result.replace('#', '\\#')
result += '\\\\'
print(result)
print('\\hline')
print_evaluation(metrics)
def trim_items_data(path, filter_rels=None):
with open(os.path.join(path, 'output.trimmed.jsonl'), 'w') as out:
with gzip.open(os.path.join(path, 'output.jsonl.gz'), 'rt', encoding='utf-8') as f:
for line in tqdm(f):
obj = json.loads(line)
for k in obj['decode']['decodes']:
obj['decode']['decodes'][k] = obj['decode']['decodes'][k][:1]
out.write(json.dumps(obj)+'\n')
def iter_items(path, filter_rels=None):
with gzip.open(os.path.join(path, 'output.jsonl.gz'), 'rt', encoding='utf-8') as f:
for line in tqdm(f):
obj = json.loads(line)
for relation, target in obj['decode']['targets'].items():
if filter_rels is None or relation in filter_rels:
system = obj['decode']['decodes'][relation][0]['decoded']
score = obj['decode']['decodes'][relation][0]['score']
source = obj['decode']['sources'][obj['decode']['decodes'][relation][0]['source_id']]
yield obj['instance_id'], source, relation, target, system, score
###Output
_____no_output_____
###Markdown
Performance vs Mention Count
###Code
def get_instance_sources(path, filter_rels=None):
inst_sources = {}
with gzip.open(os.path.join(path, 'output.jsonl.gz'), 'rt', encoding='utf-8') as f:
for line in tqdm(f):
obj = json.loads(line)
inst_sources[obj['instance_id']] = obj['decode']['sources']
return inst_sources
instance_sources = get_instance_sources(data_path)
subsampled_instance_source_counts = {}
def iter_subsampled_items(path, subsample_size, filter_rels=None):
subsampled_instance_source_counts.clear()
with gzip.open(os.path.join(path, 'output.jsonl.gz'), 'rt', encoding='utf-8') as f:
for line in tqdm(f):
obj = json.loads(line)
if len(obj['decode']['sources']) == subsample_size:
retain_idxs = set(random.sample(range(0, subsample_size), random.randint(1, subsample_size)))
else:
continue
subsampled_instance_source_counts[obj['instance_id']] = len(retain_idxs)
for relation, target in obj['decode']['targets'].items():
if filter_rels is None or relation in filter_rels:
decodes = [d for d in obj['decode']['decodes'][relation] if d['source_id'] in retain_idxs]
system = decodes[0]['decoded']
score = decodes[0]['score']
source = obj['decode']['sources'][decodes[0]['source_id']]
yield obj['instance_id'], source, relation, target, system, score
subsampled_items = list(iter_subsampled_items(data_path, 5))
def get_bootstrapped_ci(instances, interval, N):
scores = [numpy.mean(numpy.random.choice(instances, size=len(instances), replace=True)) for _ in range(N)]
score = numpy.mean(instances)
interval_slice = (100-interval)/2
lower, upper = numpy.percentile(scores, [interval_slice, 100-interval_slice])
return lower, score, upper
get_bootstrapped_ci([1.0 if i[3] == i[4] else 0.0 for i in items], 99, 500)
import matplotlib.pyplot as plt
import numpy as np
from itertools import cycle
ALPHA = 95
N_BOOTSTRAP_SAMPLES = 1000
def get_performance_vs_source_count_df(items, instance_source_counts):
df = []
for count in tqdm(range(1, 5+1)):
items_at_count = [i for i in items if instance_source_counts[i[0]] == count]
for r in relation_order:
results = [1.0 if i[3] == i[4] else 0.0 for i in items_at_count if i[2] == r]
lower, score, upper = get_bootstrapped_ci(results, ALPHA, N_BOOTSTRAP_SAMPLES)
df.append({
'count': count,
'fact': r,
'num_items': len(results),
'lower_bound': lower,
'score': score,
'upper_bound': upper,
})
return pd.DataFrame(df)
all_entities_df = get_performance_vs_source_count_df(items, {k:len(vs) for k, vs in instance_sources.items()})
subsampled_entities_df = get_performance_vs_source_count_df(subsampled_items, subsampled_instance_source_counts)
def plot_scores_vs_link_count_by_fact_type(df, legend=True):
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
lines = ["-","-."]
linecycler = cycle(lines)
relation_order_by_perf = list(df[df['count'] == 5].sort_values('score')['fact'])
relation_order_by_perf.reverse()
for key, group in sorted(list(df.groupby('fact')), key=lambda kv: relation_order_by_perf.index(kv[0])):
name = key.replace('<','').replace('>','').replace('_', ' ')
name = relation_name_map.get(name, name)
#ax.plot(group['count'], group['score'], label=name)
(_, caps, _) = ax.errorbar(
x=group['count'],
y=group['score'],
yerr=[group['score']-group['lower_bound'], group['upper_bound']-group['score']],
label=name,
capsize=5,
fmt='-o')
for cap in caps:
cap.set_markeredgewidth(1)
ax.set_xlim(1, 5)
ax.set_ylim(0, 1)
#plt.ylabel('Precision', fontsize='xx-large')
#plt.xlabel('Inlink Count', fontsize='xx-large')
plt.xticks([i for i in range(1, 5+1)], fontsize='large')
plt.yticks([i/10 for i in range(0, 11)], fontsize='large')
if legend:
plt.legend(bbox_to_anchor=(1.025, 1), loc=2, borderaxespad=0, fontsize='x-large',frameon=False)
plt.show()
plot_scores_vs_link_count_by_fact_type(all_entities_df)
def plot_macro_scores_vs_link_count(dfs, legend=True):
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
lines = ["-","-."]
linecycler = cycle(lines)
for k, df in dfs:
group = df.groupby('count').mean()
#ax.plot(group.index, group['score'], label=k)
(_, caps, _) = ax.errorbar(
x=group.index,
y=group['score'],
yerr=[group['score'] - group['lower_bound'], group['upper_bound']-group['score']],
label=k,
capsize=5,
fmt='-o')
for cap in caps:
cap.set_markeredgewidth(1)
ax.set_xlim(0.9, 5.1)
ax.set_ylim(0.2, 0.4)
#plt.ylabel('Macro Precision', fontsize='xx-large')
#plt.xlabel('Inlink Count', fontsize='xx-large')
plt.xticks([i for i in range(1, 5+1)], fontsize='large')
plt.yticks([i/20 for i in range(4, 10)], fontsize='large')
if legend:
plt.legend(fontsize='xx-large')
plt.show()
plot_macro_scores_vs_link_count([
('All entities', all_entities_df),
('5 or more inlinks', subsampled_entities_df)
])
metrics = evaluate_decodes(iter_entity_decodes(os.path.join(data_path, 'output.jsonl.gz')))
numpy.mean([len(vs) for vs in instance_sources.values()]), numpy.median([len(vs) for vs in instance_sources.values()])
from collections import Counter
items = list(iter_items(data_path))
random.seed(1447)
random.shuffle(items)
###Output
_____no_output_____
###Markdown
Threshold
###Code
def get_prfs_across_thresholds(items_iter):
scores = defaultdict(list)
items_by_r = defaultdict(list)
for item in items_iter:
iid, source, relation, target, system, score = item
items_by_r[relation].append(item)
scores[relation].append(score)
metrics = []
percentiles = [p/2 for p in list(range(0, 100*2, 1))]
for r in relation_order:
for percentile, threshold in zip(percentiles, numpy.percentile(scores[r], percentiles)):
tps = 0.
fps = 0.
fns = 0.
for iid, source, relation, target, system, score in items_by_r[r]:
if score >= threshold:
tps += 1.0 if target == system else 0.0
fps += 1.0 if target != system else 0.0
else:
fns += 1.0
P = 0. if tps == 0 else tps / (tps+fps)
R = 0. if tps == 0 else tps / (tps+fns)
F = 0. if tps == 0 else 2 * P * R / (P+R)
metrics.append({
'fact': r,
'percentile': percentile,
'threshold': threshold,
'P': P,
'R': R,
'F': F
})
return metrics
import matplotlib.pyplot as plt
import numpy as np
from itertools import cycle
def plot_pvsr_across_fact_types(df, legend=True):
relation_order_by_perf = list(df[df['percentile'] == 0].sort_values('P')['fact'])
relation_order_by_perf.reverse()
fig = plt.figure(figsize=(7,7))
ax = fig.add_subplot(111)
#cm = plt.get_cmap('jet')
#ax.set_color_cycle([cm( 1.*relation_order_by_perf.index(relation_order[i]) /len(relation_order)) for i in range(len(relation_order))])
lines = ["-","-."]
linecycler = cycle(lines)
for key, group in sorted(list(df.groupby('fact')), key=lambda kv: relation_order_by_perf.index(kv[0])):
group = group.sort_values('percentile')
name = key.replace('<','').replace('>','').replace('_', ' ')
name = relation_name_map.get(name, name)
ax.plot(group['R'], group['P'], label=name) #.plot(x='R', y='P',figsize=(16,8))
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
#plt.ylabel('Precision', fontsize='xx-large')
#plt.xlabel('Recall', fontsize='xx-large')
plt.xticks([i/10 for i in range(0, 11)], fontsize='large')
plt.yticks([i/10 for i in range(0, 11)], fontsize='large')
if legend:
plt.legend(bbox_to_anchor=(1.025, 1), loc=2, borderaxespad=0, fontsize='x-large',frameon=False)
plt.show()
#lnk_metrics_df = pd.DataFrame(get_prfs_across_thresholds(islice(iter_items('../data/results/mentions'), None)))
#bio_metrics_df = pd.DataFrame(get_prfs_across_thresholds(islice(iter_items('../data/results/bio'), None)))
plot_pvsr_across_fact_types(lnk_metrics_df)
plot_pvsr_across_fact_types(bio_metrics_df, legend=False)
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
#cm = plt.get_cmap('jet')
#ax.set_color_cycle([cm( 1.*relation_order_by_perf.index(relation_order[i]) /len(relation_order)) for i in range(len(relation_order))])
group = bio_metrics_df.groupby('percentile').mean()
ax.plot(group['R'], group['P'], 'o', label='BIO model')
group = lnk_metrics_df.groupby('percentile').mean()
ax.plot(group['R'], group['P'], 'o', label='LNK model')
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
#plt.ylabel('Precision', fontsize='xx-large')
#plt.xlabel('Recall', fontsize='xx-large')
plt.xticks([i/10 for i in range(0, 11)], fontsize='large')
plt.yticks([i/10 for i in range(0, 11)], fontsize='large')
plt.legend(loc=0, fontsize='xx-large')
plt.show()
items[0]
###Output
_____no_output_____
###Markdown
Date of birth Inference Analysis
###Code
target_years = []
system_years = []
for entity, source, relation, target, system, score in items:
if relation == '<date_of_birth>' and system.split()[0].isdigit() and target.split()[0].isdigit():
target_years.append(int(target.split()[0]))
system_years.append(int(system.split()[0]))
from scipy.stats import pearsonr
pearsonr(target_years, system_years)
import numpy
numpy.median(numpy.abs(numpy.array(target_years) - numpy.array(system_years)))
###Output
_____no_output_____
###Markdown
Explicitness Annotation
###Code
import ipywidgets as widgets
from IPython.display import display
from ipywidgets import interact, interactive, fixed, interact_manual
from ipywidgets import Layout
from time import time
items = list(iter_items(data_path, relation_order[:5]))
NUM_SAMPLES = 250
MAX_SAMPLES_PER_RELATION = NUM_SAMPLES/5
def iter_samples(samples, max_per_relation, exclude_ids, relation_counts):
for entity, source, relation, target, system, score in samples:
if relation_counts[relation] >= max_per_relation:
continue
iid = entity + ':' + relation
if iid not in exclude_ids:
yield {
'id': iid,
'entity': entity,
'relation': relation,
'source': source,
'target': target,
}
relation_counts[relation] += 1
annotations_path = os.path.join(data_path, 'annotations.jsonl')
annotated_item_ids = set()
relation_counts = Counter()
with open(annotations_path, 'r') as f:
for line in f:
annotation = json.loads(line)
annotated_item_ids.add(annotation['item']['id'])
if annotation['annotation']['decision'] != 'error':
relation_counts[annotation['item']['relation']] += 1
pending = list(iter_samples(items, MAX_SAMPLES_PER_RELATION, annotated_item_ids, relation_counts))
state = {
'item': None
}
'Pending', len(pending)
text_source = widgets.HTML()
text_target = widgets.HTML(layout=Layout(width='50%'))
text_relation = widgets.HTML(layout=Layout(width='50%'))
chk_interesting = widgets.Checkbox(value=False, description='Interesting?', disabled=False)
btn_decision = widgets.ToggleButtons(
options=['Explicit', 'Justified', 'Guessable', 'Not Justified', 'Error'],
description='Inference:',
disabled=False,
button_style='',
tooltips=['', '', ''],
)
progress_text = widgets.HTML()
progress_bar = widgets.IntProgress(
min=0,
max=len(annotated_item_ids)+len(pending),
step=1,
description='Progress:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
display(widgets.VBox([
text_source,
text_relation,
text_target
], layout=Layout(width='100%')))
display(chk_interesting)
display(btn_decision)
display(widgets.HBox([progress_bar, progress_text]))
def update():
item = state['item']
chk_interesting.value = False
btn_decision.value = None
if item:
text_source.value = """
<div style='margin:10px; padding: 20px;background: #eee'>""" + item['source'] + """</div>
"""
rel_fmt = item['relation'].replace('<', '').replace('>', '').replace('_', ' ').title()
text_relation.value = """
<b>""" + rel_fmt + """</b>: \"""" + item['target'] + """\"
"""
else:
text_source.value = 'Nothing left to annotated!'
text_relation.value=text_target.value=''
num_completed = len(annotated_item_ids)
progress_text.value = '[%d / %d] ' % (num_completed, num_completed+len(pending))
progress_bar.value = num_completed
def get_next_item(state):
if pending:
return pending[0]
def commit():
with open(annotations_path, 'a+') as f:
f.write(json.dumps({
'time': time(),
'item': state['item'],
'annotation': {
'interesting': chk_interesting.value,
'decision': btn_decision.value.lower()
}
}) + '\n')
annotated_item_ids.add(state['item']['id'])
pending.pop(0)
def transition():
state['item'] = get_next_item(state)
if not state['item']:
btn_decision.disabled=True
update()
def on_decision(b):
if btn_decision.value != None and state['item']:
commit()
transition()
btn_decision.observe(on_decision, names='value')
transition()
category_map = {
'explicit': 'explicit',
'justified': 'reasonable',
'guessable': 'guessable',
'not justified': 'unjustified',
'error': None
}
relation_counts = Counter()
stats = defaultdict(Counter)
with open(annotations_path, 'r') as f:
for line in f:
annotation = json.loads(line)
relation_counts[annotation['item']['relation']] += 1
if relation_counts[annotation['item']['relation']] <= MAX_SAMPLES_PER_RELATION:
decision = category_map[annotation['annotation']['decision']]
if decision is not None:
stats[annotation['item']['relation']][decision] += 1
report_catgories = sorted(set(v for v in category_map.values() if v is not None), key=lambda c: list(category_map.values()).index(c))
print('Fact Type'.rjust(30) + ' & ' + ' & '.join((r.title() + '').rjust(15) for r in report_catgories), '\\\\')
print('\\hline\\hline')
category_agg = defaultdict(int)
category_totals = defaultdict(int)
for relation, counts in stats.items():
report = relation.replace('<', '').replace('>', '').replace('_', ' ')
report = report.replace('country of ', '')
report = '\\texttt{' + report + '}'
report = report.rjust(30)
for c in report_catgories:
category_agg[c] += counts[c]
category_totals[c] += sum(counts.values())
report += ' & ' + ('%.1f' % (counts[c]*100 / sum(counts.values()))).rjust(15)
print(report + ' \\\\')
print('\\hline')
print('All Types'.rjust(30) + ' & ' + ' & '.join(
('%.1f'%(category_agg[c]*100/category_totals[c])).rjust(15) for c in report_catgories) + ' \\\\')
from collections import Counter
items = []
with gzip.open(os.path.join(data_path, 'output.jsonl.gz'), 'rt', encoding='utf-8') as f:
for line in tqdm(islice(f, 1000)):
obj = json.loads(line)
        source = random.choice(obj['decode']['sources'])
        for relation, target in obj['decode']['targets'].items():
            # the top decode's score for this relation (computed inside the loop so `relation` is defined)
            score = obj['decode']['decodes'][relation][0]['score']
            system = ' '.join(takewhile(lambda t: t != '</s>', obj['decode']['decodes'][relation][0]['tokens']))
            if relation_order.index(relation) < 5:
                items.append((obj['instance_id'], source, relation, target, system, score))
random.seed(1447)
random.shuffle(items)
items = sorted(items, key=lambda i: i[-1], reverse=True)
items[:10]
for r in relation_order[:5]:
print('\t', r)
print()
ris = [i for i in items if i[2] == r]
for i in ris[:5]:
print(i[1])
print('\t', i[3], '=', i[4])
print()
###Output
<sex_or_gender>
fortunately for augustus , his commanding officer , captain | william m. gardner | , testified on his behalf and prevailed on the court to suspend the harsher penalties .
male = male
| zbigniew beta | " "
male = male
he was the father of egyptologist | georges foucart | .
male = male
prior to independence , he became minister of the interior , posts and telecommunications in 1976 , as part of the transitional government headed by | abdallah mohamed kamil | .
male = male
van hoolwerff , as helmsman on the dutch 8 metre " hollandia " , took the 2nd place with fellow crew members : | lambertus doedes | , henk kersken , cornelis van staveren , gerard
male = male
<date_of_birth>
from 1980 to 1982 , he served in the cabinet as minister of public works under president elias sarkis and prime minister | shafik wazzan | .
1925 01 01 = 1928 01 01
wales : billy bancroft ( swansea ) , tom pearson ( cardiff ) , dickie garrett ( penarth ) , charlie thomas ( newport ) , percy lloyd ( llanelli ) , evan james (
1865 08 22 = 1878 01 01
captain boom is a filipino comic book character created by | mars ravelo | and illustrated by his son ric ravelo .
1916 10 09 = 1973 01 01
douglas rain did the narration for the english version ; the french version was titled notre univers with narration by | gilles pelletier | .
1925 03 22 = 1959 01 01
he was born in mt . morris , new york , younger brother to saxophonist pat labarbera , and trumpeter and arranger/composer | john labarbera | .
1945 11 10 = 1928 01 01
<occupation>
kamakura was finally attacked by shogun | ashikaga yoshinori | and retaken by force .
samurai = politician
soto , norberto luis romero , care santos , | josé carlos somoza | , josé maría tamparillas , david torres , josé miguel vilar-bou and marian womack .
psychiatrist = painter
| george fenneman | was the announcer , and basil adlam led the orchestra .
actor = radio personality
madani competed in men 's doubles at the 1979 us open where he partnered mike myburg but were beaten by | steve docherty | and john james in the first round .
tennis player = tennis player
in february 2016 , eight players signed as full-time players for the first time in the club 's history : emily simpkins , rhiannon roberts , | courtney sweetman-kirk | , natasha dowie , becky
association football player = association football player
<given_name>
the current dean of carey business school is | bernard t. ferrari | .
bernard = bernard
brother of damien and | darren gaspar | .
darren = darren
professor of the moscow theological academy | a.i. osipov | analyzes the teaching on the prayer by st . ignatius ( brianchaninov ) and points out that the prayer should have three properties : attention
alexei = constantin
once again , the british government did not wish to entrust its interests in japan to a foreign officer , so the british chiefs of staff appointed air vice marshal | cecil bouchier | as
cecil = cecil
| michael faulds | of western mustangs was in sixth place just behind brannagan with 9,137 career passing yards and justin dunk , of the guelph gryphons was seventh with 9,093 passing yards .
michael = michael
<country_of_citizenship>
the name was coined by | clemo | and popularized by kenyan rappers jua cali and nonini who started off at calif records , and is commonly sung in sheng ( slang ) , swahili
kenya = japan
throughout the series , ned is played by devon werkheiser , cookie by | daniel curtis lee | , and lindsey shaw as moze .
united states of america = united states of america
gohan is voiced in the original japanese anime and all other media by | masako nozawa | .
japan = japan
planned by | henric piccardt | ( 1636-1712 ) , lord of the manors of klein martijn at harkstede and the fraeylemaborg at slochteren , it was built by two master craftsmen from the city
netherlands = germany
polonora again led his team in scoring with a game-best 20 points , complemented by 15 points and 8 assists from andrea cinciarini whilst | jeff brooks | paced sassari with 18 points and 11
united states of america = united states of america
###Markdown
Geo Query Dataset Analysis
###Code
import os
import sys
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import utils
%matplotlib inline
%load_ext autoreload
%autoreload 2
CSV_PATH = '../../data/unique_counts_semi.csv'
# load data
initial_df = utils.load_queries(CSV_PATH)
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Do some cleanup
###Code
# filter out queries with length less than 2 characters long
start_num = len(initial_df)
df = utils.clean_queries(initial_df)
print("{} distinct queries after stripping {} queries of length 1".format(len(df), start_num-len(df)))
print("Yielding a total of {} query occurrences.".format(df['countqstring'].sum()))
###Output
39135 distinct queries after stripping 38 queries of length 1
Yielding a total of 84011 query occurrences.
###Markdown
Query Frequency Analysis Let's take a look
###Code
df.head(10)
###Output
_____no_output_____
###Markdown
The frequency of queries drops off pretty quickly, suggesting a long tail of low frequency queries. Let's get a sense of this by looking at the cumulative coverage of queries with frequencies between 1 and 10. While we're at it, we can plot the cumulative coverage up until a frequency of 200 (in ascending order of frequency).
###Code
total = df['countqstring'].sum()
fig, ax = plt.subplots(ncols=2, figsize=(20, 8))
cum_coverage = pd.Series(range(1,200)).apply(lambda n: df[df['countqstring'] <= n]['countqstring'].sum())/total
cum_coverage = cum_coverage*100
cum_coverage = cum_coverage.round(2)
# plot the cumulative coverage
cum_coverage.plot(ax=ax[0])
ax[0].set_xlabel('Query Frequency')
ax[0].set_ylabel('Cumulative Coverage (%)')
# see if it looks Zipfian. ie plot a log-log graph of query frequency against query rank
df.plot(ax=ax[1], y='countqstring', use_index=True, logx=True, logy=True)
ax[1].set_xlabel('Rank of Query (ie most frequent to least frequent)')
ax[1].set_ylabel('Query Frequency');
print("Freq Cumulative Coverage")
for i, val in enumerate(cum_coverage[:10].get_values()):
print("{:>2} {:0<5}%".format(i+1, val))
###Output
Freq Cumulative Coverage
1 29.71%
2 48.20%
3 58.36%
4 64.60%
5 68.29%
6 71.33%
7 73.12%
8 75.03%
9 76.36%
10 77.66%
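###Markdown
As a rough check on the log-log plot above (a small sketch; assumes `df` is sorted by descending `countqstring`, as in the rank plot), we can fit a line in log-log space to estimate the slope of the implied power law:
###Code
import numpy as np
# a roughly linear log-log relationship with a negative slope is consistent with a Zipf-like distribution
ranks = np.arange(1, len(df) + 1)
freqs = df['countqstring'].values
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print("fitted log-log slope: {:.2f}".format(slope))
###Output
_____no_output_____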
###Markdown
ie queries with a frequency of 1 account for about 30% of queries, queries with a frequency of 2 or less account for 48%, 3 or less account for 58%, etc. Looking at the graph, it seems like the coverage rate drops off exponentially. Plotting a log-log graph of the query frequencies (y-axis) against the descending rank of the query frequency (x-axis) shows a linear-ish trend, suggesting that the distribution does indeed follow something like an inverse power law. Annotator Results for Pilot Annotation Round
The pilot annotation round consisted of 50 queries sampled randomly from the total 84011 query instances. Below is a summary of the annotators' results. Annotation Codes Map
Q2.
```
'YY' = Yes -- with place name
'YN' = Yes -- without place name
'NY' = No (but still a place)
'NN' = Not applicable (ie not explicit location and not a place)
```
Q3.
```
'IAD' = INFORMATIONAL_ADVICE
'IDC' = INFORMATIONAL_DIRECTED_CLOSED
'IDO' = INFORMATIONAL_DIRECTED_OPEN
'ILI' = INFORMATIONAL_LIST
'ILO' = INFORMATIONAL_LOCATE
'IUN' = INFORMATIONAL_UNDIRECTED
'NAV' = NAVIGATIONAL
'RDE' = RESOURCE_ENTERTAINMENT
'RDO' = RESOURCE_DOWNLOAD
'RIN' = RESOURCE_INTERACT
'ROB' = RESOURCE_OBTAIN
```
###Code
print(utils.get_user_results('annotator1'))
print('\n')
print(utils.get_user_results('annotator2'))
print('\n')
print(utils.get_user_results('martin'))
###Output
*** Annotator: annotator1 ***
===================================
1 Skipped Queries:
"has it been f"
--- "Appear to be incomplete; fragment lacks explicit reference to anything"
49 Annotations:
1. Is this query best answered with a pin on a map?
True: 8
False: 41
2. Is a location explicit in the query?
YY: 16
NY: 11
NN: 22
3. What type of query is this?
IAD: 1
IDC: 5
ILI: 4
ILO: 8
IUN: 22
NAV: 8
RDE: 1
*** Annotator: annotator2 ***
===================================
2 Skipped Queries:
"has it been f"
--- "Appears to be an incomplete question"
"http:/www.eatability.com.au/au/sydney/chat-thai-sydney/"
--- "Looks like they entered a url into the search bar. Cannot work out what it's about."
48 Annotations:
1. Is this query best answered with a pin on a map?
True: 11
False: 37
2. Is a location explicit in the query?
YY: 14
YN: 1
NY: 19
NN: 14
3. What type of query is this?
IAD: 1
IDC: 4
IDO: 2
ILI: 7
ILO: 15
IUN: 8
NAV: 9
RIN: 2
*** Annotator: martin ***
===================================
1 Skipped Queries:
"has it been f"
--- "Incomplete query"
49 Annotations:
1. Is this query best answered with a pin on a map?
True: 22
False: 27
2. Is a location explicit in the query?
YY: 17
YN: 1
NY: 14
NN: 17
3. What type of query is this?
IAD: 3
IDC: 3
IDO: 2
ILI: 11
ILO: 15
NAV: 12
RDE: 1
ROB: 2
###Markdown
Comments
* It looks like Martin leant substantially more towards annotating queries as being geographical, ie is_geo = True for Q1, compared to both annotators.
* For Q3, Annotator 1 was biased slightly towards INFORMATIONAL_UNDIRECTED, whereas Annotator 2 was biased towards INFORMATIONAL_LOCATE. Martin, on the other hand, favoured INFORMATIONAL_UNDIRECTED, INFORMATIONAL_LOCATE, and NAVIGATIONAL, compared to the remaining categories.
* What should we do about URLs? Annotator 2 skipped the one URL. Martin and Annotator 1 labelled it as Web Navigational but disagreed regarding whether the location was explicit: Martin said 'YN', Annotator 1 said 'NN'.
Inter-annotator Agreement Scores for Pilot Annotation
The following results present inter-annotator agreement for the pilot round using Fleiss' kappa.
A super handwavy consensus guide to interpreting kappa scores for annotation exercises in computational linguistics (Artstein and Poesio 2008:576):
* kappa > 0.8 = good reliability
* 0.67 < kappa < 0.8 = tentative conclusions may be drawn regarding the reliability of the data
###Code
user_pairs = [
['annotator1', 'annotator2'],
['martin', 'annotator1'],
['martin', 'annotator2'],
]
results = utils.do_iaa_pairs(user_pairs)
utils.print_iaa_pairs(results, user_pairs)
###Output
annotator1, annotator2 martin, annotator1 martin, annotator2
Q1: 0.541 0.321 0.400
Q2: 0.659 0.461 0.505
Q3: 0.318 0.264 0.303
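###Markdown
For reference, Fleiss' kappa can also be computed directly with statsmodels. The sketch below uses hypothetical toy labels rather than the real annotation data, and assumes the statsmodels inter-rater helpers are available:
###Code
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa
# toy Q2-style labels: rows are items, columns are the three annotators
toy_labels = np.array([
    ['YY', 'YY', 'NY'],
    ['NN', 'NY', 'NN'],
    ['NY', 'NY', 'NY'],
    ['NN', 'NN', 'NN'],
])
codes = {'YY': 0, 'YN': 1, 'NY': 2, 'NN': 3}
ratings = np.vectorize(codes.get)(toy_labels)
table, _ = aggregate_raters(ratings)  # per-item counts of raters choosing each category
print(fleiss_kappa(table))
###Output
_____no_output_____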
###Markdown
These scores are not particularly high. We're struggling to get into even 'tentative' reliability land. We're probably going to need to do some disagreement analysis to work out what's going on.
We can, however, look at agreement for Q2 and Q3 using a coarser level of agreement. For Q2, this is whether annotators agreed that a location was explicit in the query (but ignoring whether the query included a place name). For Q3, this is whether they agreed that the query was navigational, informational, or pertaining to a resource.
###Code
results = utils.do_iaa_pairs(user_pairs, questions=(2,3), level='coarse')
utils.print_iaa_pairs(results, user_pairs)
###Output
annotator1, annotator2 martin, annotator1 martin, annotator2
Q2: 0.857 0.861 0.719
Q3: 0.619 0.661 0.488
###Markdown
Agreement has improved, especially for Q2. Q3, however, is still a bit on the low side. Disagreements
###Code
for question in (1,2,3):
print(utils.show_agreement(question, ['annotator1', 'annotator2', 'martin']))
print('\n')
###Output
Question 1:
Number all agree: 31
Number with some disagreement: 17
annotator1 annotator2 martin
0 0 1 Movies in Sydney George st cinemas
0 0 1 Double room best for couple or two girls or partner. student accommodation
0 0 1 sale women
0 0 1 Flatshare
0 1 0 preise im sefton playhouse
0 1 1 ogilvy
0 0 1 Other Jobs
0 0 1 skechers
0 0 1 Hospitality
1 0 1 gilly hicks sydney australia
0 0 1 Jobs
0 0 1 nike free 5.0
1 0 1 negative-ions sydney
0 1 1 gucci
0 0 1 Keyboards
0 1 1 sculpture sydney
0 1 1 taronga zo
Question 2:
Number all agree: 27
Number with some disagreement: 21
annotator1 annotator2 martin
NN NY NY Moving
NN YN NY Double room best for couple or two girls or partner. student accommodation
NN NY YY bdnews24 bangla
NN NN YY msn jp
NY NY NN Live-out Nannies
NY NY NN the little black jacket
NN NY NN tory burch patent zip wallet
NY NY NN Gardening
NN NY NY ogilvy
NY NY NN onlinecomputers
NN NY NY skechers
NN NN NY Hospitality
NN NY NN cube
YY YY NY negative-ions sydney
NN NY NY gucci
YY NN YY nz copyright tribuna'
NY NY NN sony xperia s outright
NN NN NY rob mills
NY NY NN Real Estate
NN NY NY Keyboards
YY NN YY nsw transport
Question 3:
Number all agree: 14
Number with some disagreement: 34
annotator1 annotator2 martin
ILI ILO IAD Moving
ILO ILO ILI Used Cars subaru outback NSW
RDE IUN RDE gangnam style
IDC NAV ILI Movies in Sydney George st cinemas
IAD ILI ILO Double room best for couple or two girls or partner. student accommodation
ILO IDC ILO building 10 26-34 Dunning Ave Rosebery NSW
IUN ILO ILI Live-out Nannies
ILO ILI ILI sale women
IUN ILO ROB the little black jacket
IUN ILI ILI Flatshare
IUN ILO ILI tory burch patent zip wallet
IUN ILI IDO Gardening
IUN NAV ILO ogilvy
IUN IUN ILO noosa
IUN ILO NAV onlinecomputers
ILI IAD ILI Other Jobs
IUN ILO ILO skechers
IUN IUN ILO Hospitality
ILO IDO ILO gilly hicks sydney australia
IUN IUN IDO cube
IUN ILO ILI nike free 5.0
IUN ILO ILO gucci
IUN NAV ILI sa law society
ILI IDO IAD 3D47A7000i review
IUN IUN NAV nz copyright tribuna'
IUN ILO ILO bn2411 prad
IDC RIN ROB sony xperia s outright
IUN IUN NAV rob mills
IUN ILI NAV Real Estate
IUN IUN ILO Keyboards
IUN ILI ILI sculpture sydney
NAV RIN NAV nsw transport
IUN ILO ILO taronga zo
IUN IUN IAD pain in neck vertebrae
###Markdown
Analysis of Algorithms [Click here to run this chapter on Colab](https://colab.research.google.com/github/AllenDowney/DSIRP/blob/main/notebooks/analysis.ipynb) **Analysis of algorithms** is a branch of computer science that studies the performance of algorithms, especially their run time and space requirements. See .
The practical goal of algorithm analysis is to predict the performance of different algorithms in order to guide design decisions.
During the 2008 United States Presidential Campaign, candidate Barack Obama was asked to perform an impromptu analysis when he visited Google. Chief executive Eric Schmidt jokingly asked him for "the most efficient way to sort a million 32-bit integers." Obama had apparently been tipped off, because he quickly replied, "I think the bubble sort would be the wrong way to go." See .
This is true: bubble sort is conceptually simple but slow for large datasets. The answer Schmidt was probably looking for is "radix sort" ().
But if you get a question like this in an interview, I think a better answer is, "The fastest way to sort a million integers is to use whatever sort function is provided by the language I'm using. Its performance is good enough for the vast majority of applications, but if it turned out that my application was too slow, I would use a profiler to see where the time was being spent. If it looked like a faster sort algorithm would have a significant effect on performance, then I would look around for a good implementation of radix sort."
The goal of algorithm analysis is to make meaningful comparisons between algorithms, but there are some problems:
- The relative performance of the algorithms might depend on characteristics of the hardware, so one algorithm might be faster on Machine A, another on Machine B. The usual solution to this problem is to specify a **machine model** and analyze the number of steps, or operations, an algorithm requires under a given model.
- Relative performance might depend on the details of the dataset. For example, some sorting algorithms run faster if the data are already partially sorted; other algorithms run slower in this case. A common way to avoid this problem is to analyze the **worst case** scenario. It is sometimes useful to analyze average case performance, but that's usually harder, and it might not be obvious what set of cases to average over.
- Relative performance also depends on the size of the problem. A sorting algorithm that is fast for small lists might be slow for long lists. The usual solution to this problem is to express run time (or number of operations) as a function of problem size, and group functions into categories depending on how quickly they grow as problem size increases.
The good thing about this kind of comparison is that it lends itself to simple classification of algorithms. For example, if I know that the run time of Algorithm A tends to be proportional to the size of the input, $n$, and Algorithm B tends to be proportional to $n^2$, then I expect A to be faster than B, at least for large values of $n$.
This kind of analysis comes with some caveats, but we'll get to that later. Order of growth
Suppose you have analyzed two algorithms and expressed their run times in terms of the size of the input: Algorithm A takes $100n+1$ steps to solve a problem with size $n$; Algorithm B takes $n^2 + n + 1$ steps.
The following table shows the run time of these algorithms for different problem sizes:
###Code
import numpy as np
import pandas as pd
n = np.array([10, 100, 1000, 10000])
table = pd.DataFrame(index=n)
table['Algorithm A'] = 100 * n + 1
table['Algorithm B'] = n**2 + n + 1
table['Ratio (B/A)'] = table['Algorithm B'] / table['Algorithm A']
table
###Output
_____no_output_____
###Markdown
At $n=10$, Algorithm A looks pretty bad; it takes almost 10 times longer than Algorithm B. But for $n=100$ they are about the same, and for larger values A is much better.

The fundamental reason is that for large values of $n$, any function that contains an $n^2$ term will grow faster than a function whose leading term is $n$. The **leading term** is the term with the highest exponent.

For Algorithm A, the leading term has a large coefficient, 100, which is why B does better than A for small $n$. But regardless of the coefficients, there will always be some value of $n$ where $a n^2 > b n$, for any values of $a$ and $b$ (namely, once $n > b/a$).

The same argument applies to the non-leading terms. Suppose the run time of Algorithm C is $n+1000000$; it would still be better than Algorithm B for sufficiently large $n$.
###Code
import numpy as np
import pandas as pd
n = np.array([10, 100, 1000, 10000])
table = pd.DataFrame(index=n)
table['Algorithm C'] = n + 1000000
table['Algorithm B'] = n**2 + n + 1
table['Ratio (C/B)'] = table['Algorithm B'] / table['Algorithm C']
table
###Output
_____no_output_____
###Markdown
In general, we expect an algorithm with a smaller leading term to be a better algorithm for large problems, but for smaller problems, there may be a **crossover point** where another algorithm is better. The following figure shows the run times (in arbitrary units) for the three algorithms over a range of problem sizes. For small problem sizes, Algorithm B is the fastest, but for large problem sizes, it is the worst. In the figure, we can see where the crossover points are.
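As a quick numeric cross-check before the plot, here is a small sketch that locates those crossover points directly (the `crossover` helper and the `cost_*` lambdas are introduced here for illustration; they just mirror the step counts used above):

```python
def crossover(f, g, n_max=2_000_000):
    """Return the smallest n at which f(n) exceeds g(n), or None if it never does."""
    for n in range(1, n_max):
        if f(n) > g(n):
            return n
    return None

cost_a = lambda n: 100 * n + 1
cost_b = lambda n: n**2 + n + 1
cost_c = lambda n: n + 1_000_000

print(crossover(cost_b, cost_a))  # 100: B becomes slower than A from n = 100 on
print(crossover(cost_b, cost_c))  # 1000: B becomes slower than C from n = 1000 on
```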
###Code
import matplotlib.pyplot as plt
ns = np.arange(10, 1500)
ys = 100 * ns + 1
plt.plot(ns, ys, label='Algorithm A')
ys = ns**2 + ns + 1
plt.plot(ns, ys, label='Algorithm B')
ys = ns + 1_000_000
plt.plot(ns, ys, label='Algorithm C')
plt.yscale('log')
plt.xlabel('Problem size (n)')
plt.ylabel('Run time')
plt.legend();
###Output
_____no_output_____
###Markdown
The location of these crossover points depends on the details of the algorithms, the inputs, and the hardware, so it is usually ignored for purposes of algorithmic analysis. But that doesn't mean you can forget about it.

Big O notation

If two algorithms have the same leading order term, it is hard to say which is better; again, the answer depends on the details. So for algorithmic analysis, functions with the same leading term are considered equivalent, even if they have different coefficients.

An **order of growth** is a set of functions whose growth behavior is considered equivalent. For example, $2n$, $100n$ and $n+1$ belong to the same order of growth, which is written $O(n)$ in **Big-O notation** and often called **linear** because every function in the set grows linearly with $n$. All functions with the leading term $n^2$ belong to $O(n^2)$; they are called **quadratic**.

The following table shows some of the orders of growth that appear most commonly in algorithmic analysis, in increasing order of badness.

| Order of growth | Name |
|-----------------|---------------------------|
| $O(1)$ | constant |
| $O(\log_b n)$ | logarithmic (for any $b$) |
| $O(n)$ | linear |
| $O(n \log_b n)$ | linearithmic |
| $O(n^2)$ | quadratic |
| $O(n^3)$ | cubic |
| $O(c^n)$ | exponential (for any $c$) |

For the logarithmic terms, the base of the logarithm doesn't matter; changing bases is the equivalent of multiplying by a constant, which doesn't change the order of growth. Similarly, all exponential functions belong to the same order of growth regardless of the base of the exponent. Exponential functions grow very quickly, so exponential algorithms are only useful for small problems.

Exercise

Read the Wikipedia page on Big-O notation and answer the following questions:

1. What is the order of growth of $n^3 + n^2$? What about $1000000 n^3 + n^2$? What about $n^3 + 1000000 n^2$?
2. What is the order of growth of $(n^2 + n) \cdot (n + 1)$? Before you start multiplying, remember that you only need the leading term.
3. If $f$ is in $O(g)$, for some unspecified function $g$, what can we say about $af+b$, where $a$ and $b$ are constants?
4. If $f_1$ and $f_2$ are in $O(g)$, what can we say about $f_1 + f_2$?
5. If $f_1$ is in $O(g)$ and $f_2$ is in $O(h)$, what can we say about $f_1 + f_2$?
6. If $f_1$ is in $O(g)$ and $f_2$ is in $O(h)$, what can we say about $f_1 \cdot f_2$?

Programmers who care about performance often find this kind of analysis hard to swallow. They have a point: sometimes the coefficients and the non-leading terms make a real difference. Sometimes the details of the hardware, the programming language, and the characteristics of the input make a big difference. And for small problems, order of growth is irrelevant.

But if you keep those caveats in mind, algorithmic analysis is a useful tool. At least for large problems, the "better" algorithm is usually better, and sometimes it is *much* better. The difference between two algorithms with the same order of growth is usually a constant factor, but the difference between a good algorithm and a bad algorithm is unbounded!

Example: Adding the elements of a list

In Python, most arithmetic operations are constant time; multiplication usually takes longer than addition and subtraction, and division takes even longer, but these run times don't depend on the magnitude of the operands.
Very large integers are an exception; in that case the run time increases with the number of digits.

A `for` loop that iterates a list is linear, as long as all of the operations in the body of the loop are constant time. For example, adding up the elements of a list is linear:
###Code
def compute_sum(t):
total = 0
for x in t:
total += x
return total
t = range(10)
compute_sum(t)
###Output
_____no_output_____
###Markdown
The built-in function `sum` is also linear because it does the same thing, but it tends to be faster because it is a more efficient implementation; in the language of algorithmic analysis, it has a smaller leading coefficient.
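To make the difference in leading coefficients visible, a rough sketch is to time both functions for a few input sizes and normalize by $n$; the per-element cost of each should be roughly constant, with `sum` having the smaller constant (exact numbers depend on the machine; `compute_sum` is the function defined above):

```python
import timeit

for n in [10_000, 100_000, 1_000_000]:
    data = range(n)
    t_loop = timeit.timeit(lambda: compute_sum(data), number=10) / 10
    t_builtin = timeit.timeit(lambda: sum(data), number=10) / 10
    # seconds per element for the explicit loop vs. the built-in
    print(n, t_loop / n, t_builtin / n)
```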
###Code
%timeit compute_sum(t)
%timeit sum(t)
###Output
_____no_output_____
###Markdown
Old API (before shap 0.36)
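For contrast, a minimal sketch of the `Explanation`-based API introduced in shap 0.36+ might look like the following (assuming the same `f`, `background_samples`, and `samples` as below; which explainer `shap.Explainer` dispatches to depends on the model and the shap version):

```python
explainer = shap.Explainer(f, background_samples)
shap_values = explainer(samples)
# average contribution of each feature to the prediction
shap.plots.bar(shap_values)
# direction of each feature's effect as well as its magnitude
shap.plots.beeswarm(shap_values)
# per-sample contribution of each feature
shap.plots.force(shap_values[0])
```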
###Code
explainer = shap.KernelExplainer(f, background_samples)
shap_values = explainer.shap_values(samples, nsamples=50)
# average contribution of each feature to the predicted value
shap.summary_plot(shap_values[0], samples, plot_type="bar")
# in addition to the above, how increases and decreases in each feature relate to increases and decreases in the prediction
shap.summary_plot(shap_values[0], samples)
# contribution of each feature to the predicted value, per sample
shap.force_plot(explainer.expected_value, shap_values[0], samples)
###Output
_____no_output_____
###Markdown
Analysis of covered and uncovered files
###Code
from pathlib import Path
import pickle  # needed below to load the train/valid/test file splits
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt  # needed for the histograms further down
dPath = Path("../docs/Data")
all_files = []
for file in dPath.iterdir():
all_files.append(file)
li = []
for file in all_files:
df = pd.read_csv(file)
df = df.replace('?',np.NaN)
df[df.columns[3:]] = df[df.columns[3:]].apply(pd.to_numeric)
df.dropna(inplace=True)
li.append(df)
dPath = Path("../docs/dumps")
with open(dPath / "train_files.pkl", 'rb') as filename:
train_files = pickle.load(filename)
with open(dPath / "valid_files.pkl", 'rb') as filename:
valid_files = pickle.load(filename)
with open(dPath / "test_files.pkl", 'rb') as filename:
test_files = pickle.load(filename)
li_train = []
for file in train_files:
df = pd.read_csv(file)
df = df.replace('?',np.NaN)
df[df.columns[3:]] = df[df.columns[3:]].apply(pd.to_numeric)
df.dropna(inplace=True)
li_train.append(df)
li_valid = []
for file in valid_files:
df = pd.read_csv(file)
df = df.replace('?',np.NaN)
df[df.columns[3:]] = df[df.columns[3:]].apply(pd.to_numeric)
df.dropna(inplace=True)
li_valid.append(df)
li_test = []
for file in test_files:
df = pd.read_csv(file)
df = df.replace('?',np.NaN)
df[df.columns[3:]] = df[df.columns[3:]].apply(pd.to_numeric)
df.dropna(inplace=True)
li_test.append(df)
###Output
_____no_output_____
###Markdown
Visualize Uncovered Mutants for each project
###Code
def compute_the_ratio_of_uncovered(df):
return df[df['numExecuted']>0].shape[0]/df.shape[0]
all_ratio_uncovered = []
for i in range(len(li)):
all_ratio_uncovered.append(compute_the_ratio_of_uncovered(li[i]))
train_ratio_uncovered = []
for i in range(len(li_train)):
train_ratio_uncovered.append(compute_the_ratio_of_uncovered(li_train[i]))
valid_ratio_uncovered = []
for i in range(len(li_valid)):
valid_ratio_uncovered.append(compute_the_ratio_of_uncovered(li_valid[i]))
test_ratio_uncovered = []
for i in range(len(li_test)):
test_ratio_uncovered.append(compute_the_ratio_of_uncovered(li_test[i]))
np.mean(all_ratio_uncovered)
np.mean(train_ratio_uncovered)
np.mean(valid_ratio_uncovered)
np.mean(test_ratio_uncovered)
n_bins = 10
from matplotlib.ticker import PercentFormatter
fig, ax = plt.subplots(figsize=(8, 4))
a = ax.hist(all_ratio_uncovered,cumulative=0, weights=np.ones(len(all_ratio_uncovered)) / len(all_ratio_uncovered), color='darkgrey', bins=n_bins)
ax.yaxis.set_major_formatter(PercentFormatter(1))
ax.set_xticks(np.arange(0.1,1,0.1))
ax.set_xlabel("The ratio of unreached mutants")
ax.set_xlim(0,1)
ax.set_ylim(0,0.21)
plt.show()
fig.savefig("AllU.pdf", bbox_inches='tight')
print(a)
fig, ax = plt.subplots(figsize=(8, 4))
a = ax.hist(train_ratio_uncovered,cumulative=0, weights=np.ones(len(train_ratio_uncovered)) / len(train_ratio_uncovered), color='darkgrey', bins=n_bins, label='Train')
ax.yaxis.set_major_formatter(PercentFormatter(1))
ax.set_xticks(np.arange(0.1,1,0.1))
ax.set_xlabel("The ratio of unreached mutants")
ax.set_xlim(0,1)
ax.set_ylim(0,0.21)
ax.legend(loc='upper left')
plt.show()
print(a)
fig.savefig("TrainU.pdf", bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 4))
a = ax.hist(valid_ratio_uncovered,cumulative=0, weights=np.ones(len(valid_ratio_uncovered)) / len(valid_ratio_uncovered), color='darkgrey', bins=n_bins, label='Valid')
ax.yaxis.set_major_formatter(PercentFormatter(1))
ax.set_xticks(np.arange(0.1,1,0.1))
ax.set_xlabel("The ratio of unreached mutants")
ax.set_xlim(0,1)
ax.set_ylim(0,0.21)
ax.legend(loc='upper left')
plt.show()
print(a)
fig.savefig("ValidU.pdf", bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 4))
a = ax.hist(test_ratio_uncovered,cumulative=0, weights=np.ones(len(test_ratio_uncovered)) / len(test_ratio_uncovered), color='darkgrey', bins=n_bins, label='Test')
ax.yaxis.set_major_formatter(PercentFormatter(1))
ax.set_xlabel("The ratio of unreached mutants")
ax.set_xticks(np.arange(0.1,1,0.1))
ax.set_xlim(0,1)
ax.set_ylim(0,0.21)
ax.legend(loc='upper left')
plt.show()
print(a)
fig.savefig("TestU.pdf", bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Investigate the whole data set
###Code
def compute_total_mutants(df):
return df.shape[0]
li_total_mutants = []
for i in range(len(li)):
li_total_mutants.append(compute_total_mutants(li[i]))
np.max(li_total_mutants), np.min(li_total_mutants), np.average(li_total_mutants), np.median(li_total_mutants)
np.quantile(li_total_mutants, q=1), np.quantile(li_total_mutants, q=0.75), np.quantile(li_total_mutants, q=0.5), np.quantile(li_total_mutants, q=0.25),np.quantile(li_total_mutants, q=0),np.average(li_total_mutants)
def compute_ratio_killed_mutants(df):
return df.Detected.sum() / df.shape[0]
li_ratio_killed_mutants = []
for i in range(len(li)):
li_ratio_killed_mutants.append(compute_ratio_killed_mutants(li[i]))
len(li_ratio_killed_mutants)
fig, ax = plt.subplots(figsize=(8, 4))
a = ax.hist(li_ratio_killed_mutants,cumulative=0, weights=np.ones(len(li_ratio_killed_mutants)) / len(li_ratio_killed_mutants), color='darkgrey', bins=10)
ax.yaxis.set_major_formatter(PercentFormatter(1))
ax.set_xticks(np.arange(0.1,1,0.1))
ax.set_xlabel("The ratio of killed mutants")
ax.set_xlim(0,1)
#ax.set_ylim(0,0.21)
plt.show()
fig.savefig("KilledR.pdf", bbox_inches='tight')
print(a)
np.quantile(li_ratio_killed_mutants, q=1), np.quantile(li_ratio_killed_mutants, q=0.75), np.quantile(li_ratio_killed_mutants, q=0.5), np.quantile(li_ratio_killed_mutants, q=0.25),np.quantile(li_ratio_killed_mutants, q=0),np.average(li_ratio_killed_mutants)
def compute_test_cover(df):
return df.numTestCover.max()
li_test_cover = []
for i in range(len(li)):
li_test_cover.append(compute_test_cover(li[i]))
np.quantile(li_test_cover, q=1), np.quantile(li_test_cover, q=0.75), np.quantile(li_test_cover, q=0.5), np.quantile(li_test_cover, q=0.25),np.quantile(li_test_cover, q=0),np.average(li_test_cover)
###Output
_____no_output_____
###Markdown
Group by age and generation. Join with replicate weights and calculate standard errors.
###Code
import pandas as pd
from math import sqrt
from IPython.display import display, HTML
###Output
_____no_output_____
###Markdown
Set options
###Code
pd.options.display.float_format = '{:,.0f}'.format
###Output
_____no_output_____
###Markdown
Function for building table of estimates and standard errors
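For reference, the quantity computed below follows the replicate-weight variance formula referenced in the code (Formula (16) of the SIPP Source and Accuracy Statement, with what appears to be a Fay factor of 0.5): with $\hat{\theta}$ the full-sample weighted total and $\hat{\theta}_r$ the total under replicate weight $r$,

$$\hat{V}(\hat{\theta}) = \frac{1}{240\,(0.5)^2} \sum_{r=1}^{240} \left(\hat{\theta}_r - \hat{\theta}\right)^2, \qquad \mathrm{SE} = \sqrt{\hat{V}(\hat{\theta})}$$

and the reported confidence interval half-width is $1.645\,\mathrm{SE}$ (a 90% interval).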
###Code
def calc_estimates(group):
# Constant from Formula (16) in the SIPP Source and Accuracy Statement
    # 240 = number of replicate weight columns
const = (1 / (240 * (0.5**2)))
cweight = group['wpfinwgt'].sum()
# get only the rep columns
cols = [ each for each in group.columns if each[0:5] == "repwt"]
# sum each colummn
sums = group[cols].apply(lambda col: col.sum())
res = sums.apply( lambda each: (each - cweight)**2 )
var = const * res.sum()
stder = sqrt(var)
conf = stder * 1.645
return pd.Series([cweight, conf, stder])
###Output
_____no_output_____
###Markdown
Function for grouping by age
###Code
def group_age(joined, variable):
df = pd.DataFrame(
joined.groupby(variable).apply(calc_estimates)
).reset_index().rename( columns = {
0: "estimate",
1: "conf_interval",
2: "standard_error",
})
return df
###Output
_____no_output_____
###Markdown
Function for grouping and joining for both variables
###Code
# takes a df of unique supporters and the replicates file
# returns a dataframe containing age group estimates and standard errors
def group_and_join(supports, replicates):
joined = (
supports
.merge(
replicates,
on = "uid",
how = "left"
)
)
ages = group_age(joined, 'age_group')
generations = group_age(joined, 'generation')
return ages, generations
###Output
_____no_output_____
###Markdown
Run for each in Wave 1
###Code
wave1_files = ["../output/w1_supports_children.csv", "../output/w1_supports_parents.csv"]
wave1_replicates = pd.read_csv("../output/w1_replicates.csv")
for each in wave1_files:
print(each.split("/")[-1])
supports = pd.read_csv(each, dtype = {"uid": "object"})
dfs = group_and_join(supports, wave1_replicates)
[display(df) for df in dfs]
###Output
w1_supports_children.csv
###Markdown
Run for each in Wave 4
###Code
wave4_files = ["../output/w4_supports_children.csv", "../output/w4_supports_parents.csv"]
wave4_replicates = pd.read_csv("../output/w4_replicates.csv")
for each in wave4_files:
print(each.split("/")[-1])
supports = pd.read_csv(each, dtype = {"uid": "object"})
dfs = group_and_join(supports, wave4_replicates)
[display(df) for df in dfs]
###Output
w4_supports_children.csv
###Markdown
Hypothesis test

Testing this statement: "In 2016, more Baby Boomers provided for children outside the home than Millennials provided for parents living outside the home."
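The test implemented below uses the standard error of a difference of independent estimates,

$$\mathrm{SE}_{\mathrm{diff}} = \sqrt{\mathrm{SE}_a^{2} + \mathrm{SE}_b^{2}},$$

and treats the difference as significant at the 90% level when $|\hat{\theta}_a - \hat{\theta}_b| > 1.645\,\mathrm{SE}_{\mathrm{diff}}$.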
###Code
w4_parents_main = pd.read_csv(
"../output/w4_supports_parents.csv",
dtype = {"uid": "object"}
)
w4_children_main = pd.read_csv(
"../output/w4_supports_children.csv",
dtype = {"uid": "object"}
)
w4_par = group_and_join(w4_parents_main, wave4_replicates)[1]
w4_chi = group_and_join(w4_children_main, wave4_replicates)[1]
w4_par
w4_chi
boomer = w4_chi.loc[ lambda x: x['generation'] == "Boomer"]
millennial = w4_par.loc[ lambda x: x['generation'] == "Millennials"]
# calculate standard error of a difference
def get_sdiff(a,b):
sdiff = sqrt(a**2 + b**2)
return sdiff
# test whether the difference between two estimates is significant at the 90% level
def test(a, b):
    sdiff = get_sdiff(a['standard_error'].array[0], b['standard_error'].array[0])
    diff = a["estimate"].array[0] - b["estimate"].array[0]
    # significant if the difference falls outside +/- 1.645 standard errors of the difference
    return not ((-1.645 * sdiff) < diff < (1.645 * sdiff))
test(boomer, millennial)
###Output
_____no_output_____
###Markdown
Analysis

In this notebook, we analyze the data that we curated and the results of our experiments. The notebook is organized around the three types of data in the project:

1. Feature data. _This data is curated from both the Universal Dependencies and the UniMorph projects. BERT gives us a distribution over its vocab for the masked out word. In order to know how correct BERT was, we need to know the feature values of those words._
2. Cloze data. _This data comes from the Universal Dependencies project. We mask out a word from these sentences and ask BERT to predict the missing word._
3. Experimental results. _We ask BERT to predict the masked out word and compute whether, on average, BERT assigned a higher probability to words with the correct feature values than words with incorrect feature values. This data holds that information as well as the number of distractors in the sentence and the length between the target and control._ (A rough sketch of this comparison follows below.)
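As a sketch of the comparison described in item 3 (names like `probs`, `correct_ids`, and `incorrect_ids` are placeholders for illustration, not the project's actual variables; the real pipeline lives in `../src`):

```python
import numpy as np

def prefers_correct(probs, correct_ids, incorrect_ids):
    """Given BERT's probability vector over the vocab for the masked slot,
    return True if the mean probability of feature-correct words exceeds
    the mean probability of feature-incorrect words."""
    return probs[correct_ids].mean() > probs[incorrect_ids].mean()

# toy example: a 5-word vocab where ids 0 and 1 carry the correct feature values
probs = np.array([0.4, 0.2, 0.1, 0.2, 0.1])
print(prefers_correct(probs, [0, 1], [2, 3, 4]))  # True
```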
###Code
%matplotlib inline
import glob
import os
import sys
sys.path.insert(0, '../src')
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("paper")
sns.set_style("whitegrid")
sns.set(font='serif')
sns.set_style("white", {"font.family": "serif","font.serif": ["Times", "Palatino", "serif"]})
from bert import BERT
from constants import LANGUAGES
from experiment import ENGLISH_MODEL, MULTILINGUAL_MODEL
CODE_TO_LANGUAGE = {code: lg for lg, code in LANGUAGES.items()}
FEATURES_FNAMES = glob.glob('../data/features/*.csv')
CLOZE_FNAMES = glob.glob('../data/cloze/*.csv')
EXPERIMENT_FNAMES = glob.glob('../data/experiments/*.csv')
BLUE = sns.color_palette()[0]
###Output
Better speed can be achieved with apex installed from https://www.github.com/nvidia/apex.
###Markdown
1. Features

I want to answer the following questions:

1. How many languages did we curate feature data for?
2. How many feature values did we get per language?
3. For each language, how big is the intersection between the feature data we have and the words in BERT's vocab?
4. We allowed for the possibility of a word having no value for a particular feature. How dense are our feature values? I.e. how many words have no value? This is restricted to the words that are in BERT's vocab.
5. Are particular features more sparse than others?

Here are the answers:

1. We curated feature data for 34 languages. The languages are: Afrikaans, Arabic, Armenian, Basque, Breton, Catalan, Croatian, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Irish, Italian, Latin, Norwegian-Nynorsk, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Telugu, Turkish, Ukrainian and Urdu.
2. We got between 182 (Telugu) and 1,212,893 (Finnish) feature values per language. The full list of counts is below. The median number per language is 23,774 and the average is 98,783. Finnish has three times as many feature values as the second most well-represented language (Hungarian), which drags the average up so much.
3. For English, we have 6,743 word forms with feature values out of BERT's vocab of 28,996 word forms. This is pretty good coverage considering that we restricted feature values to only nouns and pronouns whereas BERT's vocab has all POS. For the multilingual model, we have a vocab of 119,547 word forms. German has the most word forms with feature values at 4,790. This seems pretty good coverage as well, considering that 104 languages are vying for a shared vocab and we're only looking at nouns and pronouns here. French and Spanish are the next two most well-represented, each with over 3,000 word forms. Telugu has the worst representation, with only a single word form with a feature value in the multilingual data. Tamil is next with 67, then Breton with 157, and Basque, Irish, Armenian and Greek all have between 200-300 word forms. In this experimental setup we're making an assumption that words with identical POS and these four feature values are always grammatical replacements for one another. We don't take into account selection restrictions, either syntactic or semantic. To make this assumption as palatable as possible, we want to have a decent number of words with feature values. **We clearly need to drop Telugu from the analysis.** Note that this answer is counting unique word _forms_, so words with multiple feature bundles are only counted once.
4. I started answering this question (and the code is still in place to continue answering it) but I realized that we don't care about the density of the individual features but rather the number of correct/incorrect words for each cloze example. Knowing that the feature data for a language has many singular words isn't what I want to know, because perhaps there are many singular words but all of them are masculine and my cloze example requires a singular feminine word. What we want to know is, for each cloze example, how many correct and incorrect words (that I have features on) did BERT have the opportunity to predict. If I only have data on three singular feminine words but many many singular masculine words in a particular language and a cloze example expected the feminine form, it could be unfair to include that example. Such an approach will also handle the Telugu case and allow me to set a consistent threshold across languages.
5. This has a similar response to the above.

Note that we restricted the feature data to only include nouns and pronouns, because these are the only POS that we are predicting in these experiments. That is the case because we decided to always predict the controller of the agreement, not the target.
###Code
# Read in the feature data
features = []
for fname in FEATURES_FNAMES:
fts = pd.read_csv(fname, index_col=0, dtype={'person': str})
code = fname[-7:-4]
language = CODE_TO_LANGUAGE[code]
fts['language'] = language
features.append(fts)
features = pd.concat(features, sort=False)
features = features[features['pos'].isin(['NOUN', 'PRON'])]
features.reset_index(inplace=True)
features.head()
# 1. How many languages did we curate feature data for?
print(len(features['language'].unique()))
', '.join(sorted(features['language'].unique()))
# 2. How many feature values did we get per language?
features['language'].value_counts()
features['language'].value_counts().describe().apply(lambda n: format(n, 'f'))
# Visualize the number of feature data points per language
def countplot(series, figsize=(14, 10)):
"""Helper function to visualize a pd.Series as a countplot."""
plt.figure(figsize=figsize)
order = series.value_counts().index
plot = sns.countplot(series, color=BLUE, order=order)
plot.set_xticklabels(plot.get_xticklabels(), rotation=45, horizontalalignment='right');
countplot(features['language'])
# 3. For each language, how big is the intersection between the feature data we have and the words in BERT's vocab?
# We first compute this for the single language English model.
bert = BERT(ENGLISH_MODEL)
# although we use the cased models, we lowercase the vocab in the experiment to increase our vocab coverage. This
# assumes that the feature values of a word do not change depending on its casing, which is an assumption I'm pretty
# comfortable making.
english_vocab = [word.lower() for word in bert.vocab]
print("Size of English BERT's vocab: ", len(english_vocab))
english_features = features[features['language'] == 'English'].copy()
english_features['in_vocab'] = english_features['word'].isin(english_vocab)
counts = english_features.drop_duplicates(subset=['word'])['in_vocab'].value_counts()
print("Size of English BERT's vocab with feature values:", counts[True])
# Now we compute this for all other languages with the multilingual model.
bert = BERT(MULTILINGUAL_MODEL)
multilingual_vocab = [word.lower() for word in bert.vocab]
print("Size of multilingual BERT's vocab: ", len(multilingual_vocab))
multilingual_features = features[features['language'] != 'English'].copy()
multilingual_features['in_vocab'] = multilingual_features['word'].isin(multilingual_vocab)
counts = (multilingual_features.drop_duplicates(subset=['word', 'language'])
.groupby('language')['in_vocab']
.value_counts()
.to_frame(name='count')
.reset_index())
counts = counts[counts['in_vocab']]
counts.sort_values(by='count', ascending=False).head()
# Visualize the size of overlap between feature data and BERT's vocab
def barplot(df, x, y, figsize=(14, 10)):
"""Helper function to plot a barplot"""
plt.figure(figsize=figsize)
order = df.sort_values(by=y, ascending=False)[x]
plot = sns.barplot(x=x, y=y, data=df, color=BLUE, order=order);
plot.set_xticklabels(plot.get_xticklabels(), rotation=45, horizontalalignment='right');
return plot
# Add in English data
if "English" not in list(counts["language"]):
counts = counts.append({'language': 'English', 'in_vocab': True, 'count': 6743}, ignore_index=True)
plot = barplot(counts, 'language', 'count');
fontsize = 30
plt.xlabel("Language", fontsize=fontsize)
plt.ylabel("Number of feature bundles", fontsize=fontsize)
plt.tick_params(labelsize=14)
counts.sort_values(by='count', ascending=False)
# 4. We allowed for the possibility of a word having no value for a particular feature. How dense are our
# feature values? I.e. How many words have no value? This is restricted to the words that are in BERT's vocab.
in_vocab_features = pd.concat([english_features, multilingual_features], ignore_index=True, sort=False)
in_vocab_features = in_vocab_features[in_vocab_features['in_vocab']]
# hard-code order of feature values
order = {'number': ['Sing', 'Plur', 'NO VALUE'],
'gender': ['Fem', 'Masc', 'Neut', 'NO VALUE'],
'person': ['1', '2', '3', 'NO VALUE'],
'case': ['Nom', 'Acc', 'Abs', 'Erg', 'NO VALUE']}
def plot_feature_density(df, feature):
"""Helper function for plotting the density of feature values in `df`."""
df.sort_values(by='language', inplace=True) # order by language alphabetically
values = order[feature]
g = sns.FacetGrid(data=df, col='language', col_wrap=4, sharey=False)
g = g.map(sns.countplot, feature, order=values)
for ax in g.axes.flat:
plt.setp(ax.get_xticklabels(), visible=True);
# We first compute the density of the number feature
# Armenian only has ~25 plural words, Basque ~15, Breton ~15, Hungarian ~20, Persian ~30, Tamil 8, Telegu 1,
# Turkish 33
plot_feature_density(in_vocab_features, 'number')
###Output
_____no_output_____
###Markdown
We could continue plotting the feature densities for the remaining three features, but it's at this point that I realized that we don't want to know the density of individual features in the dataset. We're using an "all-or-none" approach in which a word is only correct if all four feature values are correct. So what we care about is the number of correct and incorrect words for each cloze example. If an example doesn't have many possible correct words, then we might consider dropping it.

Cloze

I want to answer the following questions:

1. How many cloze examples do we have in total?
2. How many languages do we have cloze examples for?
3. How many examples did we get per language?
4. What is the distribution of types i) in the entire dataset, and ii) per language?
5. What is the distribution of POS i) in the entire dataset, and ii) per language?
6. For each cloze example, how many correct and incorrect words with features do we have?
7. What is the distribution of intervening nouns and the number of distractors per language?
8. What is the distribution of the lengths of the sentences (including the "mask" token)?

Here are the answers:

1. We have 2,089,557 cloze examples in total.
2. We have 33 languages with cloze examples. They are: Afrikaans, Arabic, Armenian, Basque, Breton, Catalan, Croatian, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Irish, Italian, Latin, Norwegian-Nynorsk, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Swedish, Tamil, Turkish, Ukrainian and Urdu. We had 34 languages with features, and the missing language is Telugu.
3. We got between 637,428 for German and 193 for Hungarian. The average was 63,319 and the median 22,959. German really drags up the average, as the next most well-represented languages are Czech (270,034), Spanish (176,138), Russian (144,458), Italian (137,268) and French (117,769), which all have over 100,000. Tamil (909), Breton (263) and Hungarian (193) all have less than 1,000 examples. **We may want to drop Tamil, Breton and Hungarian.**
4. We have 774,038 examples of modifying adjectives, 771,806 of determiners, 494,205 of nouns and 49,508 of predicated adjectives. There's a nice table below showing the counts per type for each language, which I think is the best way to display this information. The plots are not as helpful in my opinion.
5. We have 1,934,950 nouns and 154,607 pronouns in our data, so overwhelmingly nouns. A similar table is shown below.
6. The average number of correct words was 572 and the average number of incorrect words was 6,412. The median numbers were 418 and 15,783. Some cloze examples have 1 or only a handful of correct examples, in which case **we should not count these examples.**
7. The answer is below but I'm not sure I see any immediate relevance.
8. Again, the answer is below but I don't think it's immediately relevant.
###Code
# Read in the cloze data
cloze = []
for fname in CLOZE_FNAMES:
cl = pd.read_csv(fname, index_col=0, dtype={'person': str, 'num_distractors': int})
code = fname[-7:-4]
language = CODE_TO_LANGUAGE[code]
cl['language'] = language
cloze.append(cl)
cloze = pd.concat(cloze, sort=False)
cloze.reset_index(inplace=True)
cloze.head()
# 1. How many cloze examples do we have in total?
len(cloze)
# 2. How many languages do we have cloze examples for?
print(len(cloze['language'].unique()))
', '.join(sorted(cloze['language'].unique()))
# Which language(s) are in the feature data but not in the cloze data?
set(features['language'].unique()).difference(set(cloze['language'].unique()))
# 3. How many examples did we get per language?
cloze['language'].value_counts()
cloze['language'].value_counts().describe()
# Visualize the number of cloze examples per language
countplot(cloze['language']);
# 4. What is the distribution of types i) in the entire dataset, and ii) per language?
# We start with the entire dataset
to_exclude = ["German", "Czech", "Spanish", "Arabic", "Tamil", "Breton", "Hungarian"]
tmp = cloze[~cloze["language"].isin(to_exclude)]
tmp['type'].value_counts()
tmp.shape
countplot(cloze['type'], figsize=(8, 6));
# Now we split by language
types = cloze.groupby('language')['type'].value_counts().to_frame('count').reset_index()
types.head()
pivoted_types = types.pivot(index='language', columns='type', values='count')
pivoted_types.fillna('-', inplace=True)
pivoted_types.head()
# 5. What is the distribution of POS i) in the entire dataset, and ii) per language?
# We start with the entire dataset
cloze['pos'].value_counts()
# Now we split by language
pos = cloze.groupby('language')['pos'].value_counts().to_frame('count').reset_index()
pivoted_pos = pos.pivot(index='language', columns='pos', values='count')
pivoted_pos.fillna('-', inplace=True)
pivoted_pos['NOUN'] = pivoted_pos['NOUN'].astype(int)
pivoted_pos.head()
# 6. For each cloze example, how many correct and incorrect words with features do we have?
group_cols = ['language', 'pos']
feature_cols = ['number', 'gender', 'case', 'person']
types = cloze.drop_duplicates(subset=group_cols+feature_cols)
types = types.merge(in_vocab_features, how='left', on=group_cols, suffixes=('', '_'))
vocab_feature_cols = [f + '_' for f in feature_cols]
vocab_features = types[vocab_feature_cols].copy()
vocab_features.columns = feature_cols
types['correct'] = (types[feature_cols] == vocab_features).all(axis=1)
counts = (types.groupby(group_cols+feature_cols)['correct']
.value_counts()
.to_frame('count')
.reset_index())
correct = cloze.merge(counts, how='left', on=group_cols+feature_cols)
correct.groupby('correct')['count'].describe()
# This is a second way of calculating the numbers of correct and incorrect words with features
# I cannot think of a better name than "numbers" at the moment :S
group_cols = ['language', 'pos']
feature_cols = ['number', 'gender', 'case', 'person']
num_correct = in_vocab_features.groupby(group_cols+feature_cols).size().to_frame('num_correct').reset_index()
num_total = in_vocab_features.groupby(group_cols).size().to_frame('num_total')
numbers = num_correct.merge(num_total, on=group_cols, how='left')
numbers['num_incorrect'] = numbers['num_total'] - numbers['num_correct']
numbers.head()
numbers['num_total'].describe()
# 7. What is the distribution of intervening nouns and the number of distractors per language?
# We first start with the number of distractors
distractors = cloze.groupby('language')['num_distractors'].value_counts().to_frame('count').reset_index()
distractors.head()
# Now we calculate the intervening nouns
intervening = cloze.groupby('language')['intervening_noun'].value_counts().to_frame('count').reset_index()
intervening.head()
# 8. What is the distribution of the lengths of the sentences (including the "mask" token)?
cloze['length'] = cloze['masked'].str.len()
cloze.groupby('language')['length'].describe()
###Output
_____no_output_____
###Markdown
Experiments

I want to answer the following questions:

1. How many languages do we have results on so far?
2. How many cloze examples did we have to skip per language?
3. At the highest level, how well does BERT do on this task? What about when we restrict to cloze examples with enough correct/incorrect words?
4. How well does BERT do on each type of agreement?
5. How well does BERT do on each language?
6. What is the distribution of distance i) per type and ii) per language?
7. What is the distribution of number of distractors i) per type and ii) per language?
8. What is the relationship between BERT's performance and distance/number of distractors?

Here are the answers:

1. So far we have results from 29 languages. The missing languages are German, Czech, Spanish and Arabic. We went in reverse order of size of cloze dataset, so we should have got Arabic results because we have results from languages with more cloze examples. This suggests that there was some error that got handled that skipped Arabic. **I should look into this and re-run Arabic.**
2. We most likely skipped cloze examples because we couldn't find the mask token in the masked sentence, which is a bug in the creation of the cloze data. We skipped 4,170 French examples, which is not too troubling because we have over 100,000 French examples, but it would be nice to have them back. We only lost a handful of examples from a handful of other languages, so I'm not worried about this.
3. BERT got 93.5% of the cloze examples correct (without regard to number of correct/incorrect examples). When we restrict the analysis to the examples that meet some threshold for number of correct/incorrect examples, the number is basically the same: 94.2%.
4. BERT does well on all types of agreement, with all types scoring above 90%. It performs best on determiners and modifying adjectives, and worst on predicated adjectives and verbs. This makes sense, as there are normally larger distances between the target and controller in the latter two types than in the first two. When we restrict by the threshold the results do not change.
5. The plots below show how well BERT does on each language for the dataset I curated. Overall, it looks like BERT performs well for the majority of languages, but there are some languages that fare much worse than others. For example, Breton and Hungarian have less than 50% accuracy (although we said we'd drop these languages because they have too few examples). The problem with looking at these results is that even though the datasets were curated using the same methodology, the datasets will be different and not comparable. Perhaps the examples that I have for Breton are just much harder than they are for Portuguese.
6. Most (75%) cloze examples have at most three tokens between the target and controller. This is consistent across types and languages, although the slicing by language is messy because there are so many languages. The English plot is a good example, and Turkish shows the same overall shape. As you would expect, the average distance is higher for predicated adjectives and verbs than for determiners and modifying adjectives.
7. Most (75%) have fewer than four distractors.
8. The plot below shows that, aggregating across all languages and types, as the distance between the target and controller increases, BERT's performance decreases slightly. It's important to note that this decrease is not that big. The drop-off for number of distractors starts later but drops more rapidly once it starts. One way to read this is that BERT is robust to a few distractors but still gets confused when there are too many distractors. When we split by type, it's clear that modifying adjectives drop off much more than the other types, and verbs to a lesser degree. Overall, the results suggest that BERT is pretty robust to distance and distractors across all languages and types, but that modifying adjectives and verbs perform worst.

Note that I have dropped three languages (Tamil, Breton and Hungarian) for having fewer than 1,000 cloze examples, and these results come from restricting to cloze examples with enough data.
###Code
# Read in the experimental data
experiments = []
for fname in EXPERIMENT_FNAMES:
ex = pd.read_csv(fname, index_col=0, dtype={'person': str, 'num_distractors': int})
code = fname[-7:-4]
language = CODE_TO_LANGUAGE[code]
ex['language'] = language
experiments.append(ex)
experiments = pd.concat(experiments, sort=False)
experiments.reset_index(inplace=True)
experiments.replace({"type": {"predicated": "predicate", "modifying": "attributive", "verb": "subject-verb"}}, inplace=True)
experiments.head(2)
# 1. How many languages do we have results on so far?
print(len(experiments['language'].unique()))
missing = set(cloze['language'].unique()).difference(experiments['language'].unique())
missing
# 2. How many cloze examples did we have to skip per language?
num_cloze = cloze['language'].value_counts()
num_experiments = experiments['language'].value_counts()
difference = (num_cloze - num_experiments).to_frame('count')
difference[difference['count'] > 0]
# Here we drop Tamil, Breton and Hungarian for having fewer than 1,000 cloze examples.
lgs_to_drop = ['Tamil', 'Breton', 'Hungarian']
experiments = experiments[~experiments['language'].isin(lgs_to_drop)]
# 3. At the highest level, how well does BERT do on this task?
# We can simply ask for the total accuracy of BERT on the cloze task. This will give us a single number that
# summarizes BERT's performance
experiments['right'].value_counts(normalize=True)
# Now we restrict this analysis to cloze examples in which we had above some threshold of correct/incorrect answers.
# The threshold has been set at 10 words each. This will almost surely be too high for all pronouns, so perhaps I'll
# need to revisit this threshold mechanism.
threshold_correct = 10
threshold_incorrect = 10
enough_correct = (numbers['num_correct'] > threshold_correct)
enough_incorrect = (numbers['num_incorrect'] > threshold_incorrect)
numbers['above_threshold'] = enough_correct & enough_incorrect
subset = numbers[group_cols+feature_cols+['above_threshold']]
experiments = experiments.merge(subset, on=group_cols+feature_cols)
# Accuracy when restricted to examples with enough data
above_threshold = experiments[experiments['above_threshold']]
above_threshold['right'].value_counts(normalize=True)
# Here I filter out all cloze examples that don't have enough data
experiments = experiments[experiments['above_threshold']]
# 4. How well does BERT do on each type of agreement?
results_by_type = (experiments.groupby('type')['right']
.value_counts(normalize=True)
.to_frame('proportion')
.reset_index())
results_by_type
# Quick hack
def horizontal_barplot(df, x, y, figsize=(14, 10), fontsize=20):
"""Helper function to plot a barplot"""
plt.figure(figsize=figsize)
order = df.sort_values(by=y, ascending=False)[x]
plot = sns.barplot(x=y, y=x, data=df, color=BLUE, order=order);
ticks = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
plt.xticks(ticks, ticks)
plt.tick_params(labelsize=fontsize)
plt.xlabel("Accuracy", fontsize=fontsize);
return plot
# Quicker hack
def errorbar_barplot(df, x, y, figsize=(14, 10), fontsize=20):
results = experiments.groupby(x)["right"].value_counts(normalize=True).to_frame('proportion').reset_index()
order = list(results[results["right"]].sort_values(by="proportion", ascending=False)[x])
plt.figure(figsize=figsize)
plot = sns.barplot(x=y, y=x, data=df, color=BLUE, order=order);
ticks = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
plt.xticks(ticks, ticks)
plt.tick_params(labelsize=fontsize)
plt.xlabel("Accuracy", fontsize=fontsize);
return plot
plot = errorbar_barplot(experiments, "type", "right", figsize=(8, 6))
fontsize = 20
plt.ylabel("Agreement type", fontsize=fontsize);
#plot = horizontal_barplot(results_by_type[results_by_type['right']], 'type', 'proportion', figsize=(8,6))
# Now we restrict to those above the threshold
results_by_type = (experiments.groupby('type')['right']
.value_counts(normalize=True)
.to_frame('proportion')
.reset_index())
results_by_type
experiments.replace({"language": {"Norwegian-Nynorsk": "Norwegian"}}, inplace=True)
plot = errorbar_barplot(experiments, "language", "right", figsize=(14,20))
fontsize = 30
plt.ylabel("Language", fontsize=fontsize);
# 5. How well does BERT do on each language?
# There are so many languages that it's not as useful to look at the numbers themselves, so a plot it show instead
results_by_lg = (experiments.groupby('language')['right']
.value_counts(normalize=True)
.to_frame('proportion')
.reset_index())
results_by_lg.replace({"language": {"Norwegian-Nynorsk": "Norwegian"}}, inplace=True)
fs = 30
plot = horizontal_barplot(results_by_lg[results_by_lg['right']], 'language', 'proportion', figsize=(14,20), fontsize=fs)
plt.ylabel("Language", fontsize=fs);
# 6. What is the distribution of distance i) per type and ii) per language?
experiments['distance'].describe()
# use the renamed agreement-type labels (from the replace() call above) so no bars are empty
sns.barplot(x='type', y='distance', data=experiments,
            order=['determiner', 'attributive', 'subject-verb', 'predicate'], color=BLUE);
# Let's look at the distribution of distance for one language
def display_for(language, variable, high=10):
"""Convenience function for plotting distribution for one language."""
lg = (experiments['language'] == language)
hi = (experiments[variable] < high)
low = (experiments[variable] > 0)
subset = experiments[ lg & hi & low]
sns.countplot(x=variable, data=subset, color=BLUE);
display_for('English', 'distance')
# Similar pattern for Turkish
display_for('Turkish', 'distance')
# 7. What is the distribution of number of distractors i) per type and ii) per language?
experiments['num_distractors'].describe()
# again use the renamed agreement-type labels so no bars are empty
sns.barplot(x='type', y='num_distractors', data=experiments,
            order=['determiner', 'attributive', 'subject-verb', 'predicate'], color=BLUE);
display_for('English', 'num_distractors', high=20)
display_for('Turkish', 'num_distractors', high=20)
# Recode the 'right' column as 1's and 0's instead of True and False
experiments['right'] = experiments['right'].astype(int)
# 8. What is the relationship between BERT's performance and distance/number of distractors?
def plot_results(variable, high=10, figsize=(14, 10), fontsize=20):
hi = (experiments[variable] <= high)
low = (experiments[variable] > 0)
df = experiments[hi & low].copy()
df[variable] = df[variable].astype("category")
plt.figure(figsize=figsize)
order = range(1, high+1)
plot = sns.barplot(x="right", y=variable, data=df, color=BLUE, order=order);
ticks = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
plt.xticks(ticks, ticks)
plt.xlabel("Accuracy", fontsize=fontsize)
plt.tick_params(labelsize=14)
return plot
plot = plot_results('distance', high=10, figsize=(14,16))
plt.ylabel("Distance", fontsize=fontsize);
plot = plot_results('num_distractors', high=15, figsize=(14,16))
plt.ylabel("Number of distactors", fontsize=fontsize);
# Now we want to look at the relationship between performance and distance/distractors by type and by language
# We start with segmenting by type
def plot_results_by(by, variable, high=15, figsize=(14, 10)):
hi = (experiments[variable] <= high)
low = (experiments[variable] > 0)
df = experiments[hi & low]
order = range(1, high+1)
col_order = sorted(experiments[by].unique())
plot = sns.FacetGrid(data=df, col=by, col_wrap=4, col_order=col_order)
plot = plot.map(sns.barplot, variable, 'right', color=BLUE, order=order);
plot_results_by('type', 'distance')
plot_results_by('type', 'num_distractors')
# Now we split by language
plot_results_by('language', 'distance')
plot_results_by('language', 'num_distractors')
###Output
_____no_output_____
###Markdown
Issues:

- [ ] inflated unit counts (e.g. parkmerced) due to double counting or counting non-residential units
- [x] properties in some assessor roll years and not others
  - **solution**: take record from closest year (sketched below), only consider evictions >= 2007
- [x] properties with multiple rent control eligibility use codes
  - **solution**: count how many of these there are, take the max use code
- [ ] zero-unit buildings in new construction
  - **solution**: year_built 0
- [ ] properties with multiple year-built's
  - **solution**: take the max year built, which will give a conservative estimate w/r/t rent control
- [x] year-built = 0
  - **solution**: year_built > 1800
- [ ] condo conversion or parcel splits after eviction but before earliest assessor record
  - SRES --> MRES would overcount MRES evictions
  - MRES --> SRES would undercount MRES evictions
  - **solution**: only count evictions after 2007
  - many of these are the "0000 0000 000000000" values in assessor rolls
- [ ] rent controlled properties more likely to be in a state of disrepair and therefore require demolition/capital improvements
  - **solution**: fit hedonic and control for stddev above/below predicted value
- [ ] petition applies to multiple units
  - **solution**: use new eviction .csv with unit counts
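A minimal, self-contained sketch of the "take record from closest year" idea from the list above (toy frames with hypothetical column names only; the real matching runs on the eviction and assessor tables loaded below):

```python
import pandas as pd

# toy stand-ins for the eviction and assessor tables
ev_toy = pd.DataFrame({'ev_id': [1, 2], 'year': [2009, 2012]})
asr_toy = pd.DataFrame({'ev_id': [1, 1, 2, 2], 'asr_yr': [2007, 2010, 2011, 2014]})

# for each eviction, keep the assessor record whose roll year is closest to the petition year
pairs = ev_toy.merge(asr_toy, on='ev_id')
pairs['yr_gap'] = (pairs['asr_yr'] - pairs['year']).abs()
closest = pairs.sort_values('yr_gap').groupby('ev_id', as_index=False).first()
closest  # keeps asr_yr 2010 for eviction 1 and asr_yr 2011 for eviction 2
```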
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load assessor universe
###Code
# asr_all = pd.read_csv('../data/assessor_2007-2018_clean_w_none_sttyps.csv')
asr = pd.read_csv('../data/asr_grouped_by_yr.csv')
asr['any_ev'] = (asr['ev_count'] > 0).astype(int)
asr['any_ev_07'] = (asr['ev_count_post_07'] > 0).astype(int)
asr['pre_1980'] = (asr['year_built_max'] < 1980)
asr['built_1980'] = None
asr.loc[asr['pre_1980'], 'built_1980'] = 'before'
asr.loc[~asr['pre_1980'], 'built_1980'] = 'after'
asr['ev_per_unit'] = asr['ev_count'] / asr['total_units']
asr['ev_per_unit_since_07'] = asr['ev_count_post_07'] / asr['total_units']
asr
asr.columns
###Output
_____no_output_____
###Markdown
Load eviction data
###Code
ev = pd.read_csv('../data/ev_matched.csv')
###Output
/Users/max/anaconda3/envs/evictions/lib/python3.7/site-packages/IPython/core/interactiveshell.py:2714: DtypeWarning: Columns (4) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Eviction type counts by built year

Retain only evictions since 2007
###Code
ev = ev[ev['year'] >= 2007]
###Output
_____no_output_____
###Markdown
% evictions matched to assessor records
###Code
len(ev[~pd.isnull(ev['asr_index'])]) / len(ev)
ev = ev.merge(asr, left_on='asr_index', right_on='index', suffixes=('_ev', '_asr'))
ev = ev[ev['any_rc_eligibility'] == 1]
ev.loc[pd.isnull(ev['type']), 'type'] = 'unknown'
type_counts = ev.groupby(['built_1980', 'type']).agg(count=('index_ev', 'nunique')).reset_index()
pre_sums = type_counts.groupby('built_1980')['count'].sum()
type_counts = type_counts.pivot(index='type', columns='built_1980', values='count')
type_counts['pct_after'] = type_counts['after'] / pre_sums['after']
type_counts['pct_before'] = type_counts['before'] / pre_sums['before']
###Output
_____no_output_____
###Markdown
8x the rate of OMIs, but this is probably due to structural differences
###Code
type_counts.sort_values('pct_before', ascending=False)
ev['ev_type_cat'] = 'breach of lease'
ev.loc[ev['type'].isin([
'OMI', 'Capital Improvement', 'ELLIS', 'Condo Conversion', 'Substantial Rehabilitation',
'Lead Remediation', 'Good Samaritan Tenancy Ends',
'Development Agreement', 'Demolition']), 'ev_type_cat'] = 'no fault'
ev.loc[ev['type'].isin(['unknown', 'Other']), 'ev_type_cat'] = 'unknown/Other'
cat_counts = ev.groupby(['built_1980', 'ev_type_cat']).agg(count=('index_ev', 'nunique')).reset_index()
cat_counts
cat_counts = cat_counts.pivot(index='ev_type_cat', columns='built_1980', values='count')
cat_counts['pct_after'] = cat_counts['after'] / pre_sums['after']
cat_counts['pct_before'] = cat_counts['before'] / pre_sums['before']
cat_counts
###Output
_____no_output_____
###Markdown
Mean differences

Evictions post-2007:
###Code
mean_diffs = asr[
(asr['year_built_max'] < 2007) &
(asr['year_built_min'] > 0)].groupby(['any_rc_eligibility', 'pre_1980']).agg(
mean_any_ev=('any_ev_07', 'mean'),
total_addresses=('index', 'count'),
total_units=('total_units', 'sum'),
total_evictions=('ev_count_post_07', 'sum'),
)
mean_diffs['units_per_address'] = mean_diffs['total_units'] / mean_diffs['total_addresses']
mean_diffs['evictions_per_address'] = mean_diffs['total_evictions'] / mean_diffs['total_addresses']
mean_diffs['evictions_per_unit'] = mean_diffs['total_evictions'] / mean_diffs['total_units']
mean_diffs
###Output
_____no_output_____
###Markdown
All Evictions
###Code
mean_diffs = asr[
(asr['year_built_max'] < 2007) &
(asr['year_built_min'] > 0)].groupby(['any_rc_eligibility', 'pre_1980']).agg(
mean_any_ev=('any_ev_07', 'mean'),
total_addresses=('index', 'count'),
total_units=('total_units', 'sum'),
total_evictions=('ev_count', 'sum'),
)
mean_diffs['units_per_address'] = mean_diffs['total_units'] / mean_diffs['total_addresses']
mean_diffs['evictions_per_address'] = mean_diffs['total_evictions'] / mean_diffs['total_addresses']
mean_diffs['evictions_per_unit'] = mean_diffs['total_evictions'] / mean_diffs['total_units']
mean_diffs
###Output
_____no_output_____
###Markdown
Plots
###Code
rc_pop = asr[
(asr['any_rc_eligibility'] == 1) & (asr['year_built_max'] > 1500) &
(asr['year_built_max'] < 2500) & (asr['total_units'] > 0)]
yr_vs_ev = rc_pop.groupby('year_built_max').agg({
'ev_per_unit':'mean',
'ev_per_unit_since_07':'mean'
}).reset_index()
yr_vs_ev1 = yr_vs_ev[(yr_vs_ev['year_built_max'] < 1980) &
(yr_vs_ev['year_built_max'] >= 1953)]
yr_vs_ev2 = yr_vs_ev[(yr_vs_ev['year_built_max'] >= 1980) &
(yr_vs_ev['year_built_max'] <= 2007)]
fig, ax = plt.subplots(figsize=(13,7))
sns.regplot('year_built_max', 'ev_per_unit_since_07', yr_vs_ev1, ax=ax, truncate=True, label='rent controlled')
sns.regplot('year_built_max', 'ev_per_unit_since_07', yr_vs_ev2, ax=ax, truncate=True, label='non-rent controlled')
ax.axvline(1979.5, ls=':', c='r')
ax.legend()
_ = ax.set_xlabel("property built-year", fontsize=16)
_ = ax.set_ylabel("avg.\nevictions/unit\nper year", fontsize=16, rotation=0, labelpad=70)
_ = ax.set_title("SF Eviction Rates (2007-2017)\nfor Multi-family Residential Properties\n"
"(incl. SRO's, excl. TIC's)", fontsize=20)
ax.set_ylim((-0.005, 0.05))
ax.annotate('rent control \nbuilt-year threshold', xy=(1979, 0.04), xycoords='data',
xytext=(0.3, 0.8), textcoords='axes fraction',
arrowprops=dict(facecolor='black',frac=0.05, width=0.5, headwidth=10),
horizontalalignment='center', verticalalignment='center', fontsize=12
)
###Output
_____no_output_____
###Markdown
Fit Hedonic regression
###Code
asr_all = pd.read_csv('./evictions/data/assessor_2007-2018_clean_w_none_sttyps.csv')
asr_all['total_value'] = asr_all['RP1LNDVAL'] + asr_all['RP1IMPVAL']
asr_all.loc[pd.isnull(asr_all['RP1NBRCDE']), 'RP1NBRCDE'] = 'unknown'
asr_grouped_by_yr = asr_all.groupby(['asr_yr', 'house_1', 'house_2', 'street_name', 'street_type']).agg(
total_units=('UNITS', 'sum'),
diff_unit_counts=('UNITS', 'nunique'),
min_units=('UNITS', 'min'),
diff_bldg_types=('bldg_type', 'nunique'),
bldg_type_min=('bldg_type', 'min'),
bldg_type_max=('bldg_type', 'max'),
diff_rc_eligibility=('rc_eligible', 'nunique'),
any_rc_eligibility=('rc_eligible', 'max'),
diff_years_built=('YRBLT', 'nunique'),
year_built_min=('YRBLT', 'min'),
year_built_max=('YRBLT', 'max'),
total_value=('total_value', 'sum'),
total_beds=('BEDS', 'sum'),
total_baths=('BATHS', 'sum'),
mean_stories=('STOREYNO', 'mean'),
total_sqft=('SQFT', 'sum'),
nbd=('RP1NBRCDE', pd.Series.mode),
total_rooms=('ROOMS', 'sum'),
total_area=('LAREA', 'sum')
).reset_index()
asr_grouped_by_yr['nbd'] = asr_grouped_by_yr['nbd'].apply(lambda x: list(x)[0] if type(x) == np.ndarray else x)
asr_grouped_by_yr['yr_built_since_1900'] = asr_grouped_by_yr['year_built_max'] - 1900
df_hed = asr_grouped_by_yr[
(asr_grouped_by_yr['any_rc_eligibility'] == 1) &
(asr_grouped_by_yr['total_units'] > 0) &
(asr_grouped_by_yr['year_built_max'] >= 1950) &
(asr_grouped_by_yr['year_built_max'] <= 2010) &
(asr_grouped_by_yr['total_sqft'] > 0) &
# (asr_grouped_by_yr['total_beds'] > 0)
(asr_grouped_by_yr['total_baths'] > 0) &
(asr_grouped_by_yr['total_rooms'] > 0) &
(asr_grouped_by_yr['mean_stories'] > 0) &
(asr_grouped_by_yr['total_area'] > 0)
]
hedonic = smf.ols(
'total_value ~ total_sqft + np.log1p(total_beds) + np.log1p(total_baths) + np.log1p(total_units) + mean_stories + total_area + '
'total_rooms + yr_built_since_1900 + C(asr_yr) + nbd', data=df_hed
).fit()
df_hed['hedonic_resid'] = hedonic.resid
print(hedonic.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: total_value R-squared: 0.908
Model: OLS Adj. R-squared: 0.908
Method: Least Squares F-statistic: 6721.
Date: Sat, 25 Jan 2020 Prob (F-statistic): 0.00
Time: 16:52:28 Log-Likelihood: -9.8665e+05
No. Observations: 61479 AIC: 1.973e+06
Df Residuals: 61388 BIC: 1.974e+06
Df Model: 90
Covariance Type: nonrobust
=========================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------
Intercept -1.15e+06 8.32e+04 -13.824 0.000 -1.31e+06 -9.87e+05
C(asr_yr)[T.2008] 8.132e+04 4.25e+04 1.914 0.056 -1957.907 1.65e+05
C(asr_yr)[T.2009] 1.349e+05 4.25e+04 3.171 0.002 5.15e+04 2.18e+05
C(asr_yr)[T.2010] 1.525e+05 4.26e+04 3.584 0.000 6.91e+04 2.36e+05
C(asr_yr)[T.2011] 1.42e+05 4.28e+04 3.319 0.001 5.81e+04 2.26e+05
C(asr_yr)[T.2012] 1.872e+05 4.26e+04 4.391 0.000 1.04e+05 2.71e+05
C(asr_yr)[T.2013] 2.367e+05 4.26e+04 5.553 0.000 1.53e+05 3.2e+05
C(asr_yr)[T.2014] 2.642e+05 4.27e+04 6.191 0.000 1.81e+05 3.48e+05
C(asr_yr)[T.2015] 1.47e+05 5.29e+04 2.779 0.005 4.33e+04 2.51e+05
C(asr_yr)[T.2016] 3.547e+05 4.26e+04 8.321 0.000 2.71e+05 4.38e+05
C(asr_yr)[T.2017] 4.389e+05 4.27e+04 10.289 0.000 3.55e+05 5.22e+05
nbd[T.01B] -3810.7164 4.79e+04 -0.080 0.937 -9.77e+04 9.01e+04
nbd[T.01C] 6.307e+04 7.92e+04 0.796 0.426 -9.22e+04 2.18e+05
nbd[T.01D] 1.4e+04 6.81e+04 0.205 0.837 -1.2e+05 1.48e+05
nbd[T.01E] 6.04e+04 4.65e+04 1.298 0.194 -3.08e+04 1.52e+05
nbd[T.01F] -2.351e+05 2.59e+05 -0.907 0.364 -7.43e+05 2.73e+05
nbd[T.01G] 5.751e+04 7.27e+04 0.791 0.429 -8.49e+04 2e+05
nbd[T.02A] 1.978e+05 1.81e+05 1.093 0.274 -1.57e+05 5.52e+05
nbd[T.02B] 2.301e+05 7.18e+04 3.207 0.001 8.95e+04 3.71e+05
nbd[T.02C] 1.494e+05 5.06e+04 2.953 0.003 5.02e+04 2.49e+05
nbd[T.02D] 1.153e+05 9.81e+04 1.175 0.240 -7.7e+04 3.08e+05
nbd[T.02E] 1.019e+05 5.57e+04 1.830 0.067 -7216.136 2.11e+05
nbd[T.02F] 7.353e+04 4.47e+04 1.646 0.100 -1.4e+04 1.61e+05
nbd[T.02G] 2.899e+05 1.8e+05 1.608 0.108 -6.36e+04 6.43e+05
nbd[T.03D] 6.851e+08 3.7e+06 185.375 0.000 6.78e+08 6.92e+08
nbd[T.03G] 2.781e+04 1.55e+05 0.179 0.858 -2.77e+05 3.32e+05
nbd[T.03H] 4.71e+04 1.07e+05 0.439 0.660 -1.63e+05 2.57e+05
nbd[T.03J] 2.15e+05 1.93e+05 1.113 0.266 -1.64e+05 5.94e+05
nbd[T.04B] 1.671e+06 1.72e+05 9.740 0.000 1.33e+06 2.01e+06
nbd[T.04C] 2.803e+05 1.1e+05 2.556 0.011 6.54e+04 4.95e+05
nbd[T.04D] 1.339e+06 1.09e+05 12.322 0.000 1.13e+06 1.55e+06
nbd[T.04F] -5.545e+04 2.16e+05 -0.257 0.797 -4.78e+05 3.67e+05
nbd[T.04H] 2.167e+06 3.43e+05 6.317 0.000 1.49e+06 2.84e+06
nbd[T.04N] 2.77e+04 1.91e+05 0.145 0.885 -3.47e+05 4.03e+05
nbd[T.04S] 2.576e+05 9.85e+04 2.617 0.009 6.47e+04 4.51e+05
nbd[T.04T] 1.227e+06 6.82e+05 1.800 0.072 -1.09e+05 2.56e+06
nbd[T.05A] 3.667e+05 8.03e+04 4.566 0.000 2.09e+05 5.24e+05
nbd[T.05B] 1.09e+05 1.01e+05 1.076 0.282 -8.95e+04 3.08e+05
nbd[T.05C] 2.938e+05 4.94e+04 5.946 0.000 1.97e+05 3.91e+05
nbd[T.05D] 1.639e+05 7.17e+04 2.286 0.022 2.34e+04 3.04e+05
nbd[T.05E] 1.431e+05 9.45e+04 1.515 0.130 -4.2e+04 3.28e+05
nbd[T.05F] 3.855e+05 1.43e+05 2.703 0.007 1.06e+05 6.65e+05
nbd[T.05G] 2.671e+05 7.68e+04 3.479 0.001 1.17e+05 4.18e+05
nbd[T.05H] 2.771e+05 1.75e+05 1.587 0.113 -6.52e+04 6.19e+05
nbd[T.05J] -2.269e+06 2.03e+05 -11.179 0.000 -2.67e+06 -1.87e+06
nbd[T.05K] 3.424e+05 6.15e+04 5.570 0.000 2.22e+05 4.63e+05
nbd[T.05M] 1.855e+05 9.18e+04 2.020 0.043 5542.434 3.66e+05
nbd[T.06A] 2.163e+04 1.09e+05 0.198 0.843 -1.93e+05 2.36e+05
nbd[T.06B] 1.099e+04 1.11e+05 0.099 0.921 -2.06e+05 2.28e+05
nbd[T.06C] 4.65e+05 8.94e+04 5.202 0.000 2.9e+05 6.4e+05
nbd[T.06D] -1.865e+06 1.58e+05 -11.766 0.000 -2.18e+06 -1.55e+06
nbd[T.06E] 2.433e+04 1.99e+05 0.122 0.903 -3.66e+05 4.14e+05
nbd[T.06F] -2.519e+05 1.27e+05 -1.987 0.047 -5.01e+05 -3363.875
nbd[T.07A] 2.93e+05 9.73e+04 3.011 0.003 1.02e+05 4.84e+05
nbd[T.07B] 2.794e+05 6.94e+04 4.029 0.000 1.43e+05 4.15e+05
nbd[T.07C] -1.738e+05 3.15e+05 -0.552 0.581 -7.91e+05 4.43e+05
nbd[T.07D] 1.814e+05 7.32e+04 2.480 0.013 3.8e+04 3.25e+05
nbd[T.08A] 2.826e+06 1.72e+05 16.445 0.000 2.49e+06 3.16e+06
nbd[T.08B] -6.786e+05 6.82e+05 -0.995 0.320 -2.01e+06 6.58e+05
nbd[T.08C] 2.555e+05 8.75e+04 2.922 0.003 8.41e+04 4.27e+05
nbd[T.08D] -4.767e+05 1.66e+05 -2.868 0.004 -8.03e+05 -1.51e+05
nbd[T.08E] 1.976e+05 7.21e+04 2.741 0.006 5.63e+04 3.39e+05
nbd[T.08F] 8.406e+05 1.8e+05 4.677 0.000 4.88e+05 1.19e+06
nbd[T.08G] 9.171e+05 8.54e+04 10.739 0.000 7.5e+05 1.08e+06
nbd[T.08H] -4.212e+06 3.4e+05 -12.386 0.000 -4.88e+06 -3.55e+06
nbd[T.09A] 2.454e+05 8.68e+04 2.827 0.005 7.53e+04 4.16e+05
nbd[T.09B] 1.728e+07 8.55e+05 20.217 0.000 1.56e+07 1.9e+07
nbd[T.09C] 1.459e+05 5.76e+04 2.535 0.011 3.31e+04 2.59e+05
nbd[T.09D] 4.436e+06 6.06e+05 7.322 0.000 3.25e+06 5.62e+06
nbd[T.09E] 4.299e+05 6.26e+04 6.863 0.000 3.07e+05 5.53e+05
nbd[T.09F] 3.109e+06 2.01e+05 15.507 0.000 2.72e+06 3.5e+06
nbd[T.09G] 1.977e+05 6.51e+04 3.037 0.002 7.01e+04 3.25e+05
nbd[T.10A] 5.223e+05 9.84e+04 5.309 0.000 3.29e+05 7.15e+05
nbd[T.10B] 1.289e+05 1.07e+05 1.203 0.229 -8.11e+04 3.39e+05
nbd[T.10C] 1.009e+05 8.83e+04 1.143 0.253 -7.21e+04 2.74e+05
nbd[T.10D] 1.44e+05 1.23e+05 1.172 0.241 -9.68e+04 3.85e+05
nbd[T.10E] 1.96e+05 9.74e+04 2.012 0.044 5063.520 3.87e+05
nbd[T.10F] 2.091e+05 9.04e+04 2.314 0.021 3.2e+04 3.86e+05
nbd[T.10G] 3.032e+04 1.78e+05 0.171 0.865 -3.18e+05 3.79e+05
nbd[T.10H] 1.797e+05 1.21e+05 1.490 0.136 -5.67e+04 4.16e+05
nbd[T.10J] -3.703e+05 3.18e+05 -1.166 0.244 -9.93e+05 2.52e+05
nbd[T.10K] 1.513e+05 1.83e+05 0.826 0.409 -2.08e+05 5.11e+05
nbd[T.unknown] -7.102e+05 2.26e+06 -0.314 0.753 -5.14e+06 3.72e+06
total_sqft 52.1297 1.005 51.889 0.000 50.161 54.099
np.log1p(total_beds) 1.398e+05 1.17e+04 11.949 0.000 1.17e+05 1.63e+05
np.log1p(total_baths) 7.647e+05 3.73e+04 20.478 0.000 6.92e+05 8.38e+05
np.log1p(total_units) -2.458e+05 3.27e+04 -7.514 0.000 -3.1e+05 -1.82e+05
mean_stories 1.342e+05 1.28e+04 10.468 0.000 1.09e+05 1.59e+05
total_area -45.7552 0.764 -59.897 0.000 -47.252 -44.258
total_rooms 9411.7576 360.377 26.116 0.000 8705.418 1.01e+04
yr_built_since_1900 -330.6120 951.613 -0.347 0.728 -2195.776 1534.552
==============================================================================
Omnibus: 100098.512 Durbin-Watson: 1.996
Prob(Omnibus): 0.000 Jarque-Bera (JB): 21121730581.549
Skew: -8.935 Prob(JB): 0.00
Kurtosis: 2874.432 Cond. No. 2.64e+07
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 2.64e+07. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
Fitting the sharp RD

Control variables to add:
- rent burden?
- stddev prop value
###Code
bandwidth = 27
df = asr[
(asr['any_rc_eligibility'] == 1) & (asr['year_built_max'] > 1980 - bandwidth) &
    (asr['year_built_max'] < 1980 + bandwidth) & (asr['total_units'] > 0)].copy()  # copy so the assignments below don't trigger SettingWithCopyWarning
df['rent_control'] = False
df.loc[df['pre_1980'] == True, 'rent_control'] = True
df['year_built_centered'] = df['year_built_max'] - 1980
df.groupby('pre_1980').agg(
mean_any_ev=('any_ev', 'mean'),
total_addresses=('index', 'count'),
total_units=('total_units', 'sum'),
total_evictions=('ev_count', 'sum'),
ev_per_unit=('ev_per_unit', 'mean')
)
df.columns
df = pd.merge(
df,
df_hed[[
'asr_yr', 'house_1', 'house_2', 'street_name', 'street_type', 'total_rooms',
'total_value', 'total_area', 'total_sqft', 'nbd', 'total_baths', 'hedonic_resid']],
on=['asr_yr', 'house_1', 'house_2', 'street_name', 'street_type'])
rd = smf.ols(
"ev_per_unit_since_07 ~ rent_control + year_built_centered*rent_control + "
"np.log1p(total_value):np.log(total_sqft) + np.log(total_units)",
data=df)
fitted = rd.fit()
print(fitted.summary())
fitted.params[1]
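# A quick visual check of the discontinuity (a sketch, not part of the original analysis):
# plot mean evictions per unit by centered year built and mark the 1980 rent-control cutoff.
# Assumes the 'df' built above, with the 'year_built_centered' and 'ev_per_unit_since_07' columns.
import matplotlib.pyplot as plt
binned = df.groupby('year_built_centered')['ev_per_unit_since_07'].mean().reset_index()
fig, ax = plt.subplots(figsize=(10, 6))
ax.scatter(binned['year_built_centered'], binned['ev_per_unit_since_07'], s=25, c='k')
ax.axvline(0, color='r', linestyle='--', linewidth=1)  # 1980 cutoff
ax.set_xlabel('year built (centered at 1980)')
ax.set_ylabel('mean evictions per unit since 2007')
plt.show()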
###Output
_____no_output_____
###Markdown
Potential evictions
###Code
units_by_yr = asr[
(asr['any_rc_eligibility'] == 1) &
(asr['year_built_max'] > 1900) &
(asr['year_built_max'] < 2100)].groupby('year_built_max').agg({'total_units': 'sum'}).reset_index()
fig, ax = plt.subplots(figsize=(13,8))
ax.scatter(units_by_yr['year_built_max'], units_by_yr['total_units'], s=25, facecolors='none', edgecolors='r')
ax.plot(units_by_yr['year_built_max'], units_by_yr['total_units'], lw=1, c='k', )
_ = ax.set_xlabel("year built", fontsize=16)
_ = ax.set_ylabel("# new units", fontsize=16)
_ = ax.set_title("SF New Construction: Rent-control eligible use-codes", fontsize=20)
rc_pop = asr[(asr['any_rc_eligibility'] == 1) & (asr['year_built_max'] > 1979)]
rc_pop = rc_pop.groupby('year_built_max').agg({'total_units': 'sum'})
rc_pop.index.name = "new rent control year-built cutoff"
rc_pop['cumulative_units'] = rc_pop['total_units'].cumsum()
rc_pop['potential_evictions'] = rc_pop['cumulative_units'] * fitted.params[1]
rc_pop['pct_growth'] = rc_pop['potential_evictions'] / ev_per_year
rc_pop
###Output
_____no_output_____
###Markdown
RDD package
###Code
import rdd
###Output
_____no_output_____
###Markdown
Check class balance in test data
###Code
data = Utils().test_data('data/sst/sst_test.txt')
ax = data['truth'].value_counts(sort=False).plot(kind='barh');
ax.set_xlabel("Number of Samples in Test Set");
ax.set_ylabel("Label");
###Output
_____no_output_____
###Markdown
TextBlob
###Code
tb = textblob('data/sst/sst_test.txt', lower_case=False)
plot_confusion_matrix(tb['truth'], tb['textblob_pred'], normalize=True);
###Output
Accuracy: 28.3710407239819
Macro F1-score: 0.2468141571266554
Normalized confusion matrix
###Markdown
Vader
###Code
va = vader('data/sst/sst_test.txt', lower_case=False)
plot_confusion_matrix(va['truth'], va['vader_pred'], normalize=True);
###Output
Accuracy: 31.538461538461537
Macro F1-score: 0.31297326018199634
Normalized confusion matrix
###Markdown
FastText
###Code
ft = fasttext('data/sst/sst_test.txt',
model='models/fasttext/sst.bin',
lower_case=False)
plot_confusion_matrix(ft['truth'], ft['fasttext_pred'], normalize=True);
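# A quick side-by-side accuracy comparison of the three methods (a sketch; assumes the
# 'tb', 'va' and 'ft' DataFrames above each carry a 'truth' column and the matching '*_pred' column).
import pandas as pd
import matplotlib.pyplot as plt
accuracies = pd.Series({
    'TextBlob': (tb['truth'] == tb['textblob_pred']).mean(),
    'Vader': (va['truth'] == va['vader_pred']).mean(),
    'FastText': (ft['truth'] == ft['fasttext_pred']).mean(),
})
ax = accuracies.plot(kind='barh')
ax.set_xlabel('Accuracy on SST test set')
plt.show()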
###Output
Accuracy: 41.40271493212669
Macro F1-score: 0.3866337724462768
Normalized confusion matrix
###Markdown
Image Analysis with Python - Tutorial Pipeline (adapted from https://git.embl.de/grp-bio-it/image-analysis-with-python/tree/master/session-3to5) Importing Modules & Packages Let's start by importing the package NumPy, which enables the manipulation of numerical arrays:
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Recall that, once imported, we can use functions/modules from the package, for example to create an array:
###Code
a = np.array([1, 2, 3])
print(a)
print(type(a))
###Output
[1 2 3]
<class 'numpy.ndarray'>
###Markdown
Note that the package is imported under a variable name (here `np`). You can freely choose this name yourself. For example, it would be just as valid (but not as convenient) to write:

```python
import numpy as lovelyArrayTool
a = lovelyArrayTool.array([1,2,3])
```

Exercise: Using the import command as above, follow the instructions in the comments below to import two additional modules that we will be using frequently in this pipeline.
###Code
# The plotting module matplotlib.pyplot as plt
import matplotlib.pyplot as plt
# The image processing module scipy.ndimage as ndi
import scipy.ndimage as ndi
###Output
_____no_output_____
###Markdown
Side Note for Jupyter Notebook UsersYou can configure how the figures made by matplotlib are displayed.The most common options are the following:- **inline**: displays as static figure in code cell output- **notebook**: displays as interactive figure in code cell output - **qt**: displays as interactive figure in a separate windowFeel free to test them out on one of the figures you will generate later on in the tutorial. The code cell below shows how to set the different options. Note that combinations of different options in the same notebook do not always work well, so it is best to decide for one and use it throughout. You may need to restart the kernel (`Kernel > Restart`) when you change from one option to another.
###Code
# Set matplotlib backend
%matplotlib inline
#%matplotlib notebook
#%matplotlib qt
###Output
_____no_output_____
###Markdown
Loading & Handling Image Data BackgroundImages are essentially just numbers (representing intensity) in an ordered grid of pixels. Image processing is simply to carry out mathematical operations on these numbers.The ideal object for storing and manipulating ordered grids of numbers is the **array**. Many mathematical operations are well defined on arrays and can be computed quickly by vector-based computation.Arrays can have any number of dimensions (or "axes"). For example, a 2D array could represent the x and y axis of a grayscale image (xy), a 3D array could contain a z-stack (zyx), a 4D array could also have multiple channels for each image (czyx) and a 5D array could have time on top of that (tczyx). ExerciseWe will now proceed to load one of the example images and verify that we get what we expect. Note: Before starting, it always makes sense to have a quick look at the data in Fiji/ImageJ so you know what you are working with!Follow the instructions in the comments below.
###Code
# (i) specify the file path to a suitable test image, using pathlib's `Path`
from pathlib import Path
dir_path = Path("~/Documents/Alvise/summer/notebooks").expanduser()  # expanduser so the '~' resolves to the home directory
file_path = dir_path / "example.tif"
print(file_path)
# (ii) Load the image
# Import the function 'imread' from the module 'skimage.io'.
# (Note: If this gives you an error, please refer to the note below!)
from skimage.io import imread
# Load one of your images and store it in a variable.
img = imread(file_path)
###Output
_____no_output_____
###Markdown
----*Important note for those who get an error when trying to import `imread` from `skimage.io`:*Some users have been experiencing problems with this module, even though the rest of skimage is installed correctly (running `import skimage` does not give an error). This may have something to do with operating system preferences. The easiest solution in this case is to install the module `tifffile` (with three `f`) and use the function `imread` from that module (it is identical to the `imread` function of `skimage.io` when reading `tif` files). The `tifffile` module does not come with the Anaconda distribution, so it's likely that you don't have it installed. To install it, save and exit Jupyter notebook, then go to a terminal and type `conda install -c conda-forge tifffile`. After the installation is complete, restart Jupyter notebook, come back here and import `imread` from `tifffile`. This should now hopefully work.----
###Code
# (iii) Check that everything is in order
# Check that 'img' is a variable of type 'ndarray' - use Python's built-in function 'type'.
print("Loaded array is of type:", type(img))
# Print the shape of the array using the numpy-function 'shape'.
# Make sure you understand the output!
print("Loaded array has shape:", img.shape)
# Check the datatype of the individual numbers in the array. You can use the array attribute 'dtype' to do so.
# Make sure you understand the output!
print("Loaded values are of type:", img.dtype)
# (iv) Look at the image to confirm that everything worked as intended
# To plot the array as an image, use pyplot's functions 'plt.imshow' followed by 'plt.show'.
# Check the documentation for 'plt.imshow' and note the parameters that can be specified, such as colormap (cmap)
# and interpolation. Since you are working with scientific data, interpolation is unwelcome, so you should set it
# to "none". The most common cmap for grayscale images is naturally "gray".
# You may also want to adjust the size of the figure. You can do this by preparing the figure canvas with
# the function 'plt.figure' before calling 'plt.imshow'. The canvas size is adjusted using the keyword argument
# 'figsize' when calling 'plt.figure'.
plt.imshow(img, interpolation='none', cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Preprocessing BackgroundThe goal of image preprocessing is to prepare or optimize the images to make further analysis easier. Usually, this boils down to increasing the signal-to-noise ratio by removing noise and background and by enhancing structures of interest.The specific preprocessing steps used in a pipeline depend on the type of sample, the microscopy technique used, the image quality, and the desired downstream analysis. The most common operations include:- Deconvolution - Image reconstruction based on information about the PSF of the microscope - These days deconvolution is often included with microscope software - *Our example images are not deconvolved, but will do just fine regardless*- Conversion to 8-bit images to save memory / computational time - *Our example images are already 8-bit*- Cropping of images to an interesting region - *The field of view in our example images is fine as it is*- Smoothing of technical noise - This is a very common step and usually helps to improve almost any type of downstream analysis - Commonly used filters are the `Gaussian filter` and the `median filter` - *Here we will be using a Gaussian filter.*- Corrections of technical artifacts - Common examples are uneven illumination and multi-channel bleed-through- Background subtraction - There are various ways of sutracting background signal from an image - Two different types are commonly distinguished: - `uniform background subtraction` treats all regions of the image the same - `adaptive or local background subtraction` automatically accounts for differences between regions of the image Gaussian SmoothingA Gaussian filter smoothens an image by convolving it with a Gaussian-shaped kernel. In the case of a 2D image, the Gaussian kernel is also 2D and will look something like this:How much the image is smoothed by a Gaussian kernel is determined by the standard deviation of the Gaussian distribution, usually referred to as **sigma** ($\sigma$). A higher $\sigma$ means a broader distribution and thus more smoothing.**How to choose the correct value of $\sigma$?**This depends a lot on your images, in particular on the pixel size. In general, the chosen $\sigma$ should be large enough to blur out noise but small enough so the "structures of interest" do not get blurred too much. Usually, the best value for $\sigma$ is simply found by trying out some different options and looking at the result. ExercisePerform Gaussian smoothing and visualize the result.Follow the instructions in the comments below.
###Code
# (ii) Clip the image using `np.ndarry.clip` (`img.clip(...)`)
# hint: `np.percentile` might come in handy
img_clipped = np.clip(img,int(np.percentile(img,60)),int(np.percentile(img,99)))
# visualize the clipped image using 'plt.imshow'
plt.figure(figsize= (8,8))
plt.imshow(img_clipped)
plt.show()
np.percentile(img,60)
# (i) Create a variable for the smoothing factor sigma, which should be an integer value
sigma = 6
# After implementing the Gaussian smoothing function below, you can modify this variable
# to find the ideal value of sigma.
# (iii) Perform the smoothing on the clipped image
# To do so, use the Gaussian filter function 'ndi.filters.gaussian_filter' from the
# image processing module 'scipy.ndimage', which was imported at the start of the tutorial.
# Check out the documentation of scipy to see how to use this function.
img_smooth = ndi.filters.gaussian_filter(img_clipped,sigma)
plt.figure(figsize= (9.5,9.5))
plt.imshow(img_smooth)
plt.show()
# (iv) Visualize the result using 'plt.imshow'
# Compare with the original image visualized above.
# Does the output make sense? Is this what you expected?
# Can you optimize sigma such that the image looks smooth without blurring the membranes too much?
sigma2 = 4
img_smooth = ndi.filters.gaussian_filter(img_clipped,sigma2)
# To have a closer look at a specific region of the image, crop that region out and show it in a
# separate plot. Remember that you can crop arrays by "indexing" or "slicing" them similar to lists.
# Use such "zoomed-in" views throughout this tutorial to take a closer look at your intermediate
# results when necessary.
crop=img_smooth[550:1150,600:1200]
plt.figure(figsize=(9,9))
plt.imshow(crop)
plt.show()
# (v) BONUS: Show the raw and smoothed images side by side using 'plt.subplots'
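# One possible side-by-side view (a sketch using the variables defined above):
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 7))
ax1.imshow(img, interpolation='none', cmap='gray')
ax1.set_title('raw')
ax2.imshow(img_smooth, interpolation='none', cmap='gray')
ax2.set_title('clipped + smoothed (sigma=%d)' % sigma2)
plt.show()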
###Output
_____no_output_____
###Markdown
Manual Thresholding & Threshold Detection BackgroundThe easiest way to distinguish foreground objects (here: membranes) from the image background is to threshold the image, meaning all pixels with an intensity above a certain threshold are accepted as foreground, all others are set as background.To find the best threshold for a given image, one option is to simply try out different thresholds manually. Alternatively, one of many algorithms for automated 'threshold detection' can be used. These algorithms use information about the image (such as the histogram) to automatically find a suitable threshold value, often under the assumption that the background and foreground pixels in an image belong to two clearly distinct populations in terms of their intensity. There are many different algorithms for threshold detection and it is often hard to predict which one will produce the nicest and most robust result for a particular dataset. It therefore makes sense to try out a bunch of different options.For this pipeline, we will ultimately use a more advanced thresholding approach, which also accounts (to some extent) for variations in signal across the field of view: adaptive thresholding. But first, let's experiment a bit with threshold detection. ExerciseTry out manual thresholding and automated threshold detection.Follow the instructions in the comments below.
###Code
# (i) Create a variable for a manually set threshold, which should be an integer
# This can be changed later to find a suitable value.
manThres=37
#seg1 = img.astype(np.uint8)
#print(seg1.dtype, seg1.shape)
# (ii) Perform thresholding on the smoothed image
# Remember that you can use relational (Boolean) expressions such as 'smaller' (<), 'equal' (==)
# or 'greater or equal' (>=) with numpy arrays - and you can directly assign the result to a new
# variable.
cropT= crop > manThres
seg1 = cropT.astype(np.uint8)
# Check the dtype of your thresholded image
# You should see that the dtype is 'np.bool', which stands for 'Boolean' and means the array
# is now simply filled with 'True' and 'False', where 'True' is the foreground (the regions
# above the threshold) and 'False' is the background.
print(seg1.dtype)
# (iii) Visualize the result
plt.figure(figsize=(9,9))
plt.imshow(seg1*crop)
plt.show()
# (iii) Visualize the result
plt.figure(figsize=(9,9))
plt.imshow(seg1*255)
plt.show()
# (iv) Try out different thresholds to find the best one
# If you are using jupyter notebook, you can adapt the code below to
# interactively change the threshold and look for the best one. These
# kinds of interactive functions are called 'widgets' and are very
# useful in exploratory data analysis to create greatly simplified
# 'User Interfaces' (UIs) on the fly.
# As a BONUS exercise, try to understand or look up how the widget works
# and play around with it a bit!
# (Note: If this just displays a static image without a slider to adjust
# the threshold or if it displays a text warning about activating
# the 'widgetsnbextension', check out the note below!)
# Prepare widget
from ipywidgets import interact
@interact(thresh=(30,80,2))
def select_threshold(thresh=40):
# Thresholding
### ADAPT THIS: Change 'img_smooth' into the variable you stored the smoothed image in!
mem = crop > thresh
# Visualization
plt.figure(figsize=(7,7))
plt.imshow(mem.astype(np.uint8)*255, interpolation='none', cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
----*Important note for those who get a static image (no slider) or a text warning:*For some users, it is necessary to specifically activate the widgets plugin for Jupyter notebook. To do so, save and exit Jupyter notebook, then go to a terminal and write `jupyter nbextension enable --py --sys-prefix widgetsnbextension`. After this, you should be able to restart Jupyter notebook and the widget should display correctly. If it still doesn't work, you may instead have to type `jupyter nbextension enable --py widgetsnbextension` in the terminal. However, note that this implies that your installation of Conda/Jupyter is not optimally configured (see [this GitHub issue](https://github.com/jupyter-widgets/ipywidgets/issues/541) for more information, although this is not something you necessarily need to worry about in the context of this course).----
###Code
# (v) Perfom automated threshold detection with Otsu's method
# The scikit-image module 'skimage.filters.thresholding' provides
# several threshold detection algorithms. The most popular one
# among them is Otsu's method. Using what you've learned so far,
# import the 'threshold_otsu' function, use it to automatically
# determine a threshold for the smoothed image, apply the threshold,
# and visualize the result.
### YOUR CODE HERE!
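# A minimal sketch of automated threshold detection with Otsu's method:
from skimage.filters.thresholding import threshold_otsu
thresh_otsu = threshold_otsu(img_smooth)
print("Otsu threshold:", thresh_otsu)
mem_otsu = img_smooth > thresh_otsu
plt.figure(figsize=(7, 7))
plt.imshow(mem_otsu, interpolation='none', cmap='gray')
plt.show()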
# (vi) BONUS: Did you notice the 'try_all_threshold' function?
# That's convenient! Use it to automatically test the threshold detection
# functions in 'skimage.filters.thresholding'. Don't forget to adjust the
# 'figsize' parameter so the resulting images are clearly visible.
### YOUR CODE HERE!
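# Sketch using scikit-image's built-in comparison helper:
from skimage.filters.thresholding import try_all_threshold
fig, ax = try_all_threshold(img_smooth, figsize=(10, 12), verbose=False)
plt.show()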
###Output
_____no_output_____
###Markdown
Adaptive Thresholding BackgroundSimply applying a fixed intensity threshold does not always produce a foreground mask of sufficiently high quality, since background and foreground intensities often vary across the image. In our example image, for instance, the intensity drops at the image boundaries - a problem that cannot be resolved just by changing the threshold value.One way of addressing this issue is to use an *adaptive thresholding* algorithm, which adjusts the threshold locally in different regions of the image to account for varying intensities.Although `scikit-image` provides a function for adaptive thresholding (called `threshold_local`), we will here implement our own version, which is slightly different and will hopefully make the concept of adaptive thresholding very clear.Our approach to adaptive tresholding works in two steps:1. Generation of a "background image" This image should - across the entire image - always have higher intensities than the local background but lower intensities than the local foreground. This can be achieved by strong blurring/smoothing of the image, as illustrated in this 1D example: 2. Thresholding of the original image with the background Instead of thresholding with a single value, every pixel in the image is thresholded with the corresponding pixel of the "background image". Exercise Implement the two steps of the adaptive background subtraction:1. Use a strong "mean filter" (aka "uniform filter") to create the background image. This simply assigns each pixel the average value of its local neighborhood. Just like the Gaussian blur, this can be done by convolution, but this time using a "uniform kernel" like this one: To define which pixels should be considered as the local neighborhood of a given pixel, a `structuring element` (`SE`) is used. This is a small binary image where all pixels set to `1` will be considered as part of the neighborhood and all pixels set to `0` will not be considered. Here, we use a disc-shaped `SE`, as this reduces artifacts compared to a square `SE`. *Side note:* A strong Gaussian blur would also work to create the background mask. For the Gaussian blur, the analogy to the `SE` is the `sigma` value, which in a way also determines the size of the local neighborhood.2. Use the background image for thresholding. In practical terms, this works in exactly the same way as thresholding with a single value, since numpy arrays will automatically perform element-wise (pixel-by-pixel) comparisons when compared to other arrays of the same shape by a relational (Boolean) expression.Follow the instructions in the comments below.
###Code
# Step 1
# ------
# (i) Create a disk-shaped structuring element and asign it to a new variable.
# Structuring elements are small binary images that indicate which pixels
# should be considered as the 'neighborhood' of the central pixel.
#
# An example of a small disk-shaped SE would be this:
# 0 0 1 0 0
# 0 1 1 1 0
# 1 1 1 1 1
# 0 1 1 1 0
# 0 0 1 0 0
#
# The expression below creates such structuring elements.
# It is an elegant but complicated piece of code and at the moment it is not
# necessary for you to understand it in detail. Use it to create structuring
# elements of different sizes (by changing 'i') and find a way to visualize
# the result (remember that the SE is just a small 'image').
#
# Try to answer the following questions:
# - Is the resulting SE really circular?
# - Could certain values of 'i' cause problems? If so, why?
# - What value of 'i' should be used for the SE?
# Note that, similar to the sigma in Gaussian smoothing, the size of the SE
# is first estimated based on the images and by thinking about what would
# make sense. Later it can be optimized by trial and error.
# Create SE
i = 31  # SE diameter in pixels; an assumed starting value to tune (odd values keep the disc centered)
struct = (np.mgrid[:i,:i][0] - np.floor(i/2))**2 + (np.mgrid[:i,:i][1] - np.floor(i/2))**2 <= np.floor(i/2)**2
# Visualize the result
### YOUR CODE HERE!
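# The SE is just a small binary image, so it can be displayed with imshow (sketch):
plt.figure(figsize=(3, 3))
plt.imshow(struct, interpolation='none', cmap='gray')
plt.show()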
# (ii) Create the background
# Run a mean filter over the image using the disc SE and assign the output to a new variable.
# Use the function 'skimage.filters.rank.mean'.
### YOUR CODE HERE!
# (iii) Visualize the resulting background image. Does what you get make sense?
### YOUR CODE HERE!
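# Sketch for steps (ii) and (iii): run a mean filter with the disc-shaped SE to build
# the background image, then display it. Assumes 'img_smooth' is an 8-bit image,
# since rank filters require integer-typed input.
from skimage.filters import rank
bg = rank.mean(img_smooth, struct.astype(np.uint8))
plt.figure(figsize=(7, 7))
plt.imshow(bg, interpolation='none', cmap='gray')
plt.show()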
# Step 2
# ------
# (iv) Threshold the Gaussian-smoothed original image against the background image created in step 1
# using a relational expression
### YOUR CODE HERE!
# (v) Visualize and understand the output.
### YOUR CODE HERE!
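# Sketch for steps (iv) and (v), building on the background image 'bg' from the sketch above:
# every pixel of the smoothed image is compared against its own local background value.
mem = img_smooth > bg
plt.figure(figsize=(7, 7))
plt.imshow(mem, interpolation='none', cmap='gray')
plt.show()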
# What do you observe?
# Are you happy with this result as a membrane segmentation?
# Adapt the size of the circular SE to optimize the result!
###Output
_____no_output_____
###Markdown
Improving Masks with Binary Morphology BackgroundMorphological operations such as `erosion`, `dilation`, `closing` and `opening` are common tools used to improve masks after they are generated by thresholding. They can be used to fill small holes, remove noise, increase or decrease the size of an object, or smoothen mask outlines.Most morphological operations are once again simple kernel functions that are applied at each pixel of the image based on their neighborhood as defined by a `structuring element` (`SE`). For example, `dilation` simply assigns to the central pixel the maximum pixel value within the neighborhood; it is a maximum filter. Conversely, `erosion` is a minimum filter. Additional options emerge from combining the two: `morphological closing`, for example, is a `dilation` followed by an `erosion`. This is used to fill in gaps and holes or smoothing mask outlines without significantly changing the mask's area. Finally, there are also some more complicated morphological operations, such as `hole filling`. ExerciseImprove the membrane segmentation from above with morphological operations.Specifically, use `binary hole filling` to get rid of the speckles of foreground pixels that litter the insides of the cells. Furthermore, try different other types of morphological filtering to see how they change the image and to see if you can improve the membrane mask even more, e.g. by filling in gaps.Follow the instructions in the comments below. Visualize all intermediate results of your work and remember to "zoom in" to get a closer look by slicing out and then plotting a subsection of the image array.
###Code
# (i) Get rid of speckles using binary hole filling
# Use the function 'ndi.binary_fill_holes' for this. Be sure to check the docs to
# understand exactly what it does. For this to work as intended, you will have to
# invert the mask, which you can do using the function `np.logical_not` or the
# corresponding operator '~'. Again, be sure to understand why this has to be done
# and don't forget to revert the result back.
### YOUR CODE HERE!
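# Sketch: fill the speckles inside the cells by hole-filling the *inverted* mask
# (cells as foreground) and then inverting the result back to a membrane mask.
mem_holefilled = ~ndi.binary_fill_holes(~mem)
plt.figure(figsize=(7, 7))
plt.imshow(mem_holefilled, interpolation='none', cmap='gray')
plt.show()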
# (ii) Try out other morphological operations to further improve the membrane mask
# The various operations are available in the ndimage module, for example 'ndi.binary_closing'.
# Play around and see how the different functions affect the mask. Can you optimize the mask,
# for example by closing gaps?
# Note that the default SE for these functions is a square. Feel free to create another disc-
# shaped SE and see how that changes the outcome.
# BONUS: If you pay close attention, you will notice that some of these operations introduce
# artefacts at the image boundaries. Can you come up with a way of solving this? (Hint: 'np.pad')
### YOUR CODE HERE!
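# Sketch for steps (ii) and (iii): close small gaps in the membranes with a disc-shaped SE.
# The image is padded first to avoid boundary artefacts; the SE size (15) is an assumption to tune.
i2 = 15
struct2 = (np.mgrid[:i2, :i2][0] - np.floor(i2/2))**2 + (np.mgrid[:i2, :i2][1] - np.floor(i2/2))**2 <= np.floor(i2/2)**2
pad = i2 + 1
mem_final = ndi.binary_closing(np.pad(mem_holefilled, pad, mode='reflect'), structure=struct2)[pad:-pad, pad:-pad]
plt.figure(figsize=(7, 7))
plt.imshow(mem_final, interpolation='none', cmap='gray')
plt.show()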
# (iii) Visualize the final result
### YOUR CODE HERE
# At this point you should have a pretty neat membrane mask.
# If you are not satisfied with the quality your membrane segmentation, you should go back
# and fine-tune the size of the SE in the adaptive thresholding section and also optimize
# the morphological cleaning operations.
# Note that the quality of the membrane segmentation will have a significant impact on the
# cell segmentation we will perform next.
###Output
_____no_output_____
###Markdown
Connected Components Labeling BackgroundBased on the membrane segmentation, we can get a preliminary segmentation of the cells in the image by considering each background region surrounded by membranes as a cell. This can already be good enough for many simple measurements.The only thing we still need to do in order to get there is to label each cell individually. Only if each separate cell has a unique number (an `ID`) assigned, values such as the mean intensity can be measured and analyzed at the single-cell level.The approach used to achieve this is called `connected components labeling`. It gives every connected group of foreground pixels a unique `ID` number. ExerciseUse your membrane segmentation for connected components labeling.Follow the instructions in the comments below.
###Code
# (i) Label connected components
# Use the function 'ndi.label' from the 'ndimage' module.
# Note that this function labels foreground pixels (1s, not 0s), so you may need
# to invert your membrane mask just as for hole filling above.
# Also, note that 'ndi.label' returns another result in addition to the labeled
# image. Read up on this in the function's documention and make sure you don't
# mix up the two outputs!
### YOUR CODE HERE!
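# Sketch, building on the membrane mask from the sketches above: label the cell regions
# (the inverse of the membrane mask) and show them with a qualitative colormap so that
# neighboring IDs are easy to tell apart.
cell_labels, num_cells = ndi.label(~mem_final)
print("Number of labeled regions:", num_cells)
plt.figure(figsize=(9, 9))
plt.imshow(cell_labels, interpolation='none', cmap='prism')
plt.show()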
# (ii) Visualize the output
# Here, it is no longer ideal to use a 'gray' colormap, since we want to visualize that each
# cell has a unique ID. Play around with different colormaps (check the docs to see what
# types of colormaps are available) and choose one that you are happy with.
### YOUR CODE HERE!
# Take a close look at the picture and note mistakes in the segmentation. Depending on the
# quality of your membrane mask, there will most likely be some cells that are 'fused', meaning
# two or more cells are labeled as the same cell; this is called "under-segmentation".
# We will resolve this issue in the next step. Note that our downstream pipeline does not involve
# any steps to resolve "over-segmentation" (i.e. a cell being wrongly split into multiple labeled
# areas), so you should tune your membrane mask such that this is not a common problem.
###Output
_____no_output_____
###Markdown
Segmentation by Seeding & Expansion BackgroundThe segmentation we achieved by membrane masking and connected components labeling is a good start. We could for example use it to measure the fluorescence intensity in each cell's cytoplasm. However, we cannot use it to measure intensities at the membrane of the cells, nor can we use it to accurately measure features like cell shape or size.To improve this (and to resolve cases of under-segmentation), we can use a "seeding & expansion" strategy. Expansion algorithms such as the `watershed` start from a small `seed` and "grow outward" until they touch the boundaries of neighboring cells, which are themselves growing outward from neighboring seeds. Since the "growth rate" at the edge of the growing areas is dependent on image intensity (higher intensity means slower expansion), these expansion methods end up tracing the cells' outlines. Seeding by Distance Transform BackgroundA `seed image` contains a few pixels at the center of each cell labeled by a unique `ID` number and surrounded by zeros. The expansion algorithm will start from these central pixels and grow outward until all zeros are overwritten by an `ID` label. In the case of `watershed` expansion, one can imagine the `seeds` as the sources from which water pours into the cells and starts filling them up.For multi-channel images that contain a nuclear label, it is common practice to mask the nuclei by thresholding and use an eroded version of the nuclei as seeds for cell segmentation. However, there are good alternative seeding approaches for cases where nuclei are not available or not nicely separable by thresholding.Here, we will use a `distance transform` for seeding. In a `distance transform`, each pixel in the foreground (here the cells) is assigned a value corresponding to its distance from the closest background pixel (here the membrane segmentation). In other words, we encode within the image how far each pixel of a cell is away from the membrane (see figure below). The pixels furthest away from the membrane will be at the center of the cells and will have the highest values. Using a function to detect `local maxima`, we will find these high-value peaks and use them as seeds for our segmentation.One big advantage of this approach is that it will create two separate seeds even if two cells are connected by a hole in the membrane segmentation. Thus, under-segmentation artifacts will be reduced. Exercise Find seeds using the distance transform approach.This involves the following three steps:1. Run the distance transform on your membrane mask.2. Due to irregularities in the membrane shape, the distance transform may have some smaller local maxima in addition to those at the center of the cells. This will lead to additional seeds, which will lead to over-segmentation. To resolve this problem, smoothen the distance transform using Gaussian smoothing. 3. Find the seeds by detecting local maxima. Optimize the seeding by changing the amount of smoothing done in step 2, aiming to have exactly one seed for each cell (although this may not be perfectly achievable).Follow the instructions in the comments below.
###Code
# (i) Run a distance transform on the membrane mask
# Use the function 'ndi.distance_transform_edt'.
# You may need to invert your membrane mask so the distances are computed on
# the cells, not on the membranes.
### YOUR CODE HERE!
# (ii) Visualize the output and understand what you are seeing.
### YOUR CODE HERE!
# (iii) Smoothen the distance transform
# Use 'ndi.filters.gaussian_filter' to do so.
# You will have to optimize your choice of 'sigma' based on the outcome below.
### YOUR CODE HERE!
# (iv) Retrieve the local maxima (the 'peaks') from the distance transform
# Use the function 'peak_local_max' from the module 'skimage.feature'. By default, this function will return the
# indices of the pixels where the local maxima are. However, we instead need a boolean mask of the same shape
# as the original image, where all the local maximum pixels are labeled as `1` and everything else as `0`.
# This can be achieved by setting the keyword argument 'indices' to False.
### YOUR CODE HERE!
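# Sketch for steps (i)-(iv): distance transform on the cell regions, Gaussian smoothing,
# then local-maxima detection. The sigma and min_distance values are assumptions to tune.
# Note: 'indices=False' follows the instructions above; newer scikit-image versions
# removed that argument and always return peak coordinates instead of a mask.
from skimage.feature import peak_local_max
dist = ndi.distance_transform_edt(~mem_final)
dist_smooth = ndi.filters.gaussian_filter(dist, sigma=5)
seed_mask = peak_local_max(dist_smooth, indices=False, min_distance=10)
plt.figure(figsize=(7, 7))
plt.imshow(dist_smooth, interpolation='none', cmap='gray')
plt.show()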
# (v) Visualize the output as an overlay on the raw (or smoothed) image
# If you just look at the local maxima image, it will simply look like a bunch of distributed dots.
# To get an idea if the seeds are well-placed, you will need to overlay these dots onto the original image.
# To do this, it is important to first understand a key point about how the 'pyplot' module works:
# every plotting command is slapped on top of the previous plotting commands, until everything is ultimately
# shown when 'plt.show' is called. Hence, you can first plot the raw (or smoothed) input image and then
# plot the seeds on top of it before showing both with 'plt.show'.
# As you can see if you try this, you will not get the desired result because the zero values in seed array
# are painted in black over the image you want in the background. To solve this problem, you need to mask
# these zero values before plotting the seeds. You can do this by creating an appropriately masked array
# using the function 'np.ma.array' with the keyword argument 'mask'.
# Check the docs or Stack Overflow to figure out how to do this.
# BONUS: As an additional improvement for the visualization, use 'ndi.filters.maximum_filter' to dilate the
# seeds a little bit, making them bigger and thus better visible.
### YOUR CODE HERE!
# (vi) Optimize the seeding
# Ideally, there should be exactly one seed for each cell.
# If you are not satisfied with your seeding, go back to the smoothing step above and optimize 'sigma'
# to get rid of additional maxima. You can also try using the keyword argument 'min_distance' in
# 'peak_local_max' to solve cases where there are multiple small seeds at the center of a cell. Note
# that good seeding is essential for a good segmentation with an expansion algorithm. However, no
# segmentation is perfect, so it's okay if a few cells end up being oversegmented.
# (vii) Label the seeds (optional)
# Use connected component labeling to give each cell seed a unique ID number.
### YOUR CODE HERE!
# Visualize the final result (the labeled seeds) as an overlay on the raw (or smoothed) image
### YOUR CODE HERE!
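# Sketch for step (vii): label the seeds, dilate them a little for display, and overlay
# them on the smoothed image with the zero-valued background masked out.
seeds, num_seeds = ndi.label(seed_mask)
print("Number of seeds:", num_seeds)
seeds_display = ndi.filters.maximum_filter(seeds, size=10)
plt.figure(figsize=(9, 9))
plt.imshow(img_smooth, interpolation='none', cmap='gray')
plt.imshow(np.ma.array(seeds_display, mask=seeds_display == 0), interpolation='none', cmap='prism')
plt.show()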
###Output
_____no_output_____
###Markdown
Expansion by Watershed BackgroundTo achieve a cell segmentation, the `seeds` now need to be expanded outward until they follow the outline of the cell. The most commonly used expansion algorithm is the `watershed`.Imagine the intensity in the raw/smoothed image as a topographical height profile; high-intensity regions are peaks, low-intensity regions are valleys. In this representation, cells are deep valleys (with the seeds at the center), enclosed by mountains. As the name suggests, the `watershed` algorithm can be understood as the gradual filling of this landscape with water, starting from the seed. As the water level rises, the seed expands - until it finally reaches the 'crest' of the cell membrane 'mountain range'. Here, the water would flow over into the neighboring valley, but since that valley is itself filled up with water from the neighboring cell's seed, the two water surfaces touch and the expansion stops. ExerciseExpand your seeds by means of a watershed expansion.Follow the instructions in the comments below.
###Code
# (i) Perform watershed
# Use the function 'watershed' from the module 'skimage.morphology'.
# Use the labeled cell seeds and the smoothed membrane image as input.
### YOUR CODE HERE!
# (ii) Show the result as transparent overlay over the smoothed input image
# Like the masked overlay of the seeds, this can be achieved by making two calls to 'imshow',
# one for the background image and one for the segmentation. Instead of masking away background,
# this time you simply make the segmentation image semi-transparent by adjusting the keyword
# argument 'alpha' of the 'imshow' function, which specifies opacity.
# Be sure to choose an appropriate colormap that allows you to distinguish the segmented cells
# even if cells with a very similar ID are next to each other (I would recommend 'prism').
### YOUR CODE HERE!
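# Sketch for steps (i) and (ii): expand the labeled seeds with a watershed on the smoothed
# membrane image, then show the segmentation as a semi-transparent overlay.
# (In newer scikit-image versions the function lives in skimage.segmentation instead.)
from skimage.morphology import watershed
ws = watershed(img_smooth, seeds)
plt.figure(figsize=(9, 9))
plt.imshow(img_smooth, interpolation='none', cmap='gray')
plt.imshow(ws, interpolation='none', cmap='prism', alpha=0.4)
plt.show()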
###Output
_____no_output_____
###Markdown
*A Note on Segmentation Quality*This concludes the segmentation of the cells in the example image. Depending on the quality you achieved in each step along the way, the final segmentation may be of greater or lesser quality (in terms of over-/under-segmentation errors).It should be noted that the segmentation will likely *never* be perfect, as there is usually a trade-off between over- and undersegmentation.This raises an important question: ***When should I stop trying to optimize my segmentation?***There is no absolute answer to this question but the best answer is probably this: ***When you can use it to address your biological questions!****Importantly, this implies that you should already have relatively clear questions in mind when you are working on the segmentation!*
###Code
# (i) Create an array of the same size and data type as the segmentation but filled with only zeros
### YOUR CODE HERE!
# (ii) Iterate over the cell IDs
### YOUR CODE HERE!
# (iii) Erode the cell's mask by 1 pixel
# Hint: 'ndi.binary_erosion'
### YOUR CODE HERE!
# (iv) Create the cell edge mask
# Hint: 'np.logical_xor'
### YOUR CODE HERE!
# (v) Add the cell edge mask to the empty array generated above, labeling it with the cell's ID
### YOUR CODE HERE!
# (vi) Visualize the result
# Note: Because the lines are so thin (1pxl wide), they may not be displayed correctly in small figures.
# You can 'zoom in' by showing a sub-region of the image which is then rendered bigger. You can
# also go back to the edge identification code and make the edges multiple pixels wide (but keep
# in mind that this will have an effect on your quantification results!).
### YOUR CODE HERE!
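# Sketch for steps (i)-(vi), assuming the watershed result 'ws' from the sketch above:
edges = np.zeros_like(ws)
for cell_id in np.unique(ws):
    cell_mask = ws == cell_id
    eroded = ndi.binary_erosion(cell_mask)         # shrink the cell by 1 pixel
    edge_mask = np.logical_xor(cell_mask, eroded)  # pixels removed by the erosion = the outline
    edges[edge_mask] = cell_id
plt.figure(figsize=(9, 9))
plt.imshow(np.ma.array(edges, mask=edges == 0), interpolation='none', cmap='prism')
plt.show()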
###Output
_____no_output_____
###Markdown
Extracting Quantitative Measurements BackgroundThe ultimate goal of image segmentation is of course the extraction of quantitative measurements, in this case on a single-cell level. Measures of interest can be based on intensity (in different channels) or on the size and shape of the cells.To exemplify how different properties of cells can be measured, we will extract the following:- Cell ID (so all other measurements can be traced back to the cell that was measured)- Mean intensity of each cell- Mean intensity at the membrane of each cell- The cell area, i.e. the number of pixels that make up the cell- The cell outline length, i.e. the number of pixels that make up the cell edge*Note: It makes sense to use smoothed/filtered/background-subtracted images for segmentation. When it comes to measurements, however, it's best to get back to the raw data!* ExerciseExtract the measurements listed above for each cell and collect them in a dictionary.Note: The ideal data structure for data like this is the `DataFrame` offered by the module `Pandas`. However, for the sake of simplicity, we will here stick with a dictionary of lists.Follow the instructions in the comments below.
###Code
# (i) Create a dictionary that contains a key-value pairing for each measurement
# The keys should be strings describing the type of measurement (e.g. 'intensity_mean') and
# the values should be empty lists. These empty lists will be filled with the results of the
# measurements.
### YOUR CODE HERE!
# (ii) Record the measurements for each cell
# Iterate over the segmented cells ('np.unique').
# Inside the loop, create a mask for the current cell and use it to extract the measurements listed above.
# Add them to the appropriate list in the dictionary using the 'append' method.
# Hint: Remember that you can get out all the values within a masked area by indexing the image
# with the mask. For example, 'np.mean(image[cell_mask])' will return the mean of all the
# intensity values of 'image' that are masked by 'cell_mask'!
### YOUR CODE HERE!
# (iii) Print the results and check that they make sense
### YOUR CODE HERE!
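# Sketch for steps (i)-(iii), assuming the watershed segmentation 'ws' and the edge image
# 'edges' from the sketches above, plus the raw image 'img'. The key names are just one
# possible naming scheme.
results = {'cell_id': [], 'int_mean': [], 'int_mem_mean': [], 'cell_area': [], 'cell_edge': []}
for cell_id in np.unique(ws):
    cell_mask = ws == cell_id
    edge_mask = np.logical_and(cell_mask, edges > 0)
    results['cell_id'].append(cell_id)
    results['int_mean'].append(np.mean(img[cell_mask]))      # mean intensity inside the cell
    results['int_mem_mean'].append(np.mean(img[edge_mask]))  # mean intensity at the membrane
    results['cell_area'].append(np.sum(cell_mask))           # number of pixels in the cell
    results['cell_edge'].append(np.sum(edge_mask))           # number of pixels in the outline
for key in results:
    print(key, results[key][:3], '...')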
###Output
_____no_output_____
###Markdown
Simple Analysis & Visualisation BackgroundBy extracting quantitative measurements from an image we cross over from 'image analysis' to 'data analysis'. This section briefly explains how to do basic data analysis and plotting, including boxplots, scatterplots and linear fits. It also showcases how to map data back onto the image, creating an "image-based heatmap". ExerciseAnalyze and plot the extracted data in a variety of ways.Follow the instructions in the comments below.
###Code
# (i) Familiarize yourself with the data structure of the results dict and summarize the results
# Recall that dictionaries are unordered; a dataset of interest is accessed through its key.
# In our case, the datasets inside the dict are lists of values, ordered in the same order
# as the cell IDs.
# For each dataset in the results dict, print its name (the key) along with its mean, standard
# deviation, maximum, minimum, and median. The appropriate numpy methods (e.g. 'np.median') work
# with lists just as well as with arrays.
### YOUR CODE HERE!
# (ii) Create a box plot showing the mean cell and mean membrane intensities for both channels.
# Use the function 'plt.boxplot'. Use the 'labels' keyword of 'plt.boxplot' to label the x axis with
# the corresponding key names. Feel free to play around with the various options of the boxplot
# function to make your plot look nicer. Remember that you can first call 'plt.figure' to adjust
# settings such as the size of the plot.
### YOUR CODE HERE!
# (iii) Create a scatter plot of cell outline length over cell area
# Use the function 'plt.scatter' for this. Be sure to properly label the
# plot using 'plt.xlabel' and 'plt.ylabel'.
### YOUR CODE HERE!
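# Sketch, using the 'results' dict from the measurement sketch above:
plt.figure(figsize=(7, 7))
plt.scatter(results['cell_area'], results['cell_edge'], facecolors='none', edgecolors='b')
plt.xlabel('cell area [pixels]')
plt.ylabel('cell outline length [pixels]')
plt.show()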
# BONUS: Do you understand why you are seeing the pattern this produces? Can you
# generate a 'null model' curve that assumes all cells to be circular? What is
# the result? Do you notice something odd about it? What could be the reason for
# this and how could it be fixed?
### YOUR CODE HERE!
# (iv) Perform a linear fit of membrane intensity over cell area
# Use the function 'linregress' from the module 'scipy.stats'. Be sure to read the docs to
# understand the output of this function. Print the output.
### YOUR CODE HERE!
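# Sketch: linear fit of membrane intensity over cell area, again using the 'results'
# dict from the measurement sketch above.
from scipy.stats import linregress
fit = linregress(results['cell_area'], results['int_mem_mean'])
print(fit)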
# (v) Think about the result
# Note that the fit seems to return a highly significant p-value but a very low correlation
# coefficient (r-value). Based on prior knowledge, we would not expect a linear correlation of
# this sort to be present in our data.
#
# This should prompt several questions:
# 1) What does this p-value actually mean? Check the docs of 'linregress'!
# 2) Could there be artifacts in our segmentation that bias this analysis?
#
# In general, it's always good to be very careful when doing any kind of data analysis. Make sure you
# understand the functions you are using and always check for possible errors or sources of bias!
# (vi) Overlay the linear fit onto a scatter plot
# Recall that a linear function is defined by `y = slope * x + intercept`.
# To define the line you'd like to plot, you need two values of x (the starting point and
# and the end point of the line). What values of x make sense? Can you get them automatically?
### YOUR CODE HERE!
# When you have the x-values for the starting point and end point, get the corresponding y
# values from the fit through the equation above.
### YOUR CODE HERE!
# Plot the line with 'plt.plot'. Adjust the line's properties so it is well visible.
# Note: Remember that you have to create the scatterplot before plotting the line so that
# the line will be placed on top of the scatterplot.
### YOUR CODE HERE!
# Use 'plt.legend' to add information about the line to the plot.
### YOUR CODE HERE!
# Label the plot and finally show it with 'plt.show'.
### YOUR CODE HERE!
# (vii) Map the cell area back onto the image as a 'heatmap'
# Scale the cell area data to 8bit so that it can be used as pixel intensity values.
# Hint: if the largest cell area should correspond to the value 255 in uint8, then
# the other cell areas correspond to 'cell_area * 255 / largest_cell_area'.
# Hint: To perform an operation on all cell area values at once, convert the list
# of cell areas to a numpy array.
### YOUR CODE HERE!
# Initialize a new image array; all values should be zeros, the shape should be identical
# to the images we worked with before and the dtype should be uint8.
### YOUR CODE HERE!
# Iterate over the segmented cells. In addition to the cell IDs, the for-loop should
# also include a simple counter (starting from 0) with which the area measurement can be
# accessed by indexing.
### YOUR CODE HERE!
# Mask the current cell and assign the cell's (re-scaled) area value to the cell's pixels.
### YOUR CODE HERE!
# Visualize the result as a colored semi-transparent overlay over the raw/smoothed original input image.
# BONUS: See if you can exclude outliers to make the color mapping more informative!
### YOUR CODE HERE!
###Output
_____no_output_____
###Markdown
Writing Output to Files BackgroundThe final step of the pipeline shows how to write various outputs of the pipeline to files.Data can be saved to files in a human-readable format such as text files (e.g. to import into Excel), in a format readable for other programs such as tif-images (e.g. to view in Fiji) or in a language-specific file that makes it easy to reload the data into python in the future (e.g. for further analysis). Exercise Write the generated data into a variety of different output files.Follow the instructions in the comments below.
###Code
# (i) Write one or more of the images you produced to a tif file
# Use the function 'imsave' from the 'skimage.io' module. Make sure that the array you are
# writing is of integer type. If necessary, you can use the method 'astype' for conversions,
# e.g. 'some_array.astype(np.uint8)' or 'some_array.astype(np.uint16)'. Careful when
# converting a segmentation to uint8; if there are more than 255 cells, the 8bit format
# doesn't have sufficient bit-depth to represent all cell IDs!
#
# You can also try adding the segmentation to the original image, creating an image with
# two channels, one of them being the segmentation.
#
# After writing the file, load it into Fiji and check that everything worked as intended.
### YOUR CODE HERE!
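# Sketch: write the watershed segmentation to a 16-bit tif (the file name is just an example).
from skimage.io import imsave
imsave('cell_segmentation.tif', ws.astype(np.uint16))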
# (ii) Write a figure to a png or pdf
# Recreate the scatter plot from above (with or without the regression line), then save the figure
# as a png using 'plt.savefig'. Alternatively, you can also save it to a pdf, which will create a
# vector graphic that can be imported into programs like Adobe Illustrator.
### YOUR CODE HERE!
# (iii) Save the segmentation as a numpy file
# Numpy files allow fast storage and reloading of numpy arrays. Use the function 'np.save'
# to save the array and reload it using 'np.load'.
### YOUR CODE HERE!
# (iv) Save the result dictionary as a pickle file
# Pickling is a way of generating generic files from almost any python object, which can easily
# be reloaded into python at a later point in time.
# You will need to open an empty file object using 'open' in write-bytes mode ('wb'). It's best to
# do so using the 'with'-statement (context manager) to make sure that the file object will be
# closed automatically when you are done with it.
# Use the function 'pickle.dump' from the 'pickle' module to write the results to the file.
# Hint: Refer to the python documentation for input and output to understand how file objects are
# handled in python in general.
### YOUR CODE HERE!
## Note: Pickled files can be re-loaded again as follows:
#with open('my_filename.pkl', 'rb') as infile:
# reloaded = pickle.load(infile)
# (v) Write a tab-separated text file of the results dict
# The most generic way of saving numeric results is a simple text file. It can be imported into
# pretty much any other program.
# To write normal text files, open an empty file object in write mode ('w') using the 'with'-statement.
### YOUR CODE HERE!
# Use the 'file_object.write(string)' method to write strings to the file, one line at a time,
# First, write the header of the data (the result dict keys), separated by tabs ('\t').
# It makes sense to first generate a complete string with all the headers and then write this
# string to the file as one line. Note that you will need to explicitly write 'newline' characters
# ('\n') at the end of the line to switch to the next line.
# Hint: the string method 'join' is very useful here!
### YOUR CODE HERE!
# After writing the headers, iterate over all the cells and write the result data to the file line
# by line, by creating strings similar to the header string.
### YOUR CODE HERE!
# After writing the data, have a look at the output file in a text editor or in a spreadsheet
# program like Excel.
###Output
_____no_output_____
###Markdown
Batch Processing BackgroundIn practice, we never work with just a single image, so we would like to make it possible to run our analysis pipeline for multiple images and then collect and analyze all the results. This final section of the tutorial shows how to do just that. ExerciseTo run a pipeline multiple times, it needs to be packaged into a function or - even better - as a separate module. Jupyter notebook is not well suited for this, so if you're working in a notebook, first extract your code to a `.py` file (see instructions below). If you are not working in a notebook, create a copy of your pipeline; we will modify this copy into a function that can then be called repeatedly for different images.To export a jupyter notebook as a `.py` file, use `File > Download as > Python (.py)`, then save the file. Open the resulting python script in a text editor or in an IDE like PyCharm. Let's clean the script a bit:- Remove the line `%matplotlib [inline|notebook|qt]`. It is not valid python code outside of a Jupyter notebook.- Go through the script and comment out everything related to plotting; when running a pipeline for dozens or hundreds of images, we usually do not want to generate tons of plots. Similarly, it can make sense to remove some print statments if you have many of them.- Remove the sections `Manual Thresholding` and `Connected Components Labeling`; they are not used in the final segmentation.- Remove the sections `Simple Analysis and Visualization` and `Writing Output to Files`; we will collect the output for each image when running the pipeline in a loop. That way, everything can be analyzed at once at the end. - Note that, even though we skip it here, it is often very useful to store every input file's corresponding outputs in new files. When doing so, the output files should use the name of the input file modified with an additional suffix. For example, the results extracted when analyzing `img_1.tif` might best be stored as `img_1_results.pkl`. - You can implement this approach for saving the segmentations and/or the result dicts as a *bonus* exercise!- Feel free to delete some of the background information to make the script more concise. Converting the pipeline to a function:Convert the entire pipeline into a function that accepts a directory and a filename as input, runs everything, and returns the final segmentation and the results dictionary. To do this, you must:- Add the function definition statement at the beginning of the script (after the imports)- Replace the 'hard-coded' directory path and filename by variables that are accepted by the function- Indent all the code- Add a return statement at the end Importing the function and running it for multiple input files:To actually run the pipeline function for multiple input files, we need to do the following:- Import the pipeline function from the `.py` file- Iterate over all the filenames in a directory- For each filename, call the pipeline function- Collect the returned resultsOnce you have converted your pipeline into a function as described above, you can import and run it according to the instructions below.
###Code
# (i) Test if your pipeline function actually works
# Import your function using the normal python syntax for imports, like this:
# from your_module import your_function
# Run the function and visualize the resulting segmentation. Make sure everything works as intended.
### YOUR CODE HERE!
# (ii) Get all relevant filenames from the input directory
# Use the function 'listdir' from the module 'os' to get a list of all the files
# in a directory. Find a way to filter out only the relevant input files, namely
# "example_cells_1.tif" and "example_cells_2.tif". Of course, one would usually
# do this for many more images, otherwise it's not worth the effort.
# Hint: Loop over the filenames and use if statements to decide which ones to
# keep and which ones to throw away.
### YOUR CODE HERE!
# (iii) Iterate over the input filenames and run the pipeline function
# Be sure to collect the output of the pipeline function in a way that allows
# you to trace it back to the file it came from. You could for example use a
# dictionary with the filenames as keys.
### YOUR CODE HERE!
# (iv) Recreate one of the scatterplots from above but this time with all the cells
# You can color-code the dots to indicate which file they came from. Don't forget to
# add a corresponding legend.
### YOUR CODE HERE!
###Output
_____no_output_____
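###Markdown
For reference, here is one possible shape the import and batch loop could take. This is only a hedged sketch: the module name `my_pipeline`, the function name `run_pipeline`, and the input folder are placeholders for whatever you created in the exercise above, and the function is assumed to return the segmentation and the results dictionary as described.
###Code
# Illustrative sketch only; adapt the names to your own module and function.
import os
from my_pipeline import run_pipeline  # hypothetical module and function names
input_dir = 'example_data'  # placeholder directory
# Keep only the relevant input files
filenames = [fname for fname in os.listdir(input_dir)
             if fname in ('example_cells_1.tif', 'example_cells_2.tif')]
# Run the pipeline for each file, keyed by filename so results stay traceable
all_segmentations = {}
all_results = {}
for fname in filenames:
    seg, results = run_pipeline(input_dir, fname)
    all_segmentations[fname] = seg
    all_results[fname] = results
###Output
_____no_output_____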
###Markdown
Analysis of Algorithms. *Data Structures and Information Retrieval in Python*. Copyright 2021 Allen Downey. License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) **Analysis of algorithms** is a branch of computer science that studies the performance of algorithms, especially their run time and space requirements. The practical goal of algorithm analysis is to predict the performance of different algorithms in order to guide design decisions. During the 2008 United States Presidential Campaign, candidate Barack Obama was asked to perform an impromptu analysis when he visited Google. Chief executive Eric Schmidt jokingly asked him for "the most efficient way to sort a million 32-bit integers." Obama had apparently been tipped off, because he quickly replied, "I think the bubble sort would be the wrong way to go." This is true: bubble sort is conceptually simple but slow for large datasets. The answer Schmidt was probably looking for is "radix sort". But if you get a question like this in an interview, I think a better answer is, "The fastest way to sort a million integers is to use whatever sort function is provided by the language I'm using. Its performance is good enough for the vast majority of applications, but if it turned out that my application was too slow, I would use a profiler to see where the time was being spent. If it looked like a faster sort algorithm would have a significant effect on performance, then I would look around for a good implementation of radix sort." The goal of algorithm analysis is to make meaningful comparisons between algorithms, but there are some problems:- The relative performance of the algorithms might depend on characteristics of the hardware, so one algorithm might be faster on Machine A, another on Machine B. The usual solution to this problem is to specify a **machine model** and analyze the number of steps, or operations, an algorithm requires under a given model.- Relative performance might depend on the details of the dataset. For example, some sorting algorithms run faster if the data are already partially sorted; other algorithms run slower in this case. A common way to avoid this problem is to analyze the **worst case** scenario. It is sometimes useful to analyze average case performance, but that's usually harder, and it might not be obvious what set of cases to average over.- Relative performance also depends on the size of the problem. A sorting algorithm that is fast for small lists might be slow for long lists. The usual solution to this problem is to express run time (or number of operations) as a function of problem size, and group functions into categories depending on how quickly they grow as problem size increases. The good thing about this kind of comparison is that it lends itself to simple classification of algorithms. For example, if I know that the run time of Algorithm A tends to be proportional to the size of the input, $n$, and Algorithm B tends to be proportional to $n^2$, then I expect A to be faster than B, at least for large values of $n$. This kind of analysis comes with some caveats, but we'll get to that later. Order of growth: Suppose you have analyzed two algorithms and expressed their run times in terms of the size of the input: Algorithm A takes $100n+1$ steps to solve a problem with size $n$; Algorithm B takes $n^2 + n + 1$ steps. The following table shows the run time of these algorithms for different problem sizes:
###Code
import numpy as np
import pandas as pd
n = np.array([10, 100, 1000, 10000])
table = pd.DataFrame(index=n)
table['Algorithm A'] = 100 * n + 1
table['Algorithm B'] = n**2 + n + 1
table['Ratio (B/A)'] = table['Algorithm B'] / table['Algorithm A']
table
###Output
_____no_output_____
###Markdown
At $n=10$, Algorithm A looks pretty bad; it takes almost 10 times longer than Algorithm B. But for $n=100$ they are about the same, and for larger values A is much better. The fundamental reason is that for large values of $n$, any function that contains an $n^2$ term will grow faster than a function whose leading term is $n$. The **leading term** is the term with the highest exponent. For Algorithm A, the leading term has a large coefficient, 100, which is why B does better than A for small $n$. But regardless of the coefficients, there will always be some value of $n$ where $a n^2 > b n$, for any values of $a$ and $b$. The same argument applies to the non-leading terms. Suppose the run time of Algorithm C is $n+1000000$; it would still be better than Algorithm B for sufficiently large $n$.
###Code
import numpy as np
import pandas as pd
n = np.array([10, 100, 1000, 10000])
table = pd.DataFrame(index=n)
table['Algorithm C'] = n + 1000000
table['Algorithm B'] = n**2 + n + 1
table['Ratio (C/B)'] = table['Algorithm B'] / table['Algorithm C']
table
###Output
_____no_output_____
###Markdown
In general, we expect an algorithm with a smaller leading term to be a better algorithm for large problems, but for smaller problems, there may be a **crossover point** where another algorithm is better. The following figure shows the run times (in arbitrary units) for the three algorithms over a range of problem sizes. For small problem sizes, Algorithm B is the fastest, but for large problem sizes, it is the worst. In the figure, we can see where the crossover points are.
###Code
import matplotlib.pyplot as plt
ns = np.arange(10, 1500)
ys = 100 * ns + 1
plt.plot(ns, ys, label='Algorithm A')
ys = ns**2 + ns + 1
plt.plot(ns, ys, label='Algorithm B')
ys = ns + 1_000_000
plt.plot(ns, ys, label='Algorithm C')
plt.yscale('log')
plt.xlabel('Problem size (n)')
plt.ylabel('Run time')
plt.legend();
###Output
_____no_output_____
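###Markdown
As a quick numerical check on the figure, here is a short sketch that locates the crossover points by finding the first problem size at which one run-time function overtakes another, using the same formulas as above:
###Code
# Find the first n where Algorithm B becomes slower than Algorithm A,
# and the first n where it becomes slower than Algorithm C.
ns = np.arange(10, 1500)
time_a = 100 * ns + 1
time_b = ns**2 + ns + 1
time_c = ns + 1_000_000
crossover_ab = ns[time_b > time_a][0]
crossover_bc = ns[time_b > time_c][0]
crossover_ab, crossover_bc
###Output
_____no_output_____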
###Markdown
The location of these crossover points depends on the details of the algorithms, the inputs, and the hardware, so it is usually ignored for purposes of algorithmic analysis. But that doesn't mean you can forget about it. Big O notation: If two algorithms have the same leading order term, it is hard to say which is better; again, the answer depends on the details. So for algorithmic analysis, functions with the same leading term are considered equivalent, even if they have different coefficients. An **order of growth** is a set of functions whose growth behavior is considered equivalent. For example, $2n$, $100n$ and $n+1$ belong to the same order of growth, which is written $O(n)$ in **Big-O notation** and often called **linear** because every function in the set grows linearly with $n$. All functions with the leading term $n^2$ belong to $O(n^2)$; they are called **quadratic**. The following table shows some of the orders of growth that appear most commonly in algorithmic analysis, in increasing order of badness. | Order of growth | Name ||-----------------|---------------------------|| $O(1)$ | constant || $O(\log_b n)$ | logarithmic (for any $b$) || $O(n)$ | linear || $O(n \log_b n)$ | linearithmic || $O(n^2)$ | quadratic || $O(n^3)$ | cubic || $O(c^n)$ | exponential (for any $c$) | For the logarithmic terms, the base of the logarithm doesn't matter; changing bases is the equivalent of multiplying by a constant, which doesn't change the order of growth. Similarly, all exponential functions belong to the same order of growth regardless of the base of the exponent. Exponential functions grow very quickly, so exponential algorithms are only useful for small problems. Exercise: Read the Wikipedia page on Big-O notation and answer the following questions:1. What is the order of growth of $n^3 + n^2$? What about $1000000 n^3 + n^2$? What about $n^3 + 1000000 n^2$?2. What is the order of growth of $(n^2 + n) \cdot (n + 1)$? Before you start multiplying, remember that you only need the leading term.3. If $f$ is in $O(g)$, for some unspecified function $g$, what can we say about $af+b$, where $a$ and $b$ are constants?4. If $f_1$ and $f_2$ are in $O(g)$, what can we say about $f_1 + f_2$?5. If $f_1$ is in $O(g)$ and $f_2$ is in $O(h)$, what can we say about $f_1 + f_2$?6. If $f_1$ is in $O(g)$ and $f_2$ is in $O(h)$, what can we say about $f_1 \cdot f_2$? Programmers who care about performance often find this kind of analysis hard to swallow. They have a point: sometimes the coefficients and the non-leading terms make a real difference. Sometimes the details of the hardware, the programming language, and the characteristics of the input make a big difference. And for small problems, order of growth is irrelevant. But if you keep those caveats in mind, algorithmic analysis is a useful tool. At least for large problems, the "better" algorithm is usually better, and sometimes it is *much* better. The difference between two algorithms with the same order of growth is usually a constant factor, but the difference between a good algorithm and a bad algorithm is unbounded! Example: Adding the elements of a list. In Python, most arithmetic operations are constant time; multiplication usually takes longer than addition and subtraction, and division takes even longer, but these run times don't depend on the magnitude of the operands.
Very large integers are an exception; in that case the run time increases with the number of digits. A `for` loop that iterates a list is linear, as long as all of the operations in the body of the loop are constant time. For example, adding up the elements of a list is linear:
###Code
def compute_sum(t):
total = 0
for x in t:
total += x
return total
t = range(10)
compute_sum(t)
###Output
_____no_output_____
###Markdown
The built-in function `sum` is also linear because it does the same thing, but it tends to be faster because it is a more efficient implementation; in the language of algorithmic analysis, it has a smaller leading coefficient.
###Code
%timeit compute_sum(t)
%timeit sum(t)
###Output
_____no_output_____
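###Markdown
One way to sanity-check the claimed order of growth empirically is to time `compute_sum` for inputs of increasing size; if the run time is roughly linear, multiplying the input size by 10 should multiply the run time by roughly 10. A small sketch (the exact timings will vary by machine):
###Code
from timeit import timeit
# Time compute_sum for a range of input sizes and print size alongside elapsed time
for size in [10_000, 100_000, 1_000_000]:
    data = range(size)
    elapsed = timeit(lambda: compute_sum(data), number=10)
    print(size, elapsed)
###Output
_____no_output_____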
###Markdown
Analysis of aggregated data
###Code
from pathlib import Path
import json
from math import pi
import pandas as pd
import numpy as np
from bokeh.io import output_file, show, output_notebook, export_png
from bokeh.palettes import Category20c
from bokeh.plotting import figure
from bokeh.transform import cumsum
output_notebook()
BASE_DIR = Path.cwd().parent
SETTINGS_PATH = BASE_DIR / 'config' / 'settings.json'
def get_settings():
if not SETTINGS_PATH.exists():
raise Exception('Settings file not found')
with open(str(SETTINGS_PATH)) as settings_file:
data = json.load(settings_file)
return data
def print_summary(label, data, is_pie=False, is_bar=False):
print('#################################### \n {} \n####################################'.format(label))
total = 0
plot_dict = {}
for key, value in data.items():
if isinstance(value, int):
tot = value
else:
tot = len(value['data'])
total += tot
print(key, ': ', tot)
plot_dict["{} ({})".format(key, tot)] = tot
if is_pie:
plot_pie_chart(plot_dict, label)
if is_bar:
plot_bar_chart(plot_dict, label)
print('\nTotal: ', total)
print('#################################### \n')
return total
def print_total_summary(dataset, sample):
print('Total: ', dataset, 'Percent: ', 100 * sample / dataset, " % ")
def plot_pie_chart(source, label):
data = pd.Series(source).reset_index(name='value').rename(columns={'index':'class'})
data['angle'] = data['value'] / data['value'].sum() * 2*pi
data['color'] = Category20c[len(source)]
p = figure(
plot_height=350, title="{} Class Distribution".format(label), toolbar_location=None,
tools="hover", tooltips="@class: @value", x_range=(-0.5, 1.0)
)
p.wedge(x=0, y=1, radius=0.4,
start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'),
line_color="white", fill_color='color', legend='class', source=data)
p.axis.axis_label=None
p.axis.visible=False
p.grid.grid_line_color = None
show(p)
export_png(p, '{} pie.png'.format(label))
def plot_bar_chart(source, label):
x = list(source.keys())
y = list(source.values())
p = figure(
x_range=x, title="{} Class Distribution".format(label),
toolbar_location=None, tools="", plot_width=800
)
p.vbar(x=x, top=y, width=0.9)
# p.hbar(y=y, left='Time_min', right='Time_max', height=0.4, source=source)
p.xgrid.grid_line_color = None
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xaxis.major_label_orientation = 1
show(p)
export_png(p, '{} bar.png'.format(label))
###Output
_____no_output_____
###Markdown
Activity Net
###Code
def get_activity_net():
SETTINGS = get_settings()
JSON_PATH = Path(SETTINGS['activity_net']['json'])
if not JSON_PATH.exists():
raise Exception('Activity Net JSON Path does not exist')
with open(str(JSON_PATH)) as settings_file:
data = json.load(settings_file)
    assert data['version'] == 'VERSION 1.3'
# Getting only dance taxonomies
dance_taxonomy = list(filter(lambda x: x.get('parentName') == 'Dancing', data['taxonomy']))
result = dict([ (taxonomy['nodeName'], {'meta': taxonomy, 'data': []}) for taxonomy in dance_taxonomy])
for key, value in data['database'].items():
for annotation in value['annotations']:
label = annotation['label']
if result.get(label):
result[label]['data'].append({ **value, 'key': key, 'source': 'activity_net' })
                # Add each matching video only once, even if several annotations carry the same label
                break
# Getting the stats break down
total = print_summary('Activity Net', result, is_pie=True)
print_total_summary(len(data['database'].keys()), total)
return result
###Output
_____no_output_____
###Markdown
Kinetics
###Code
def get_kinetics_individual(source):
SETTINGS = get_settings()
version = str(SETTINGS['kinetics']['default'])
JSON_PATH = Path(SETTINGS['kinetics'][version]['json']) / '{}.json'.format(source)
if not JSON_PATH.exists():
raise Exception('Kinetics Net JSON Path does not exist')
with open(str(JSON_PATH)) as settings_file:
data = json.load(settings_file)
return data
def get_kinetics_categories():
SETTINGS = get_settings()
JSON_PATH = Path(SETTINGS['kinetics']['categories'])
if not JSON_PATH.exists():
raise Exception('Categories does not exist')
with open(str(JSON_PATH)) as settings_file:
data = json.load(settings_file)
return data
def get_kinetics_classes():
SETTINGS = get_settings()
JSON_PATH = Path(SETTINGS['kinetics']['classes'])
if not JSON_PATH.exists():
raise Exception('Classes does not exist')
with open(str(JSON_PATH)) as settings_file:
data = json.load(settings_file)
return data
def get_kinetics_video_data():
val_data = get_kinetics_individual('val')
train_data = get_kinetics_individual('train')
test_data = get_kinetics_individual('test')
combined = { **val_data, **train_data, **test_data }
    assert len(combined.keys()) == len(val_data.keys()) + len(train_data.keys()) + len(test_data.keys())
# key = next(iter(combined.keys()))
# print(combined[key])
classes = set([value['annotations']['label'].lower() for key, value in combined.items()])
# excluded = set(default_classes) - set(classes)
# print(len(excluded))
# print(excluded)
return combined
def get_kinetics():
result = get_kinetics_video_data()
default_classes = get_kinetics_classes()
categories = get_kinetics_categories()
dance_dict = dict([(dance, { 'data': []}) for dance in categories['dancing']])
for key, value in result.items():
label = value['annotations']['label']
if dance_dict.get(label):
dance_dict[label]['data'].append({
**value,
'source': 'kinetics'
})
total = print_summary('Kinetics', dance_dict, is_bar=True)
print_total_summary(len(result.keys()), total)
return dance_dict
def run():
get_activity_net()
get_kinetics()
run()
###Output
####################################
Activity Net
####################################
Tango : 92
Cheerleading : 143
Cumbia : 86
Breakdancing : 107
Belly dance : 75
###Markdown
UCF-101
###Code
def get_ucf():
SETTINGS = get_settings()
DATA_PATH = Path(SETTINGS['ucf']['data'])
if not DATA_PATH.exists():
raise Exception('Activity Net Data Path does not exist')
SALSA_SPIN_PATH = DATA_PATH / 'SalsaSpin'
total_salsa_spin = len([x for x in SALSA_SPIN_PATH.iterdir()])
total = 0
for dir in DATA_PATH.iterdir():
if dir.is_dir():
t = len([x for x in dir.iterdir()])
total += t
# print(dir.name, t)
print(total_salsa_spin, total, 100 * total_salsa_spin / total)
get_ucf()
def get_lets_dance():
SETTINGS = get_settings()
DATA_PATH = Path(SETTINGS['lets_dance']['rgb_data'])
if not DATA_PATH.exists():
raise Exception('Lets Dance RGB Path does not exist')
original = ['ballet', 'flamenco', 'latin', 'square', 'tango', 'breakdancing', 'foxtrot', 'quickstep', 'swing', 'waltz']
recent = [x.name for x in DATA_PATH.iterdir()]
# print(sorted(recent), len(recent), sorted(original))
# print(set(recent) - set(original))
META_DATA_PATH = Path(SETTINGS['lets_dance']['meta'])
if not META_DATA_PATH.exists():
raise Exception('Lets Dance Meta Path does not exist')
data_dict = {}
with open(str(META_DATA_PATH)) as f:
data = [x.strip() for x in f.readlines() if '.jpg' in x]
print(len(data))
print(data[:5])
for index, item in enumerate(data):
_, dance, filename = item.split("/")
# print(filename)
if not data_dict.get(dance):
data_dict[dance] = {}
seg = filename.split("_")
z = seg.pop().split(".")[0]
y = seg.pop()
uuid = "_".join(seg)
uuid = "{}___{}".format(uuid, y)
x = seg[-1]
# print(uuid, y, z)
if not data_dict[dance].get(uuid):
data_dict[dance][uuid] = {'y': [], 'z': []}
if y not in data_dict[dance][uuid]['y']:
data_dict[dance][uuid]['y'].append(y)
data_dict[dance][uuid]['z'].append(z)
data_dict[dance][uuid]['z'] = sorted(data_dict[dance][uuid]['z'])
frame_len = []
data_display = {}
i = 0
for key, value in data_dict.items():
key_frame = []
y = []
for uuid, f in value.items():
            # Record the number of frames for this clip, per dance and overall
            key_frame.append(len(f['z']))
            frame_len.append(len(f['z']))
y.append(f['y'])
if len(f['y']) > 1:
print(len(f['y']))
data_display[key] = len(key_frame)
# print(key, len(value.keys()), 'Mean: ', int(np.mean(key_frame)), 'Std: ', int(np.std(key_frame)))
# if i > 2:
# break
if(i % 1000 == 0):
print(uuid)
i += 1
print_summary("Let's dance", data_display, is_bar=True)
total_list = [v for v in data_display.values()]
print(np.mean(total_list))
print(np.std(total_list))
print(np.sum(total_list))
get_lets_dance()
###Output
412668
['rgb/tap/5xxTkB5bGy4_046_0026.jpg', 'rgb/tap/Rl88sW_rtv0_115_0249.jpg', 'rgb/tap/7Tftcimjo5o_210_0168.jpg', 'rgb/tap/5xxTkB5bGy4_046_0081.jpg', 'rgb/tap/ISeCp56ud4I_056_0165.jpg']
OAfHveS6cMw___052
####################################
Let's dance
####################################
tap : 95
ballet : 89
break : 95
foxtrot : 79
tango : 80
jive : 106
square : 97
waltz : 80
swing : 95
latin : 90
rumba : 94
quickstep : 82
samba : 97
pasodoble : 98
flamenco : 88
cha : 98
###Markdown
Array API Comparison Notebook dependencies and initial setup...
###Code
import os
import pandas
import numpy as np
import matplotlib.pyplot as plt
# Adjust the default figure size:
plt.rcParams["figure.figsize"] = (20,10)
###Output
_____no_output_____
###Markdown
Find the root project directory...
###Code
# Determine the current working directory:
dir = os.getcwd()
# Walk the parent directories looking for a `package.json` file located in the root directory...
child = ''
while (child != dir):
spath = os.path.join(dir, 'package.json')
if (os.path.exists(spath)):
root_dir = dir
break
child = dir
dir = os.path.dirname(dir)
###Output
_____no_output_____
###Markdown
Resolve the directory containing data files...
###Code
data_dir = os.path.join(root_dir, 'data')
###Output
_____no_output_____
###Markdown
* * * Overview The following array libraries were initially analyzed:- [**NumPy**][numpy]: serves as the reference API against which all other array libraries are compared.- [**CuPy**][cupy]- [**Dask.array**][dask-array]- [**JAX**][jax]- [**MXNet**][mxnet]- [**PyTorch**][pytorch]- [**rnumpy**][rnumpy]: an opinionated curation of NumPy APIs, serving as an exercise in evaluating what is most "essential" (i.e., the smallest set of building block functionality on which most array functionality can be built).- [**PyData/Sparse**][pydata-sparse]- [**TensorFlow**][tensorflow]The data from this analysis can be found in the "join" dataset below.From the initial array library list, the following array libraries were subsequently analyzed in order to determine relatively common APIs:- [**NumPy**][numpy]- [**CuPy**][cupy]- [**Dask.array**][dask-array]- [**JAX**][jax]- [**MXNet**][mxnet]- [**PyTorch**][pytorch]- [**TensorFlow**][tensorflow][**PyData/Sparse**][pydata-sparse] was omitted due to insufficient and relatively nascent API coverage. [**rnumpy**][rnumpy] was omitted due to its nature as an intellectual exercise exploring what a minimal API could look like, rather than a ubiquitous library having widespread usage.In order to understand array API usage by downstream libraries, the following downstream libraries were analyzed (for additional information, see the [Python API Record][python-api-record] tooling repository):- [**Dask.array**][dask-array]- [**Matplotlib**][matplotlib]- [**pandas**][pandas]- [**scikit-image**][scikit-image] (alias: `skimage`)- [**xarray**][xarray][cupy]: https://docs-cupy.chainer.org/en/stable/reference/comparison.html[dask-array]: https://docs.dask.org/en/latest/array-api.html[jax]: https://jax.readthedocs.io/en/latest/[mxnet]: https://numpy.mxnet.io/api/deepnumpy[numpy]: https://docs.scipy.org/doc/numpy[pydata-sparse]: https://github.com/pydata/sparse[pytorch]: https://pytorch.org/docs/stable/[rnumpy]: https://github.com/Quansight-Labs/rnumpy[tensorflow]: https://www.tensorflow.org/api_docs/python[matplotlib]: https://matplotlib.org/[pandas]: https://pandas.pydata.org/[scikit-image]: https://scikit-image.org/ [xarray]: https://xarray.pydata.org/en/latest/[python-api-record]: https://github.com/data-apis/python-api-record * * * Datasets This notebook contains the following datasets... Categories Load a table mapping NumPy APIs to a usage "category"...
###Code
CATEGORIES = pandas.read_csv(os.path.join(data_dir, 'raw', 'numpy_categories.csv')).fillna(value='(other)')
###Output
_____no_output_____
###Markdown
Compute the number of rows, which will inform us as to the number of NumPy APIs...
###Code
NUM_APIS = len(CATEGORIES.index)
NUM_APIS
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
CATEGORIES.head()
###Output
_____no_output_____
###Markdown
In the above table, the first column corresponds to the NumPy API (arranged in alphabetical order). The second column corresponds to a high-level category (as inspired by categorization found in [**rnumpy**][rnumpy]). The third column corresponds to a subcategory of the respective value in the second column. The categories are as follows:- `binary_ops`: APIs for performing bitwise operations- `creation`: APIs for array creation- `datetime`: APIs for manipulating dates and times- `indexing`: APIs for array indexing- `io`: APIs for loading and writing data- `linalg`: APIs for performing linear algebra operations (e.g., dot product, matrix multiplication, etc.)- `logical`: APIs for logical operations (e.g., element-wise comparisons)- `manipulation`: APIs for array manipulation (e.g., reshaping and joining arrays)- `math`: APIs for basic mathematical functions (e.g., element-wise elementary functions)- `polynomials`: APIs for evaluating polynomials- `random`: APIs for pseudorandom number generation- `sets`: APIs for performing set operations (e.g., union, intersection, complement, etc.)- `signal_processing`: APIs for performing signal processing (e.g., FFTs)- `sorting`: APIs for sorting array elements- `statistics`: APIs for computing statistics (e.g., reductions such as computing the mean, variance, and standard deviation)- `string`: APIs for operating on strings- `utilities`: general utilities (e.g., displaying an element's binary representation)- `(other)`: APIs not categorized (or subcategorized). API categorization was manually compiled based on personal judgment and is undoubtedly imperfect.[rnumpy]: https://github.com/Quansight-Labs/rnumpy
###Code
CATEGORY_NAMES = [
'(other)',
'binary_ops',
'creation',
'datetime',
'indexing',
'io',
'linalg',
'logical',
'manipulation',
'math',
'polynomials',
'random',
'sets',
'signal_processing',
'sorting',
'statistics',
'string',
'utilities'
]
###Output
_____no_output_____
###Markdown
Of the list of category names, we can define a subset of "core" categories (again, based on personal judgment)...
###Code
CORE_CATEGORY_NAMES = [
'creation',
'indexing',
'linalg',
'logical',
'manipulation',
'math',
'signal_processing', # mainly because of FFT
'sorting',
'statistics'
]
NON_CORE_CATEGORY_NAMES = np.setdiff1d(CATEGORY_NAMES, CORE_CATEGORY_NAMES).tolist()
###Output
_____no_output_____
###Markdown
From the category data above, we can determine the relative composition of the NumPy API...
###Code
category_breakdown = CATEGORIES.groupby(by=['category', 'subcategory']).count()
category_breakdown
###Output
_____no_output_____
###Markdown
We can visualize the relative composition for top-level categories as follows
###Code
category_count = CATEGORIES.loc[:,['name','category']].groupby(by='category').count().sort_values(by='name', ascending=True)
category_count.plot.barh()
###Output
_____no_output_____
###Markdown
If we omit functions which are not in "core" categories, we arrive at the following API frequency distribution...
###Code
# Compute the total number of non-"core" NumPy APIs:
non_core_categories_num_apis = category_count.loc[NON_CORE_CATEGORY_NAMES,:].sum()
# Create a DataFrame containing only NumPy APIs considered "core" and compute the empirical frequency distribution:
core_category_distribution = category_count.drop(index=NON_CORE_CATEGORY_NAMES) / (NUM_APIS-non_core_categories_num_apis)
core_category_distribution.sort_values(by='name', ascending=False)
core_category_distribution.sort_values(by='name', ascending=True).plot.barh()
###Output
_____no_output_____
###Markdown
NumPy Methods Load a table mapping NumPy `ndarray` methods to equivalent top-level NumPy APIs...
###Code
METHODS_TO_FUNCTIONS = pandas.read_csv(os.path.join(data_dir, 'raw', 'numpy_methods_to_functions.csv')).fillna(value='-')
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(METHODS_TO_FUNCTIONS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
METHODS_TO_FUNCTIONS.head(10)
METHODS_TO_FUNCTIONS.tail(10)
###Output
_____no_output_____
###Markdown
Join Load API data for each array library as a single table, using NumPy as the reference API...
###Code
JOIN = pandas.read_csv(os.path.join(data_dir, 'join.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(JOIN.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
JOIN.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`. Intersection Load a table containing the API intersection (i.e., APIs implemented in **all** compared array libraries)...
###Code
INTERSECTION = pandas.read_csv(os.path.join(data_dir, 'intersection.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(INTERSECTION.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
INTERSECTION.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library.Using the API categorization data above, we can associate each NumPy API in the intersection with its respective category...
###Code
intersection_categories = pandas.merge(
INTERSECTION[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
intersection_categories.drop('numpy', axis=1, inplace=True)
intersection_categories.head()
###Output
_____no_output_____
###Markdown
From the previous table, we can compute the category composition of the intersection, which is as follows:
###Code
intersection_category_count = intersection_categories.loc[:,['name', 'category']].fillna(value='(other)').groupby(by='category').count().sort_values(by='name', ascending=False)
intersection_category_count
###Output
_____no_output_____
###Markdown
From which we can compute the empirical distribution...
###Code
intersection_category_distribution = intersection_category_count / intersection_category_count.sum()
intersection_category_distribution
###Output
_____no_output_____
###Markdown
whereby- `~50%` are basic element-wise mathematical functions, such as arithmetic and trigonometric functions- `~20%` are array creation and manipulation functions- `~5%` are linear algebra functions- `~12%` are indexing and statistics
###Code
intersection_category_count.sort_values(by='name', ascending=True).plot.barh()
###Output
_____no_output_____
###Markdown
SummaryArray libraries find the most agreement in providing APIs for (1) array creation and manipulation, (2) element-wise operations for evaluating elementary mathematical functions, (3) basic summary statistics, and (4) linear algebra operations. Complement (intersection) Load a table containing the API complement (i.e., APIs **not** included in the intersection above)...
###Code
COMPLEMENT = pandas.read_csv(os.path.join(data_dir, 'complement.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMPLEMENT.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMPLEMENT.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`.Using the API categorization data above, we can associate each NumPy API in the complement with its respective category...
###Code
complement_categories = pandas.merge(
COMPLEMENT[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
complement_categories.drop('numpy', axis=1, inplace=True)
complement_categories.head()
###Output
_____no_output_____
###Markdown
Common APIs Load a table containing (relatively) common APIs (where "common" is defined as existing in **at least** `5` of the `7` compared array libraries; this dataset may be considered a weaker and more inclusive intersection)...
###Code
COMMON_APIS = pandas.read_csv(os.path.join(data_dir, 'common_apis.csv'))
###Output
_____no_output_____
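###Markdown
Equivalently, this membership rule can be checked against the `JOIN` table loaded earlier. The sketch below assumes `JOIN` contains the `numpy` column plus one column per compared library, with `NaN` marking a missing equivalent; if the table still includes columns for the omitted libraries, drop those columns first.
###Code
# Count, for each NumPy API, how many compared libraries provide an equivalent,
# and keep the APIs supported by at least 5 of them (assumes only compared-library columns remain).
library_columns = [col for col in JOIN.columns if col != 'numpy']
support_counts = JOIN[library_columns].notna().sum(axis=1)
common_from_join = JOIN[support_counts >= 5]
len(common_from_join.index)
###Output
_____no_output_____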
###Markdown
Compute the number of rows...
###Code
len(COMMON_APIS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMMON_APIS.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`.Using the API categorization data above, we can associate each NumPy API in the list of common APIs with its respective category...
###Code
common_apis_categories = pandas.merge(
COMMON_APIS[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
common_apis_categories.drop('numpy', axis=1, inplace=True)
common_apis_categories.head()
###Output
_____no_output_____
###Markdown
From the previous table, we can compute the category composition of the list of common APIs, which is as follows:
###Code
common_apis_category_count = common_apis_categories.loc[:,['name', 'category']].fillna(value='(other)').groupby(by='category').count().sort_values(by='name', ascending=False)
common_apis_category_count
###Output
_____no_output_____
###Markdown
From which we can compute the empirical distribution...
###Code
common_apis_category_distribution = common_apis_category_count / common_apis_category_count.sum()
common_apis_category_distribution
common_apis_category_count.sort_values(by='name', ascending=True).plot.barh()
###Output
_____no_output_____
###Markdown
SummaryIn addition to the categories discussed above in the `Intersection` section, array libraries find general agreement in providing APIs for (1) logical operations, (2) signal processing, and (3) indexing. Complement (common APIs) Load a table containing the complement of the above common APIs...
###Code
COMMON_COMPLEMENT = pandas.read_csv(os.path.join(data_dir, 'common_complement.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMMON_COMPLEMENT)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMMON_COMPLEMENT.head()
###Output
_____no_output_____
###Markdown
In the table above, the first column corresponds to the NumPy API (arranged in alphabetical order) and each subsequent column corresponds to an equivalent API in a respective array library. If an array library does not have an equivalent API, the row value for that array library is `NaN`.Using the API categorization data above, we can associate each NumPy API in the complement with its respective category...
###Code
common_complement_categories = pandas.merge(
COMMON_COMPLEMENT[['numpy']],
CATEGORIES[['name', 'category', 'subcategory']],
left_on='numpy',
right_on='name',
how='left'
)
common_complement_categories.drop('numpy', axis=1, inplace=True)
common_complement_categories.head()
###Output
_____no_output_____
###Markdown
Downstream Library Usage Downstream library usage was measured by running test suites for each respective downstream library and recording NumPy API calls. For further details, see the API record tooling [repository](https://github.com/data-apis/python-api-record).Load a table containing API usage data...
###Code
API_RECORD = pandas.read_csv(os.path.join(data_dir, 'vendor', 'record.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(API_RECORD.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
API_RECORD.head(10)
###Output
_____no_output_____
###Markdown
In the above table, the first column corresponds to the NumPy API (arranged in descending order according to line count), and the second column corresponds to the name of the downstream library. * * * Analysis Ranking (intersection) From the API record data, we can rank each API in the API intersection according to its relative usage and based on the following algorithm:- For each downstream library, compute the relative invocation frequency for each NumPy API based on the total number of NumPy API invocations for that library.- For each downstream library, rank NumPy APIs by invocation frequency in descending order (i.e., an API with a greater invocation frequency should have a higher rank).- For each NumPy API, use a [positional voting system](https://en.wikipedia.org/wiki/Borda_count) to tally library preferences. Here, we use a [Borda count](https://en.wikipedia.org/wiki/Borda_count) called the Dowdall system to assign points via a fractional weight scheme forming a harmonic progression. Note that this particular voting system favors APIs which have more first preferences. The assumption here is that lower relative ranks are more "noisy" and should contribute less weight to an API's ranking. Note that this can lead to scenarios where an API is used heavily by a single downstream library (and thus has a high ranking for that downstream library), but is rarely used (if at all) by other downstream libraries. In which case, that API may be ranked higher than other APIs which are used by all (or many) downstream libraries, but not heavily enough to garner enough points to rank higher. In practice, this situation does not appear common. APIs used heavily by one library are typically used heavily by several other libraries. In which case, the risk of assigning too much weight to a domain-specific use case should be minimal.The ranking data is available as a precomputed table.
###Code
INTERSECTION_RANKS = pandas.read_csv(os.path.join(data_dir, 'intersection_ranks.csv'))
###Output
_____no_output_____
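###Markdown
To make the scheme above concrete, here is a toy illustration of the Dowdall-style tally. This is not the precomputed data; the API names and call counts are made up purely to show the mechanics of the fractional-weight scoring.
###Code
# Toy example of the positional voting described above: each library ranks APIs
# by how often it calls them, and an API ranked r-th by a library earns 1/r points.
toy = pandas.DataFrame({
    'library': ['matplotlib', 'matplotlib', 'matplotlib', 'pandas', 'pandas', 'pandas'],
    'api': ['zeros', 'reshape', 'mean', 'zeros', 'reshape', 'sin'],
    'calls': [120, 80, 30, 200, 90, 10],
})
toy['rank'] = toy.groupby('library')['calls'].rank(method='first', ascending=False)
toy['points'] = 1 / toy['rank']
toy.groupby('api')['points'].sum().sort_values(ascending=False)
###Output
_____no_output_____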
###Markdown
Compute the number of rows...
###Code
len(INTERSECTION_RANKS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
INTERSECTION_RANKS.head(10)
INTERSECTION_RANKS.tail(10)
###Output
_____no_output_____
###Markdown
SummaryBased on the record data, the most commonly used NumPy APIs which are shared among **all** analyzed array libraries are those for (1) array creation (e.g., `zeros`, `ones`, etc.), (2) array manipulation (e.g., `reshape`), (3) element-wise evaluation of elementary mathematical functions (e.g., `sin`, `cos`, etc.), and (4) statistical reductions (e.g., `mean`, `var`, `std`, etc.). Ranking (common APIs) Similar to ranking the APIs found in the intersection, as done above, we can rank each API in the list of common APIs according to relative usage. The ranking data is available as a precomputed table.
###Code
COMMON_APIS_RANKS = pandas.read_csv(os.path.join(data_dir, 'common_apis_ranks.csv'))
###Output
_____no_output_____
###Markdown
Compute the number of rows...
###Code
len(COMMON_APIS_RANKS.index)
###Output
_____no_output_____
###Markdown
Preview table contents...
###Code
COMMON_APIS_RANKS.head(10)
COMMON_APIS_RANKS.tail(10)
###Output
_____no_output_____
###Markdown
SummaryBased on the record data, the most commonly used NumPy APIs which are common among analyzed array libraries are those for (1) array creation (e.g., `zeros`, `ones`, etc.), (2) array manipulation (e.g., `reshape`), (3) element-wise evaluation of elementary mathematical functions (e.g., `sin`, `cos`, etc.), and (4) statistical reductions (e.g., `amax`, `amin`, `mean`, `var`, `std`, etc.). Downstream API Usage Categories Load a precomputed table containing the API usage categories for the top `100` NumPy array APIs for each downstream library...
###Code
LIB_TOP_100_CATEGORY_STATS = pandas.read_csv(os.path.join(data_dir, 'lib_top_100_category_stats.csv'), index_col='category')
###Output
_____no_output_____
###Markdown
View table contents...
###Code
LIB_TOP_100_CATEGORY_STATS
groups = LIB_TOP_100_CATEGORY_STATS.index.values
fig, ax = plt.subplots()
index = np.arange(len(groups))
bar_width = 0.15
rects1 = plt.bar(index-(1*bar_width), LIB_TOP_100_CATEGORY_STATS['dask.array'], bar_width, label='dask.array')
rects2 = plt.bar(index-(0*bar_width), LIB_TOP_100_CATEGORY_STATS['matplotlib'], bar_width, label='matplotlib')
rects3 = plt.bar(index+(1*bar_width), LIB_TOP_100_CATEGORY_STATS['pandas'], bar_width, label='pandas')
rects4 = plt.bar(index+(2*bar_width), LIB_TOP_100_CATEGORY_STATS['skimage'], bar_width, label='skimage')
rects5 = plt.bar(index+(3*bar_width), LIB_TOP_100_CATEGORY_STATS['xarray'], bar_width, label='xarray')
plt.title('Array API Categories')
plt.xlabel('Categories')
plt.ylabel('Count')
plt.xticks(index + bar_width, groups)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Task 4
###Code
import pandas as pd
import seaborn as sns
from project_functions1 import load_and_process
dfKennedy = load_and_process("../data/processed/cleanedf1data.csv")
from project_functions2 import load_and_process
dfJordan = load_and_process("../data/processed/cleanedf1data.csv")
from project_functions3 import load_and_process
dfEvan = load_and_process("../data/processed/cleanedf1data.csv")
###Output
_____no_output_____
###Markdown
How has Point Distribution Differed Throughout the Seasons of F1?
###Code
p= sns.barplot(x="Year", y="Points", data=dfKennedy)
p.set(ylabel="Points", title="Point Distribution")
p.figure.set_size_inches(35,10)
s2=sns.stripplot(data=dfKennedy, x="Position", y="Points",order=['1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18','19','20','21','22','23','24','25','26','27','28','DQ'])
s2.set(xlabel="Position", ylabel="Points", title="Points per Position")
s2.figure.set_size_inches(10,5)
###Output
_____no_output_____
###Markdown
The first visualization shows a clear jump in point allocation in 2010; whereas up until then there was a fairly slow upward trend, a larger number of points has been given out per race in 2010 and the years after than in the years before 2010. We can also see that 2020 appears to be an outlier; this is likely because, at the time this dataset was compiled, the 2020 F1 season had not yet concluded. The second visualization displays the total points allocated per position each year. It should be noted that since there is not a fixed number of racers every year, some positions will not have scores each year. There is also, notably, a special consideration of a DQ (disqualification) one year. As expected, a higher number of points is distributed to the higher positions, which makes sense considering it is the reward for earning the position. It is interesting, however, that there appear to be two distinct patterns between positions 1 and 10 in the second visualization. We can see a lower portion which is much more saturated, where position 1 is around 150, and an upper portion which is less saturated, where position 1 is around 400. Based on our first visualization, which showed a higher point average after 2010, we can theorize that the lower portion of our scatter plot corresponds to the earlier years of F1 and the upper portion to the years after 2010. The comparison of these two visualizations allows us to conclude that more points have indeed been allocated per position starting in 2010. What Nationality has Produced the Best Drivers?
###Code
s=sns.scatterplot(data=dfJordan, x="Nationality", y="Points")
s.set(ylabel="Points", title="Points by Nationality")
s.figure.set_size_inches(18,5)
c = sns.countplot(x="Nationality", data=dfJordan)
c.set(ylabel="Number of Drivers", title="Frequency of Drivers by Nationality")
c.figure.set_size_inches(15,5)
###Output
_____no_output_____
###Markdown
Highest Cumulative Points:
###Code
dfNation = dfJordan.groupby(["Nationality"]).Points.sum().reset_index()
dfNation[dfNation["Points"]==dfNation["Points"].max()]
###Output
_____no_output_____
###Markdown
Most Drivers:
###Code
dfDriver = dfJordan.groupby(["Nationality"]).Driver.count().reset_index()
dfDriver[dfDriver["Driver"]==dfDriver["Driver"].max()]
###Output
_____no_output_____
###Markdown
Looking at the "Points by Nationality", visualization, we can clearly see which nationalities have the highest scores. The top three scoring nationalities are GBR (Great Britian), GER (Germany), and FIN (Finland) but they also appear to have the most data points. Expanding on this, we wanted to explore how many data points existed for each Nationality. This is important as it indicates how many drivers raced for each nationality. We can see in the "Frequency of Drivers by Nationality" visual that similarly, Great Britian has the highest number of drivers with over 250 but Germany and Finland have a much lower number of drivers. Instead ITA (Italy), FRA (France), and USA (United States) have a much higher number of drivers in comparison.Looking into exactly how many points Great Britian has scored, we can see that they have scored nearly 9100 points in total and have done so with 285 drivers. The high cummulative score is largely contributed to their number of drivers rather than a dominance of their drivers since German and Finnish drivers have been able to achieve similar high scores. What Teams are the Highest Cumulative Scoring Throughout History? Highest Cumulative points by Teams
###Code
dfTeam = dfEvan.groupby(["Team"]).Points.sum().reset_index()
dfTeam[dfTeam["Points"]==dfTeam["Points"].max()]
###Output
_____no_output_____
###Markdown
Total Drivers in History
###Code
dfTime = dfEvan.groupby(["Team"]).Year.count().reset_index()
dfTime = dfTime.sort_values("Year").reset_index(drop=True)
dfTime[dfTime["Year"]==dfTime["Year"].max()]
b = sns.boxplot(x="Points", y="Team", data=dfEvan)
b = sns.stripplot(x="Points", y="Team", data=dfEvan, size=2, palette="dark:black")
b.set(title="Team Point Scorings")
b.figure.set_size_inches(35, 35)
h = sns.countplot(y="Team", data=dfEvan)
h.set(ylabel="Team Name", xlabel="Number of Drivers", title="Frequency of Drivers by Team")
h.figure.set_size_inches(30,30)
###Output
_____no_output_____
###Markdown
Overall, Ferrari has scored the most cumulative points in history, with a total of 9096 points. This is largely due to their long-time presence in F1: over the 70 years of data, they have had nearly 200 drivers. Although these visualizations take a bit of zooming in to see, the first one clearly shows the distribution of the scores for each team. We see Ferrari having the most data points, while also having the most points outside of their typical range. This indicates how they have benefited from the most total points while not having the best overall driving, which places their top drivers outside of their normal distribution. In contrast, a team such as Mercedes has scored very high and continually placed as a top team, which boosts their overall point total. Looking at the second visualization, we can confirm the assumption from the previous one: team Ferrari has had the most drivers, which has led their top drivers' scores to fall outside of their normal distribution. We also see team Mercedes, which has a much higher typical score with a lower number of drivers, which indicates how impressive they have been since their F1 debut. There are also several teams that score very low, which is indicative of the turnover rate within F1 racing over the years: if a team performs poorly in its debut and shows no quick improvement, it loses sponsors and gets replaced relatively quickly. This can be seen with teams such as Prost Acer, Toro Rosso, and ZakSpeed, as they have very few drivers in history. We also see the high team turnover rate in the sheer number of teams in the history, many with few drivers having competed and a very low typical point distribution. How has the Number of Drivers Varied each Season in F1? We want to compare the number of drivers per year, but the dataset does not allow us to do that directly, so below we will create a new dataframe that consists of two columns: "Year" and "Driver Count". We will clean the data copied from f1data and keep only the first occurrence (removing duplicates) of each year and its distinct driver count, so that we are able to create a visualization of the distinct count of drivers per year.
###Code
data = dfKennedy
data['Driver Count'] = data.groupby('Year')['Year'].transform('count')
data2 = [data["Year"], data["Driver Count"]]
headers = ["Year", "Driver Count"]
driver_count = pd.concat(data2, axis=1, keys=headers)
driver_count.drop_duplicates(subset ="Year", keep = "first", inplace = True)
driver_count = driver_count.reset_index()
driver_count = driver_count.drop(columns=["index"])
bar_dpy = sns.barplot(x="Year", y="Driver Count", data=driver_count)
bar_dpy.set(title = 'Count of Drivers per Year')
bar_dpy.figure.set_size_inches(35,10)
###Output
_____no_output_____
###Markdown
Consumer Price Index analysis. By Ben Welsh. A rudimentary analysis of the Consumer Price Index published by the U.S. Bureau of Labor Statistics. It was developed to verify the accuracy of the [cpi](https://github.com/datadesk/cpi) open-source Python wrapper that eases access to the official government data. Import Python tools
###Code
import os
import pandas as pd
import altair as alt
from datetime import datetime, timedelta
###Output
_____no_output_____
###Markdown
Import the development version of this library
###Code
import os
import sys
this_dir = os.path.dirname(os.getcwd())
sys.path.insert(0, this_dir)
import cpi
###Output
_____no_output_____
###Markdown
Match category analysis published by the BLSIn an October 2018 [post](https://www.bls.gov/opub/ted/2018/consumer-prices-up-2-point-3-percent-over-year-ended-september-2018.htm) the BLS published the following chart showing the month to month percentage change in the Consumer Price Index for All Urban Consumers across a select group of categories. We will replicate it below. Query the three data series charted by the BLS
###Code
all_items = cpi.series.get(seasonally_adjusted=False).to_dataframe()
energy = cpi.series.get(items="Energy", seasonally_adjusted=False).to_dataframe()
food = cpi.series.get(items="Food", seasonally_adjusted=False).to_dataframe()
###Output
_____no_output_____
###Markdown
Write a function to prepare each series for presentation
###Code
def prep_series(df):
# Trim down to monthly values
df = df[df.period_type == 'monthly']
# Calculate percentage change year to year
df['pct_change'] = df.value.pct_change(12)
    # Keep the most recent 10 years of monthly values
    return df.sort_values("date").tail(12*10)
all_items_prepped = prep_series(all_items)
energy_prepped = prep_series(energy)
food_prepped = prep_series(food)
category_df = pd.concat([
all_items_prepped,
energy_prepped,
food_prepped
])
from datetime import date
base = alt.Chart(
category_df,
title="12-month percentage change, Consumer Price Index, selected categories"
).encode(
x=alt.X(
"date:T",
timeUnit="yearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
# A truly gnarly hack from https://github.com/altair-viz/altair/issues/187
values=list(pd.to_datetime([
'2008-10-01',
'2010-10-01',
'2012-10-01',
'2014-10-01',
'2016-10-01',
'2018-10-01'
]).astype(int) / 10 ** 6)
),
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title=None, format='%'),
scale=alt.Scale(domain=[-0.4, 0.3])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
scale=alt.Scale(range=["#423a51", "#449cb0", "#d09972"])
)
)
all_items = base.transform_filter(
alt.datum.series_items_name == 'All items'
).mark_line(strokeDash=[3, 2])
other_items = base.transform_filter(
alt.datum.series_items_name != 'All items'
).mark_line()
(all_items + other_items).properties(width=600)
###Output
_____no_output_____
###Markdown
Match monthly analysis published by the BLS.In a July 2018 [press release](https://www.bls.gov/news.release/pdf/cpi.pdf) the BLS published the following chart showing the month to month percentage change in the Consumer Price Index for All Urban Consumers, also known as the CPI-U. We will replicate it below. Query the seasonally-adjusted CPI-U, which is the variation used by the BLS in its release.
###Code
adjusted_cpiu = cpi.series.get_by_id('CUSR0000SA0').to_dataframe()
###Output
_____no_output_____
###Markdown
Add a date field
###Code
adjusted_cpiu['date'] = pd.to_datetime(adjusted_cpiu.date)
###Output
_____no_output_____
###Markdown
Filter down to monthly values
###Code
adjusted_cpiu.head()
adjusted_cpiu.period_type.value_counts()
adjusted_cpiu = adjusted_cpiu[adjusted_cpiu.period_type == 'monthly']
###Output
_____no_output_____
###Markdown
Calculate the monthly percentage change.
###Code
adjusted_cpiu['pct_change'] = (adjusted_cpiu.value.pct_change()*100)
###Output
_____no_output_____
###Markdown
Round it in the same manner as the BLS.
###Code
adjusted_cpiu['pct_change_rounded'] = adjusted_cpiu['pct_change'].round(1)
###Output
_____no_output_____
###Markdown
Trim down to the 13 most recent months of data.
###Code
last_13 = adjusted_cpiu.sort_values("date").tail(13)
###Output
_____no_output_____
###Markdown
Draw the chart.
###Code
base = alt.Chart(
last_13,
title="One-month percent change in CPI for All Urban Consumers (CPI-U), seasonally adjusted"
).properties(width=700)
bars = base.mark_bar().encode(
x=alt.X(
"date:O",
timeUnit="yearmonth",
axis=alt.Axis(title=None, labelAngle=0),
),
y=alt.Y(
"pct_change_rounded:Q",
axis=alt.Axis(title=None),
scale=alt.Scale(domain=[
last_13['pct_change'].min()-0.1,
last_13['pct_change'].max()+0.05
])
)
)
text = base.encode(
x=alt.X("date:O", timeUnit="yearmonth"),
y="pct_change_rounded:Q",
text='pct_change_rounded'
)
textAbove = text.transform_filter(alt.datum.pct_change > 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=-10
)
textBelow = text.transform_filter(alt.datum.pct_change < 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=12
)
bars + textAbove + textBelow
###Output
_____no_output_____
###Markdown
Dump the file to JSON
###Code
last_13.to_json("./last_13.json")
###Output
_____no_output_____
###Markdown
Consumer Price Index analysis. By Ben Welsh. A rudimentary analysis of the Consumer Price Index published by the U.S. Bureau of Labor Statistics. It was developed to verify the accuracy of the [cpi](https://github.com/datadesk/cpi) open-source Python wrapper that eases access to the official government data. Import Python tools
###Code
import os
import json
import warnings
import pandas as pd
import altair as alt
from datetime import date, datetime, timedelta
import altair_latimes as lat
alt.themes.register('latimes', lat.theme)
alt.themes.enable('latimes')
warnings.simplefilter("ignore")
###Output
_____no_output_____
###Markdown
Import the development version of this library
###Code
import os
import sys
this_dir = os.path.dirname(os.getcwd())
sys.path.insert(0, this_dir)
import cpi
###Output
_____no_output_____
###Markdown
Top-level numbers for the latest month
###Code
def get_last13(**kwargs):
df = cpi.series.get(**kwargs).to_dataframe()
# Filter down to monthly values
df = df[df.period_type == 'monthly']
    # Keep the last 14 months so that 13 month-over-month changes can be computed
    df = df.sort_values("date").tail(14)
# Return it
return df
def analyze_last13(df):
# Calculate the monthly percentage change
df['pct_change'] = (df.value.pct_change()*100)
    # Round it to one decimal place, as the BLS does
    df['pct_change_rounded'] = df['pct_change'].round(1)
# Get latest months
latest_month, latest_change = df.sort_values("date", ascending=False)[['date', 'pct_change_rounded']].to_records(index=False)[0]
previous_month, previous_change = df.sort_values("date", ascending=False)[['date', 'pct_change_rounded']].to_records(index=False)[1]
# Pass it back
return dict(
latest_month=latest_month,
latest_change=latest_change,
previous_month=previous_month,
previous_change=previous_change,
)
###Output
_____no_output_____
###Markdown
Query the seasonally-adjusted CPI-U, which is the variation used by the BLS in its release.
###Code
adjusted_cpiu_last13 = get_last13(seasonally_adjusted=True)
adjusted_cpi_analysis = analyze_last13(adjusted_cpiu_last13)
adjusted_cpi_analysis
adjusted_food_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="Food"))
adjusted_food_analysis
adjusted_energy_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="Energy"))
adjusted_energy_analysis
adjusted_all_less_food_and_energy_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="All items less food and energy"))
adjusted_all_less_food_and_energy_analysis
base = alt.Chart(
adjusted_cpiu_last13,
title="One-month percent change in CPI for All Urban Consumers (CPI-U), seasonally adjusted"
).properties(width=700)
bars = base.mark_bar().encode(
x=alt.X(
"date:O",
timeUnit="utcyearmonth",
axis=alt.Axis(title=None, labelAngle=0),
),
y=alt.Y(
"pct_change_rounded:Q",
axis=alt.Axis(title=None),
scale=alt.Scale(domain=[
adjusted_cpiu_last13['pct_change'].min()-0.1,
adjusted_cpiu_last13['pct_change'].max()+0.05
])
)
)
text = base.encode(
x=alt.X("date:O", timeUnit="utcyearmonth"),
y="pct_change_rounded:Q",
text='pct_change_rounded'
)
textAbove = text.transform_filter(alt.datum.pct_change > 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=-10
)
textBelow = text.transform_filter(alt.datum.pct_change < 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=12
)
bars + textAbove + textBelow
###Output
_____no_output_____
###Markdown
Get the year over year change
###Code
unadjusted_cpiu = cpi.series.get(seasonally_adjusted=False).to_dataframe()
unadjusted_cpiu_monthly = unadjusted_cpiu[unadjusted_cpiu.period_type == 'monthly'].sort_values("date", ascending=False)
unadjusted_cpiu_monthly.head(13)[[
'date',
'value'
]]
lastest_unadjusted, one_year_ago_unadjusted = pd.concat([
unadjusted_cpiu_monthly.head(1),
unadjusted_cpiu_monthly.head(13).tail(1),
]).value.tolist()
lastest_unadjusted, one_year_ago_unadjusted
yoy_change = round(((lastest_unadjusted-one_year_ago_unadjusted)/one_year_ago_unadjusted)*100, 1)
yoy_change
with open("./latest.json", "w") as fp:
fp.write(json.dumps(dict(
all=adjusted_cpi_analysis,
food=adjusted_food_analysis,
energy=adjusted_energy_analysis,
less_food_and_energy=adjusted_all_less_food_and_energy_analysis,
yoy_change=yoy_change,
)))
adjusted_cpiu_last13[~pd.isnull(adjusted_cpiu_last13.pct_change_rounded)][[
'date',
'pct_change',
'pct_change_rounded'
]].to_csv("./cpi-mom.csv", index=False)
###Output
_____no_output_____
###Markdown
Match category analysis published by the BLSIn an October 2018 [post](https://www.bls.gov/opub/ted/2018/consumer-prices-up-2-point-3-percent-over-year-ended-september-2018.htm) the BLS published the following chart showing the month to month percentage change in the Consumer Price Index for All Urban Consumers across a select group of categories. We will replicate it below. Query the data series charted by the BLS
###Code
all_items = cpi.series.get(seasonally_adjusted=False).to_dataframe()
energy = cpi.series.get(items="Energy", seasonally_adjusted=False).to_dataframe()
food = cpi.series.get(items="Food", seasonally_adjusted=False).to_dataframe()
###Output
_____no_output_____
###Markdown
Write a function to prepare each series for presentation
###Code
def prep_yoy(df):
# Trim down to monthly values
df = df[df.period_type == 'monthly']
# Calculate percentage change year to year
df['pct_change'] = df.value.pct_change(12)
# Trim down to the last 13 months
return df.sort_values("date")
all_items_prepped = prep_yoy(all_items)
energy_prepped = prep_yoy(energy)
food_prepped = prep_yoy(food)
three_cats = pd.concat([
all_items_prepped.tail(12*10),
energy_prepped.tail(12*10),
food_prepped.tail(12*10)
])
base = alt.Chart(
three_cats,
title="12-month percentage change, Consumer Price Index, selected categories"
).encode(
x=alt.X(
"date:T",
timeUnit="yearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
# A truly gnarly hack from https://github.com/altair-viz/altair/issues/187
values=list(pd.to_datetime([
'2008-11-01',
'2010-11-01',
'2012-11-01',
'2014-11-01',
'2016-11-01',
'2018-11-01'
]).astype(int) / 10 ** 6)
),
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title=None, format='%'),
scale=alt.Scale(domain=[-0.4, 0.3])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
# scale=alt.Scale(range=["#423a51", "#449cb0", "#d09972"])
)
)
all_items = base.transform_filter(
alt.datum.series_items_name == 'All items'
).mark_line(strokeDash=[3, 2])
other_items = base.transform_filter(
alt.datum.series_items_name != 'All items'
).mark_line()
(all_items + other_items).properties(width=600)
three_cats.to_csv("./three-categories-yoy.csv", index=False)
###Output
_____no_output_____
###Markdown
A similar chart with a shorter timeframe Here's another one. 
###Code
all_less_energy_and_food = cpi.series.get(items="All items less food and energy", seasonally_adjusted=False).to_dataframe()
all_less_energy_and_food_prepped = prep_yoy(all_less_energy_and_food)
two_cats = pd.concat([
all_items_prepped.tail(13),
all_less_energy_and_food_prepped.tail(13),
])
base = alt.Chart(
two_cats,
title="12-month percent change in CPI for All Urban Consumers (CPI-U), not seasonally adjusted"
).encode(
x=alt.X(
"date:T",
timeUnit="utcyearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
format="%b"
)
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title="Percent change", format='%'),
scale=alt.Scale(domain=[0.012, 0.03])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
# scale=alt.Scale(range=["#336EFF", "#B03A2E",])
)
)
line = base.mark_line(strokeWidth=0.85)
exes = base.transform_filter(alt.datum.series_items_name == 'All items').mark_point(shape="triangle-down", size=25)
points = base.transform_filter(alt.datum.series_items_name == 'All items less food and energy').mark_point(size=25, fill="#B03A2E")
(line + exes + points).properties(width=600, height=225)
two_cats.to_csv("./two-categories-yoy.csv", index=False)
###Output
_____no_output_____
###Markdown
Consumer Price Index analysisBy Ben WelshA rudimentary analysis of the Consumer Price Index published by the U.S. Bureau of Labor Statistics. It was developed to verify the accuracy of the [cpi](https://github.com/datadesk/cpi) open-source Python wrapper that eases access to the official government data. Import Python tools
###Code
import os
import json
import warnings
import pandas as pd
import altair as alt
from datetime import date, datetime, timedelta
import altair_latimes as lat
alt.themes.register('latimes', lat.theme)
alt.themes.enable('latimes')
warnings.simplefilter("ignore")
###Output
_____no_output_____
###Markdown
Import the development version of this library
###Code
import os
import sys
this_dir = os.path.dirname(os.getcwd())
sys.path.insert(0, this_dir)
import cpi
###Output
_____no_output_____
###Markdown
Top-level numbers for the latest month
###Code
def get_last13(**kwargs):
df = cpi.series.get(**kwargs).to_dataframe()
# Filter down to monthly values
df = df[df.period_type == 'monthly']
# Cut down to the last 13 months
df = df.sort_values("date").tail(14)
# Return it
return df
def analyze_last13(df):
# Calculate the monthly percentage change
df['pct_change'] = (df.value.pct_change()*100)
# Calculate the monthly percentage change
df['pct_change_rounded'] = df['pct_change'].round(1)
# Get latest months
latest_month, latest_change = df.sort_values("date", ascending=False)[['date', 'pct_change_rounded']].to_records(index=False)[0]
previous_month, previous_change = df.sort_values("date", ascending=False)[['date', 'pct_change_rounded']].to_records(index=False)[1]
# Pass it back
return dict(
latest_month=latest_month,
latest_change=latest_change,
previous_month=previous_month,
previous_change=previous_change,
)
###Output
_____no_output_____
###Markdown
Query the seasonally-adjusted CPI-U, which is the variation used by the BLS in its release.
###Code
adjusted_cpiu_last13 = get_last13(seasonally_adjusted=True)
adjusted_cpi_analysis = analyze_last13(adjusted_cpiu_last13)
adjusted_cpi_analysis
adjusted_food_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="Food"))
adjusted_food_analysis
adjusted_energy_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="Energy"))
adjusted_energy_analysis
adjusted_all_less_food_and_energy_analysis = analyze_last13(get_last13(seasonally_adjusted=True, items="All items less food and energy"))
adjusted_all_less_food_and_energy_analysis
base = alt.Chart(
adjusted_cpiu_last13,
title="One-month percent change in CPI for All Urban Consumers (CPI-U), seasonally adjusted"
).properties(width=700)
bars = base.mark_bar().encode(
x=alt.X(
"date:O",
timeUnit="utcyearmonth",
axis=alt.Axis(title=None, labelAngle=0),
),
y=alt.Y(
"pct_change_rounded:Q",
axis=alt.Axis(title=None),
scale=alt.Scale(domain=[
adjusted_cpiu_last13['pct_change'].min()-0.1,
adjusted_cpiu_last13['pct_change'].max()+0.05
])
)
)
text = base.encode(
x=alt.X("date:O", timeUnit="utcyearmonth"),
y="pct_change_rounded:Q",
text='pct_change_rounded'
)
textAbove = text.transform_filter(alt.datum.pct_change > 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=-10
)
textBelow = text.transform_filter(alt.datum.pct_change < 0).mark_text(
align='center',
baseline='middle',
fontSize=14,
dy=12
)
bars + textAbove + textBelow
###Output
_____no_output_____
###Markdown
Get the year over year change
###Code
unadjusted_cpiu = cpi.series.get(seasonally_adjusted=False).to_dataframe()
unadjusted_cpiu_monthly = unadjusted_cpiu[unadjusted_cpiu.period_type == 'monthly'].sort_values("date", ascending=False)
unadjusted_cpiu_monthly.head(13)[[
'date',
'value'
]]
lastest_unadjusted, one_year_ago_unadjusted = pd.concat([
unadjusted_cpiu_monthly.head(1),
unadjusted_cpiu_monthly.head(13).tail(1),
]).value.tolist()
lastest_unadjusted, one_year_ago_unadjusted
yoy_change = round(((lastest_unadjusted-one_year_ago_unadjusted)/one_year_ago_unadjusted)*100, 1)
yoy_change
with open("./latest.json", "w") as fp:
fp.write(json.dumps(dict(
all=adjusted_cpi_analysis,
food=adjusted_food_analysis,
energy=adjusted_energy_analysis,
less_food_and_energy=adjusted_all_less_food_and_energy_analysis,
yoy_change=yoy_change,
)))
adjusted_cpiu_last13[~pd.isnull(adjusted_cpiu_last13.pct_change_rounded)][[
'date',
'pct_change',
'pct_change_rounded'
]].to_csv("./cpi-mom.csv", index=False)
###Output
_____no_output_____
###Markdown
Match category analysis published by the BLSIn an October 2018 [post](https://www.bls.gov/opub/ted/2018/consumer-prices-up-2-point-3-percent-over-year-ended-september-2018.htm) the BLS published the following chart showing the month to month percentage change in the Consumer Price Index for All Urban Consumers across a select group of categories. We will replicate it below. Query the data series charted by the BLS
###Code
all_items = cpi.series.get(seasonally_adjusted=False).to_dataframe()
energy = cpi.series.get(items="Energy", seasonally_adjusted=False).to_dataframe()
food = cpi.series.get(items="Food", seasonally_adjusted=False).to_dataframe()
###Output
_____no_output_____
###Markdown
Write a function to prepare each series for presentation
###Code
def prep_yoy(df):
# Trim down to monthly values
df = df[df.period_type == 'monthly']
# Calculate percentage change year to year
df['pct_change'] = df.value.pct_change(12)
# Trim down to the last 13 months
return df.sort_values("date")
all_items_prepped = prep_yoy(all_items)
energy_prepped = prep_yoy(energy)
food_prepped = prep_yoy(food)
three_cats = pd.concat([
all_items_prepped.tail(12*10),
energy_prepped.tail(12*10),
food_prepped.tail(12*10)
])
base = alt.Chart(
three_cats,
title="12-month percentage change, Consumer Price Index, selected categories"
).encode(
x=alt.X(
"date:T",
timeUnit="yearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
# A truly gnarly hack from https://github.com/altair-viz/altair/issues/187
values=list(pd.to_datetime([
'2008-11-01',
'2010-11-01',
'2012-11-01',
'2014-11-01',
'2016-11-01',
'2018-11-01'
]).astype(int) / 10 ** 6)
),
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title=None, format='%'),
scale=alt.Scale(domain=[-0.4, 0.3])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
# scale=alt.Scale(range=["#423a51", "#449cb0", "#d09972"])
)
)
all_items = base.transform_filter(
alt.datum.series_items_name == 'All items'
).mark_line(strokeDash=[3, 2])
other_items = base.transform_filter(
alt.datum.series_items_name != 'All items'
).mark_line()
(all_items + other_items).properties(width=600)
three_cats.to_csv("./three-categories-yoy.csv", index=False)
###Output
_____no_output_____
###Markdown
A similar chart with a shorter timeframe Here's another one. 
###Code
all_less_energy_and_food = cpi.series.get(items="All items less food and energy", seasonally_adjusted=False).to_dataframe()
all_less_energy_and_food_prepped = prep_yoy(all_less_energy_and_food)
two_cats = pd.concat([
all_items_prepped.tail(13),
all_less_energy_and_food_prepped.tail(13),
])
base = alt.Chart(
two_cats,
title="12-month percent change in CPI for All Urban Consumers (CPI-U), not seasonally adjusted"
).encode(
x=alt.X(
"date:T",
timeUnit="utcyearmonth",
axis=alt.Axis(
title=None,
labelAngle=0,
grid=False,
format="%b"
)
),
y=alt.Y(
"pct_change:Q",
axis=alt.Axis(title="Percent change", format='%'),
scale=alt.Scale(domain=[0.012, 0.03])
),
color=alt.Color(
"series_items_name:N",
legend=alt.Legend(title="Category"),
# scale=alt.Scale(range=["#336EFF", "#B03A2E",])
)
)
line = base.mark_line(strokeWidth=0.85)
exes = base.transform_filter(alt.datum.series_items_name == 'All items').mark_point(shape="triangle-down", size=25)
points = base.transform_filter(alt.datum.series_items_name == 'All items less food and energy').mark_point(size=25, fill="#B03A2E")
(line + exes + points).properties(width=600, height=225)
two_cats.to_csv("./two-categories-yoy.csv", index=False)
###Output
_____no_output_____
###Markdown
Simple counts
###Code
from delicious_treat.analyser import Analyser
# Create an analyser for Pippy's messages
analyser = Analyser(pippy_messages)
analyser.analyse()
fd = analyser.freq_dist(pos=True)
fd.conditions()
fd[''].most_common(40)
analyser.filter_messages('sex')
import math
from datetime import datetime
import matplotlib.dates as mdate
def get_date_as_datetime(dt):
return datetime.combine(dt.date(), datetime.min.time())
def get_bins(times, duration):
first_date = get_date_as_datetime(times.min())
total_seconds = (get_date_as_datetime(times.max()) - first_date).total_seconds()
bin_count = math.ceil(total_seconds / duration.total_seconds())
return [mdate.epoch2num(datetime.timestamp(first_date + step * duration)) for step in range(bin_count)]
import matplotlib.pyplot as plt
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
fig, axes = plt.subplots(2, 1)
# Get bins from all the messages
bins = get_bins(analyser.messages.time, timedelta(days=1))
# Plot a filtered subset
total, _, _ = axes[0].hist(analyser.messages.time, bins=bins)
sub, _, _ = axes[0].hist(analyser.filter_messages('sex').time, bins=bins)
# Plot scaled
axes[1].bar(bins[:-1], sub / total * 100)
###Output
_____no_output_____
###Markdown
Analysis for Senior Undergradute Thesis
###Code
import sys
sys.path.append("/Users/joshua/Developer/CognitiveSubtypes")
from src.data.build import DataBuilder
from src.data.dataset import Dataset
from src.models.cluster import BestKMeans
from src.models.classify import ClassifierSearch, get_feature_importances
from src.utils import get_array_counts
import src.visualization.figures as figures
import src.visualization.tables as tables
import numpy as np
# set random seed to ensure replicability of analyses
np.random.seed(0)
###Output
_____no_output_____
###Markdown
1 - Create Dataset
###Code
db = DataBuilder()
db.write_csv()
###Output
_____no_output_____
###Markdown
2 - Preprocessing and Preliminary Analysis 2.1 - Load Dataset
###Code
data = Dataset.load_preprocess()
###Output
_____no_output_____
###Markdown
2.2 - Compare Patients and Controls (Table 1)
###Code
table1 = tables.KWTestsPvC(data)
table1.get()
table1.save('table1.csv')
###Output
_____no_output_____
###Markdown
3 - Clustering 3.1 - Fit K-Means Models
###Code
clu = BestKMeans()
clu.fit(data.cognitive)
###Output
_____no_output_____
###Markdown
3.2 - Plot Metrics (Figure 2)
###Code
fig2 = figures.KMeansScores(clu)
fig2.plot()
fig2.save("figure2.jpg")
###Output
_____no_output_____
###Markdown
3.3 - Assign Target From Selected Model
###Code
data.train.target = clu.predict(data.train.cognitive, k=2)
data.test.target = clu.predict(data.test.cognitive, k=2)
get_array_counts(data.target)
###Output
_____no_output_____
###Markdown
3.4 - Compare Clusters (Table 2)
###Code
table2 = tables.KWTestsClusters(data)
table2.get()
table2.save("table2.csv")
###Output
_____no_output_____
###Markdown
4 - Predict Clusters 4.1 - Fit Classifiers
###Code
cs = ClassifierSearch()
cs.fit(data)
###Output
Begin fitting best classifier for model: KNeighborsClassifier
Done!
roc_auc: 0.8813706740933707
Begin fitting best classifier for model: RidgeClassifier
Done!
roc_auc: 0.8224161424855133
Begin fitting best classifier for model: RandomForestClassifier
Done!
roc_auc: 0.9008564619258926
###Markdown
4.2 - Plot Classifier Performance (Figure 3)
###Code
fig3 = figures.AUCScores(cs)
fig3.plot()
fig3.save("figure3.jpg")
###Output
_____no_output_____
###Markdown
4.3 - Plot ROC Curve (Figure 4)
###Code
fig4 = figures.ROCCurve(cs, data)
fig4.plot()
fig4.save("figure4.jpg")
###Output
_____no_output_____
###Markdown
4.4 - Plot Feature Importances (Figure 5)
###Code
feature_importances = get_feature_importances(cs.best_classifier, data.imaging_feature_names)
feature_importances.to_csv("/Users/joshua/Developer/CognitiveSubtypes/data/rois.csv", index=True, index_label='label')
!Rscript /Users/joshua/Developer/CognitiveSubtypes/src/visualization/ggseg.R
###Output
[?25h[?25h── [1mAttaching packages[22m ─────────────────────────────────────── tidyverse 1.3.1 ──
[32m✔[39m [34mtibble [39m 3.1.6 [32m✔[39m [34mdplyr [39m 1.0.8
[32m✔[39m [34mtidyr [39m 1.2.0 [32m✔[39m [34mstringr[39m 1.4.0
[32m✔[39m [34mreadr [39m 2.1.2 [32m✔[39m [34mforcats[39m 0.5.1
[32m✔[39m [34mpurrr [39m 0.3.4
── [1mConflicts[22m ────────────────────────────────────────── tidyverse_conflicts() ──
[31m✖[39m [34mdplyr[39m::[32mfilter()[39m masks [34mstats[39m::filter()
[31m✖[39m [34mdplyr[39m::[32mlag()[39m masks [34mstats[39m::lag()
[?25h[?25h[?25h[?25hmerging atlas and data by 'label'
[?25hSaving 7 x 7 in image
merging atlas and data by 'label'
[?25h[?25h
###Markdown
5 - Appendices 5.1 - Distributions of Cognitive Variables (Appendix A)
###Code
appendix_a = figures.Transforms()
appendix_a.plot()
appendix_a.save("appendix_a.jpg")
###Output
_____no_output_____
###Markdown
5.2 - Diagnoses (Appendix B)
###Code
appendix_b = tables.Diagnoses()
appendix_b.get()
appendix_b.save("appendix_b.csv")
###Output
_____no_output_____
###Markdown
5.3 - Compare SSD and Affective (Appendix C)
###Code
appendix_c = tables.KWTestsDX(data)
appendix_c.get()
appendix_c.save("appendix_c.csv")
###Output
_____no_output_____
###Markdown
5.4 - Violin Plot (Appendix D)
###Code
appendix_d = figures.ViolinPlot(data)
appendix_d.plot()
appendix_d.save('appendix_d.jpg')
###Output
_____no_output_____
###Markdown
5.3 - Feature Importances (Appendix E)
###Code
appendix_e = figures.TopFeatures(cs.best_classifier, data)
appendix_e.plot()
appendix_e.save('appendix_e.jpg')
###Output
_____no_output_____ |
ch1/1.2/ex.1.19.ipynb | ###Markdown
Expressing the transformation $T_{pq}$ as a matrix, $$T_{pq} = \begin{pmatrix}(p+q) & q \\ q & p\end{pmatrix}$$ it acts on the state as $$\begin{pmatrix}a \\ b\end{pmatrix}\leftarrow T_{pq} \begin{pmatrix}a \\ b\end{pmatrix}$$ Applying $T_{pq}$ twice gives $$T_{p'q'}=T_{pq}^2=\begin{pmatrix}(p^2+q^2)+(2pq+q^2) & (2pq+q^2) \\ (2pq+q^2) & (p^2+q^2)\end{pmatrix}$$ Therefore $$\begin{cases}p'=(p^2+q^2)\\ q'=(2pq+q^2)\end{cases}$$
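As a quick check: with $p=0,\ q=1$ (the ordinary Fibonacci step) this gives $p'=1$ and $q'=1$, and applying $T_{p'q'}$ once to $(a, b)=(1, 0)$ yields $(2, 1)$, exactly the same result as applying $T_{01}$ twice.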
###Code
(define (square x) (* x x))
(define (fib n)
(fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
(cond ((= count 0) b)
((even? count)
(fib-iter a
b
(+ (square p) (square q)) ; compute p'
(+ (* 2 p q) (square q)) ; compute q'
(/ count 2)))
(else (fib-iter (+ (* b q) (* a q) (* a p))
(+ (* b p) (* a q))
p
q
(- count 1)))))
; Check the results
(display (format "~a\n" (fib 0)))
(display (format "~a\n" (fib 1)))
(display (format "~a\n" (fib 2)))
(display (format "~a\n" (fib 3)))
(display (format "~a\n" (fib 4)))
(display (format "~a\n" (fib 5)))
(display (format "~a\n" (fib 6)))
(display (format "~a\n" (fib 7)))
(display (format "~a\n" (fib 8)))
(display (format "~a\n" (fib 9)))
(display (format "~a\n" (fib 10)))
(display (format "~a\n" (fib 11)))
(display (format "~a\n" (fib 12)))
###Output
0
1
1
2
3
5
8
13
21
34
55
89
144
###Markdown
Alternatively, let $$T = \begin{pmatrix}p & q \\ r & s\end{pmatrix}$$ with $$\begin{pmatrix}a \\ b\end{pmatrix}\leftarrow T \begin{pmatrix}a \\ b\end{pmatrix}$$ and use $$T^2 = \begin{pmatrix}p^2+qr & pq+qs \\ rp+sr & rq+s^2\end{pmatrix}$$ The scheme in the exercise statement tracks two variables while this one tracks four, so it is somewhat slower. On the other hand, it reduces directly to matrix exponentiation, so the same idea as fast integer exponentiation applies, which makes it easier to understand and arguably easier to come up with.
###Code
; Matrix version: work with products of the matrix ((p, q), (r, s))
(define (fib n)
(fib-iter 1 0 1 1 1 0 n))
(define (fib-iter a b p q r s count)
(cond ((= count 0) b)
((even? count)
(fib-iter a
b
(+ (* p p) (* q r)) ; compute p
(+ (* p q) (* q s)) ; compute q
(+ (* r p) (* s r)) ; compute r
(+ (* r q) (* s s)) ; compute s
(/ count 2)))
(else (fib-iter (+ (* p a) (* q b))
(+ (* r a) (* s b))
p
q
r
s
(- count 1)))))
; Check the results
(display (format "~a\n" (fib 0)))
(display (format "~a\n" (fib 1)))
(display (format "~a\n" (fib 2)))
(display (format "~a\n" (fib 3)))
(display (format "~a\n" (fib 4)))
(display (format "~a\n" (fib 5)))
(display (format "~a\n" (fib 6)))
(display (format "~a\n" (fib 7)))
(display (format "~a\n" (fib 8)))
(display (format "~a\n" (fib 9)))
(display (format "~a\n" (fib 10)))
(display (format "~a\n" (fib 11)))
(display (format "~a\n" (fib 12)))
###Output
0
1
1
2
3
5
8
13
21
34
55
89
144
|
lec/lec10.ipynb | ###Markdown
Lecture 10 Apply
###Code
staff = Table().with_columns(
'Employee', make_array('Jim', 'Dwight', 'Michael', 'Creed'),
'Birth Year', make_array(1985, 1988, 1967, 1904)
)
staff
def greeting(person):
return 'Dunder Mifflin, this is ' + person
greeting('Pam')
greeting('Erin')
staff.apply(greeting, 'Employee')
def name_and_age(name, year):
age = 2019 - year
return name + ' is ' + str(age)
staff.apply(name_and_age, 'Employee', 'Birth Year')
###Output
_____no_output_____
###Markdown
Prediction
###Code
galton = Table.read_table('galton.csv')
galton
galton.scatter('midparentHeight', 'childHeight')
galton.scatter('midparentHeight', 'childHeight')
plots.plot([67.5, 67.5], [50, 85], color='red', lw=2)
plots.plot([68.5, 68.5], [50, 85], color='red', lw=2);
nearby = galton.where('midparentHeight', are.between(67.5, 68.5))
nearby_mean = nearby.column('childHeight').mean()
nearby_mean
galton.scatter('midparentHeight', 'childHeight')
plots.plot([67.5, 67.5], [50, 85], color='red', lw=2)
plots.plot([68.5, 68.5], [50, 85], color='red', lw=2)
plots.scatter(68, nearby_mean, color='red', s=50);
def predict(h):
nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2))
return nearby.column('childHeight').mean()
predict(68)
predict(70)
predict(73)
predicted_heights = galton.apply(predict, 'midparentHeight')
predicted_heights
galton = galton.with_column('predictedHeight', predicted_heights)
galton.select(
'midparentHeight', 'childHeight', 'predictedHeight').scatter('midparentHeight')
###Output
_____no_output_____
###Markdown
Prediction Accuracy
###Code
def difference(x, y):
return x - y
pred_errs = galton.apply(difference, 'predictedHeight', 'childHeight')
pred_errs
galton = galton.with_column('errors',pred_errs)
galton
galton.hist('errors')
galton.hist('errors', group='gender')
###Output
_____no_output_____
###Markdown
Discussion Question
###Code
def predict_smarter(h, g):
nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2))
nearby_same_gender = nearby.where('gender', g)
return nearby_same_gender.column('childHeight').mean()
predict_smarter(68, 'female')
predict_smarter(68, 'male')
smarter_predicted_heights = galton.apply(predict_smarter, 'midparentHeight', 'gender')
galton = galton.with_column('smartPredictedHeight', smarter_predicted_heights)
smarter_pred_errs = galton.apply(difference, 'childHeight', 'smartPredictedHeight')
galton = galton.with_column('smartErrors', smarter_pred_errs)
galton.hist('smartErrors', group='gender')
###Output
_____no_output_____
###Markdown
Grouping by One Column
###Code
cones = Table.read_table('cones.csv')
cones
cones.group('Flavor')
cones.drop('Color').group('Flavor', np.average)
cones.drop('Color').group('Flavor', min)
###Output
_____no_output_____
###Markdown
Grouping By One Column: Welcome Survey
###Code
survey = Table.read_table('welcome_survey_v2.csv')
survey.group('Year', np.average)
by_extra = survey.group('Extraversion', np.average)
by_extra
by_extra.select(0,2,3).plot('Extraversion') # Drop the 'Years average' column
by_extra.select(0,3).plot('Extraversion')
###Output
_____no_output_____
###Markdown
Lists
###Code
[1, 5, 'hello', 5.0]
[1, 5, 'hello', 5.0, make_array(1,2,3)]
###Output
_____no_output_____
###Markdown
Grouping by Two Columns
###Code
survey = Table.read_table('welcome_survey_v3.csv')
survey.group(['Handedness','Sleep position']).show()
###Output
_____no_output_____
###Markdown
Pivot Tables
###Code
survey.pivot('Sleep position', 'Handedness')
survey.pivot('Sleep position', 'Handedness', values='Extraversion', collect=np.average)
survey.group('Handedness', np.average)
###Output
_____no_output_____
###Markdown
Lecture 10 Prediction
###Code
galton = Table.read_table('galton.csv')
galton
galton.scatter('midparentHeight', 'childHeight')
galton.scatter('midparentHeight', 'childHeight')
plots.plot([67.5, 67.5], [50, 85], color='red', lw=2)
plots.plot([68.5, 68.5], [50, 85], color='red', lw=2);
nearby = galton.where('midparentHeight', are.between(67.5, 68.5))
nearby_mean = nearby.column('childHeight').mean()
nearby_mean
galton.scatter('midparentHeight', 'childHeight')
plots.plot([67.5, 67.5], [50, 85], color='red', lw=2)
plots.plot([68.5, 68.5], [50, 85], color='red', lw=2)
plots.scatter(68, nearby_mean, color='gold', s=50);
def predict(h):
nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2))
return nearby.column('childHeight').mean()
predict(68)
predict(70)
predict(73)
predicted_heights = galton.apply(predict, 'midparentHeight')
predicted_heights
galton = galton.with_column('predictedHeight', predicted_heights)
galton.select(
'midparentHeight', 'childHeight', 'predictedHeight').scatter('midparentHeight')
###Output
_____no_output_____
###Markdown
Prediction Accuracy
###Code
def difference(x, y):
return x - y
pred_errs = galton.apply(difference, 'childHeight', 'predictedHeight')
pred_errs
galton = galton.with_column('errors',pred_errs)
galton
galton.hist('errors')
galton.hist('errors', group='gender')
###Output
_____no_output_____
###Markdown
Discussion Question
###Code
def predict_smarter(h, g):
nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2))
nearby_same_gender = nearby.where('gender', g)
return nearby_same_gender.column('childHeight').mean()
predict_smarter(68, 'female')
predict_smarter(68, 'male')
smarter_predicted_heights = galton.apply(predict_smarter, 'midparentHeight', 'gender')
galton = galton.with_column('smartPredictedHeight', smarter_predicted_heights)
smarter_pred_errs = galton.apply(difference, 'childHeight', 'smartPredictedHeight')
galton = galton.with_column('smartErrors', smarter_pred_errs)
galton.hist('smartErrors', group='gender')
###Output
_____no_output_____
###Markdown
Grouping by One Column
###Code
cones = Table.read_table('cones.csv')
cones
cones.group('Flavor')
cones.drop('Color').group('Flavor', np.average)
cones.drop('Color').group('Flavor', min)
###Output
_____no_output_____
###Markdown
Grouping By One Column: Welcome Survey
###Code
survey = Table.read_table('welcome_survey_v2.csv')
survey.group('Year', np.average)
by_extra = survey.group('Extraversion', np.average)
by_extra
by_extra.select(0,2,3).plot('Extraversion') # Drop the 'Years average' column
by_extra.select(0,3).plot('Extraversion')
###Output
_____no_output_____
###Markdown
Lists
###Code
[1, 5, 'hello', 5.0]
[1, 5, 'hello', 5.0, make_array(1,2,3)]
###Output
_____no_output_____
###Markdown
Grouping by Two Columns
###Code
survey = Table.read_table('welcome_survey_v3.csv')
survey.group(['Handedness','Sleep Side']).show()
###Output
_____no_output_____
###Markdown
Pivot Tables
###Code
survey.pivot('Sleep Side', 'Handedness')
survey.pivot('Sleep Side', 'Handedness', values='Extraversion', collect=np.average)
survey.group('Handedness', np.average)
###Output
_____no_output_____
###Markdown
Lecture 10 Apply
###Code
staff = Table().with_columns(
'Employee', make_array('Jim', 'Dwight', 'Michael', 'Creed'),
'Birth Year', make_array(1985, 1988, 1967, 1904)
)
staff
def greeting(person):
return 'Dunder Mifflin, this is ' + person
greeting('Pam')
greeting('Erin')
staff.apply(greeting, 'Employee')
def name_and_age(name, year):
age = 2019 - year
return name + ' is ' + str(age)
staff.apply(name_and_age, 'Employee', 'Birth Year')
###Output
_____no_output_____
###Markdown
Prediction
###Code
galton = Table.read_table('galton.csv')
galton
galton.scatter('midparentHeight', 'childHeight')
galton.scatter('midparentHeight', 'childHeight')
plots.plot([67.5, 67.5], [50, 85], color='red', lw=2)
plots.plot([68.5, 68.5], [50, 85], color='red', lw=2);
nearby = galton.where('midparentHeight', are.between(67.5, 68.5))
nearby_mean = nearby.column('childHeight').mean()
nearby_mean
galton.scatter('midparentHeight', 'childHeight')
plots.plot([67.5, 67.5], [50, 85], color='red', lw=2)
plots.plot([68.5, 68.5], [50, 85], color='red', lw=2)
plots.scatter(68, nearby_mean, color='red', s=50);
def predict(h):
nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2))
return nearby.column('childHeight').mean()
predict(68)
predict(70)
predict(73)
predicted_heights = galton.apply(predict, 'midparentHeight')
predicted_heights
galton = galton.with_column('predictedHeight', predicted_heights)
galton.select(
'midparentHeight', 'childHeight', 'predictedHeight').scatter('midparentHeight')
###Output
_____no_output_____
###Markdown
Prediction Accuracy
###Code
def difference(x, y):
return x - y
pred_errs = galton.apply(difference, 'predictedHeight', 'childHeight')
pred_errs
galton = galton.with_column('errors',pred_errs)
galton
galton.hist('errors')
galton.hist('errors', group='gender')
###Output
_____no_output_____
###Markdown
Discussion Question
###Code
def predict_smarter(h, g):
nearby = galton.where('midparentHeight', are.between(h - 1/2, h + 1/2))
nearby_same_gender = nearby.where('gender', g)
return nearby_same_gender.column('childHeight').mean()
predict_smarter(68, 'female')
predict_smarter(68, 'male')
smarter_predicted_heights = galton.apply(predict_smarter, 'midparentHeight', 'gender')
galton = galton.with_column('smartPredictedHeight', smarter_predicted_heights)
smarter_pred_errs = galton.apply(difference, 'childHeight', 'smartPredictedHeight')
galton = galton.with_column('smartErrors', smarter_pred_errs)
galton.hist('smartErrors', group='gender')
###Output
_____no_output_____
###Markdown
Grouping by One Column
###Code
cones = Table.read_table('cones.csv')
cones
cones.group('Flavor')
cones.drop('Color').group('Flavor', np.average)
cones.drop('Color').group('Flavor', min)
###Output
_____no_output_____
###Markdown
Grouping By One Column: Welcome Survey
###Code
survey = Table.read_table('welcome_survey_v2.csv')
survey.group('Year', np.average)
by_extra = survey.group('Extraversion', np.average)
by_extra
by_extra.select(0,2,3).plot('Extraversion') # Drop the 'Years average' column
by_extra.select(0,3).plot('Extraversion')
###Output
_____no_output_____
###Markdown
Lists
###Code
[1, 5, 'hello', 5.0]
[1, 5, 'hello', 5.0, make_array(1,2,3)]
###Output
_____no_output_____
###Markdown
Grouping by Two Columns
###Code
survey = Table.read_table('welcome_survey_v3.csv')
survey.group(['Handedness','Sleep position']).show()
###Output
_____no_output_____
###Markdown
Pivot Tables
###Code
survey.pivot('Sleep position', 'Handedness')
survey.pivot('Sleep position', 'Handedness', values='Extraversion', collect=np.average)
survey.group('Handedness', np.average)
###Output
_____no_output_____ |
examples/clustering/KMeans_CB4_64_16D_STK_1.ipynb | ###Markdown
KMeans Clustering - CB4_64_16D_STK_1This document presents an example of K-Means clustering in the CBERS4 collection V1 (CB4_64_16D_STK_1) of the BDC.> This simple example aims to present how to cluster the data from the BDC stored inside the ODC. To see all the available products, use [BDC-STAC](http://brazildatacube.dpi.inpe.br/stac/).
###Code
import datacube
import numpy as np
import matplotlib.pyplot as plt
dc = datacube.Datacube(app='datacube')
PRODUCT_NAME = "CB4_64_16D_STK_1"
###Output
_____no_output_____
###Markdown
**Load the CB4_64_16D_STK_1 product** Initially, an entire scene will be loaded for a specific date range
###Code
cb4_64_16d_ftile = dc.load(PRODUCT_NAME, measurements = ['red', 'green', 'blue', 'nir'],
time = ("2019-12-19", "2019-12-31"),
resolution = (64, -64), limit = 1)
cb4_64_16d_ftile
###Output
_____no_output_____
###Markdown
The example will use only a portion of the data that was loaded. If necessary, you can use the whole loaded scene in your analysis.
###Code
cb4_64_16d_stile = cb4_64_16d_ftile.isel(x = slice(0, 1500), y = slice(0, 1500))
cb4_64_16d_stile
###Output
_____no_output_____
###Markdown
Viewing the selected region
###Code
from utils.data_cube_utilities.dc_rgb import rgb
rgb(cb4_64_16d_stile, figsize = (12, 12), x_coord = 'x', y_coord = 'y')
###Output
_____no_output_____
###Markdown
Clustering with KMeansIn this section, the clustering using KMeans is performed
###Code
from sklearn.cluster import KMeans
from utils.data_cube_utilities.dc_clustering import clustering_pre_processing
###Output
_____no_output_____
###Markdown
Below is the definition of the bands and the preparation of the data for clustering
###Code
bands = ['red', 'green', 'nir']
cb4_64_16d_stilec = cb4_64_16d_stile.copy()
cb4_64_16d_stilec_rgb = cb4_64_16d_stilec[bands]
cb4_64_16d_stilec_rgb = cb4_64_16d_stilec_rgb.sel(time = '2019-12-25')
###Output
_____no_output_____
###Markdown
Clustering!
###Code
features = clustering_pre_processing(cb4_64_16d_stilec_rgb, bands)
kmodel = KMeans(3).fit(features)
###Output
_____no_output_____
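###Markdown
Optionally, the number of pixels assigned to each of the 3 clusters can be inspected (a quick illustrative check):
###Code
np.bincount(kmodel.labels_)
###Output
_____no_output_____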
###Markdown
Setting the output to display
###Code
shape = cb4_64_16d_stilec_rgb[bands[0]].values.shape
classification = np.full(shape, -1)
classification = kmodel.labels_
###Output
_____no_output_____
###Markdown
Viewing the result
###Code
res = classification.reshape((1500, 1500))
plt.figure(figsize = (10, 10))
plt.imshow(res)
###Output
_____no_output_____ |
software_engineering/5_Test/notebook.ipynb | ###Markdown
To execute before running the slides
###Code
import unittest
def apply_jupyter_patch():
"""Monkey patch unittest to be able to run it in the notebook"""
def jupyter_unittest_main(**kwargs):
if "argv" not in kwargs:
kwargs["argv"] = ['ignored']
kwargs["exit"] = False
jupyter_unittest_main._original(**kwargs)
if unittest.main.__module__ != "unittest.main":
        # Restore the previous state, if needed
unittest.main = unittest.main._original
# Apply the patch
jupyter_unittest_main._original = unittest.main
unittest.main = jupyter_unittest_main
apply_jupyter_patch()
def polynom(a, b, c):
"""The function that will be tested."""
delta = (b**2.0) - 4.0 * a * c
solutions = []
if delta > 0:
solutions.append((-b + (delta**0.5)) / (2.0 * a))
solutions.append((-b - (delta**0.5)) / (2.0 * a))
elif delta == 0:
solutions.append(-b / (2.0 * a))
return solutions
try:
from PyQt5 import Qt
qapp = Qt.QApplication.instance()
if Qt.QApplication.instance() is None:
qapp = Qt.QApplication([])
class PolynomSolver(Qt.QMainWindow):
def __init__(self, parent=None):
super(PolynomSolver, self).__init__(parent=parent)
self.initGui()
def initGui(self):
self.setWindowTitle("Polygon Solver")
self._inputLine = Qt.QLineEdit(self)
self._processButton = Qt.QPushButton(self)
self._processButton.setText(u"Solve ax² + bx + c = 0")
self._processButton.clicked.connect(self.processing)
self._resultWidget = Qt.QLabel(self)
widget = Qt.QWidget()
layout = Qt.QFormLayout(widget)
layout.addRow("Coefs a b c:", self._inputLine)
layout.addRow("Solutions:", self._resultWidget)
layout.addRow(self._processButton)
self.setCentralWidget(widget)
def getCoefs(self):
text = self._inputLine.text()
data = [float(i) for i in text.split()]
a, b, c = data
return a, b, c
def processing(self):
try:
a, b, c = self.getCoefs()
except Exception as e:
Qt.QMessageBox.critical(self, "Error while reaching polygon coefs", str(e))
return
try:
result = polynom(a, b, c)
except Exception as e:
Qt.QMessageBox.critical(self, "Error while computing the polygon solution", str(e))
return
if len(result) == 0:
text = "No solution"
else:
text = ["%0.3f" % x for x in result]
text = " ".join(text)
self._resultWidget.setText(text)
except ImportError as e:
print(str(e))
###Output
_____no_output_____
###Markdown
Testing=======- Introduction- Python `unittest` module- Estimate tests' quality- Continuous integration What is it?- Part of the software quality- A task consisting of checking that the **program** is working as expected- Manually written **tests** which can be automatically executed Presenter Notes- A test injects input to the program, and checks output- It answers if the code is valid or not (for a specific use case) Different methodologies- Test-driven development: Always and before anything else- Harry J.W. Percival (2014). [Test-Driven Development with Python. O'Reilly](https://www.oreilly.com/library/view/test-driven-development-with/9781449365141/) Why testing?| Benefits | Disadvantages ||-----------------------------------------|---------------------------------------------|| Find problems early | Extra work (to write and execute) || Globally reduce the cost | Maintain test environments || To validate the code to specifications | Does not mean it's bug-free || Safer changes to the code | More difficult to change the code behaviour || Improve the software design | || It's part of documentation and examples | | Presenter Notes- 30% of the time of a project- Cost reduction: If you find a problem late (at deployment for example) the cost can be very high- Automated tests (in CI) reduce the cost of execution, and help code review- Having the structure set-up for testing encourages writing tests What kinds of tests?- **Unit tests**: Tests independent pieces of code- **Integration tests**: Tests components together- **System tests**: Tests a completely integrated application- **Acceptance tests**: Tests the application with the customer Presenter NotesThe test pyramid is a concept developed by Mike Cohn, described in his book "Succeeding with Agile"- Unit tests (dev point of view, fast, low cost)- Integration tests- System tests- Acceptance tests (customer point of view, but slow, and expensive, can't be automated)- Cost: unit << integration (not always true) << system- Fast to execute: unit >> integration >> system Where to put the tests?Separate tests from the source code:- Run the test from the command line.- Separate distribution of tests and code.- [...](https://docs.python.org/3/library/unittest.html#organizing-test-code)Folder structure:- In a separate `test/` folder.- In `test` sub-packages in each Python package/sub-package, so that tests remain close to the source code. Tests are installed with the package and can be run from the installation.- A `test_*.py` for each module and script (and more if needed).- Consider separating tests that are long to run from the others. Where to put the tests?- `project` - `setup.py` - `run_tests.py` - `package/` - `__init__.py` - `module1.py` - `test/` - `__init__.py` - `test_module1.py` - `subpackage/` - `__init__.py` - `module1.py` - `module2.py` - `test/` - `__init__.py` - `test_module1.py` - `test_module2.py` `unittest` Python module[unittest](https://docs.python.org/3/library/unittest.html) is the default Python module for testing.It provides features to:- Write tests- Discover tests- Run those testsOther frameworks exist:- [pytest](http://pytest.org/) Write and run testsThe class `unittest.TestCase` is the base class for writing tests for Python code.The function `unittest.main()` provides a command line interface to discover and run the tests.
###Code
import unittest
class TestMyTestCase(unittest.TestCase):
def test_my_test(self):
# Code to test
a = round(3.1415)
# Expected result
b = 3
self.assertEqual(a, b, msg="")
if __name__ == "__main__":
unittest.main()
###Output
_____no_output_____
###Markdown
Assertion functions- Argument(s) to compare/evaluate.- An additional error message.- `assertEqual(a, b)` checks that `a == b`- `assertNotEqual(a, b)` checks that `a != b`- `assertTrue(x)` checks that `bool(x) is True`- `assertFalse(x)` checks that `bool(x) is False`- `assertIs(a, b)` checks that `a is b`- `assertIsNone(x)` checks that `x is None`- `assertIn(a, b)` checks that `a in b`- `assertIsInstance(a, b)` checks that `isinstance(a, b)`There's more, see [unittest TestCase documentation](https://docs.python.org/3/library/unittest.html#unittest.TestCase) or [Numpy testing documentation](http://docs.scipy.org/doc/numpy/reference/routines.testing.html). ExampleTest the `polynom` function provided in the `pypolynom` sample project.It solves the equation $ax^2 + bx + c = 0$.
###Code
import unittest
class TestPolynom(unittest.TestCase):
def test_0_roots(self):
result = polynom(2, 0, 1)
self.assertEqual(len(result), 0)
def test_1_root(self):
result = polynom(2, 0, 0)
self.assertEqual(len(result), 1)
self.assertEqual(result, [0])
def test_2_root(self):
result = polynom(4, 0, -4)
self.assertEqual(len(result), 2)
self.assertEqual(set(result), set([-1, 1]))
if __name__ == "__main__":
unittest.main(defaultTest="TestPolynom")
# unittest.main(verbosity=2, defaultTest="TestPolynom")
###Output
_____no_output_____
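###Markdown
A small illustrative sketch (the class name is arbitrary, not from the pypolynom project) exercising a few of the other assertion helpers listed above on the same `polynom` function:
###Code
class TestPolynomAssertions(unittest.TestCase):
    def test_two_roots(self):
        result = polynom(4, 0, -4)
        self.assertIsInstance(result, list)       # isinstance(result, list)
        self.assertTrue(len(result) == 2)         # bool(len(result) == 2) is True
        self.assertIn(1, result)                  # 1 in result
        self.assertNotEqual(result[0], result[1])
if __name__ == "__main__":
    unittest.main(defaultTest="TestPolynomAssertions")
###Output
_____no_output_____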
###Markdown
Run from command line arguments Auto discover tests of the current path
###Code
$ python3 -m unittest
###Output
_____no_output_____
###Markdown
Running a specific `TestCase`:
###Code
$ python3 -m unittest myproject.test.TestMyTrueRound
$ python3 test_builtin_round.py TestMyTrueRound
###Output
_____no_output_____
###Markdown
Running a specific test method:
###Code
$ python3 -m unittest myproject.test.TestMyTrueRound.test_positive
$ python3 test_builtin_round.py TestMyTrueRound.test_positive
###Output
_____no_output_____
###Markdown
FixtureTests might need to share some common initialisation/finalisation (e.g., create a temporary directory).This can be implemented in ``setUp`` and ``tearDown`` methods of ``TestCase``.Those methods are called before and after each test.
###Code
class TestCaseWithFixture(unittest.TestCase):
def setUp(self):
self.file = open("img/test-pyramid.svg", "rb")
print("open file")
def tearDown(self):
self.file.close()
print("close file")
def test_1(self):
foo = self.file.read()
# do some test on foo
print("test 1")
def test_2(self):
foo = self.file.read()
# do some test on foo
print("test 2")
if __name__ == "__main__":
unittest.main(defaultTest='TestCaseWithFixture')
###Output
_____no_output_____
###Markdown
Testing exception
###Code
class TestPolynom(unittest.TestCase):
def test_argument_error(self):
try:
polynom(0, 0, 0)
self.fail()
except ZeroDivisionError:
self.assertTrue(True)
def test_argument_error__better_way(self):
with self.assertRaises(ZeroDivisionError):
result = polynom(0, 0, 0)
if __name__ == "__main__":
unittest.main(defaultTest='TestPolynom')
###Output
_____no_output_____
###Markdown
`TestCase.assertRaisesRegexp` also checks the message of the exception. Parametric testsRunning the same test with multiple valuesProblems:- The first failure stops the test, remaining test values are not processed.- There is no information on the value for which the test has failed.
###Code
class TestPolynom(unittest.TestCase):
TESTCASES = {
(2, 0, 1): [],
(2, 0, 0): [0],
(4, 0, -4): [1, -1]
}
def test_all(self):
for arguments, expected in self.TESTCASES.items():
self.assertEqual(polynom(*arguments), expected)
def test_all__better_way(self):
for arguments, expected in self.TESTCASES.items():
with self.subTest(arguments=arguments, expected=expected):
self.assertEqual(polynom(*arguments), expected)
if __name__ == "__main__":
unittest.main(defaultTest='TestPolynom')
###Output
_____no_output_____
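###Markdown
As noted above, the exception message can be checked as well with `assertRaisesRegex` (spelled `assertRaisesRegexp` in older Python versions); a minimal sketch on the same `polynom` function:
###Code
class TestPolynomErrorMessage(unittest.TestCase):
    def test_error_message(self):
        # Checks the exception type and that its message matches the regular expression
        with self.assertRaisesRegex(ZeroDivisionError, "division"):
            polynom(0, 0, 0)
if __name__ == "__main__":
    unittest.main(defaultTest="TestPolynomErrorMessage")
###Output
_____no_output_____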
###Markdown
Class fixture
###Code
class TestSample(unittest.TestCase):
@classmethod
def setUpClass(cls):
# Called before all the tests of this class
pass
@classmethod
def tearDownClass(cls):
# Called after all the tests of this class
pass
###Output
_____no_output_____
###Markdown
Module fixture
###Code
def setUpModule():
# Called before all the tests of this module
pass
def tearDownModule():
# Called after all the tests of this module
pass
###Output
_____no_output_____
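###Markdown
A concrete sketch of the class fixture idea: one temporary directory created in `setUpClass` and shared by every test of the class (the class and directory names here are arbitrary):
###Code
import os
import shutil
import tempfile
class TestWithSharedDirectory(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Called once, before all the tests of this class
        cls.tmp_dir = tempfile.mkdtemp(prefix="pypolynom_test_")
    @classmethod
    def tearDownClass(cls):
        # Called once, after all the tests of this class
        shutil.rmtree(cls.tmp_dir)
    def test_directory_exists(self):
        self.assertTrue(os.path.isdir(self.tmp_dir))
if __name__ == "__main__":
    unittest.main(defaultTest="TestWithSharedDirectory")
###Output
_____no_output_____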
###Markdown
Skipping testsIf tests require a specific OS, device, library...
###Code
import unittest, os, sys
def is_gui_available():
# Is there a display
if sys.platform.startswith('linux'):
if os.environ.get('DISPLAY', '') == '':
return False
# Is there the optional library
try:
import PyQt8
except:
return False
return True
@unittest.skipUnless(is_gui_available(), 'GUI not available')
class TestPolynomGui(unittest.TestCase):
def setUp(self):
if not is_gui_available():
self.skipTest('GUI not available')
def test_1(self):
if not is_gui_available():
self.skipTest('GUI not available')
@unittest.skipUnless(is_gui_available() is None, 'GUI not available')
def test_2(self):
pass
if __name__ == "__main__":
unittest.main(defaultTest='TestPolynomGui')
###Output
_____no_output_____
###Markdown
Test numpyNumpy provides modules for unittests. See the [Numpy testing documentation](http://docs.scipy.org/doc/numpy/reference/routines.testing.html).
###Code
import numpy
class TestNumpyArray(unittest.TestCase):
def setUp(self):
self.data1 = numpy.array([1, 2, 3, 4, 5, 6, 7])
self.data2 = numpy.array([1, 2, 3, 4, 5, 6, 7.00001])
# def test_equal__cant_work(self):
# self.assertEqual(self.data1, self.data2)
# self.assertTrue((self.data1 == self.data2).all())
def test_equal(self):
self.assertTrue(numpy.allclose(self.data1, self.data2, atol=0.0001))
def test_equal__even_better(self):
numpy.testing.assert_allclose(self.data1, self.data2, atol=0.0001)
if __name__ == "__main__":
unittest.main(defaultTest='TestNumpyArray')
###Output
_____no_output_____
###Markdown
Test resourcesHow to handle test data?Need to separate (possibly huge) test data from python package.Download test data and store it in a temporary directory during the tests if not available.Example: [silx.utils.ExternalResources](https://github.com/silx-kit/silx/blob/master/silx/utils/utilstest.py) QTestFor GUI based on `PyQt`, `PySide` it is possible to use Qt's [QTest](http://doc.qt.io/qt-5/qtest.html).It provides the basic functionalities for GUI testing.It allows to send keyboard and mouse events to widgets.
###Code
from PyQt5.QtTest import QTest
class TestPolynomGui(unittest.TestCase):
def test_type_and_process(self):
widget = PolynomSolver()
QTest.qWaitForWindowExposed(widget)
QTest.keyClicks(widget._inputLine, '2.000 0 -1', delay=100) # Wait 100ms
QTest.mouseClick(widget._processButton, Qt.Qt.LeftButton, pos=Qt.QPoint(1, 1))
self.assertEqual(widget._resultWidget.text(), "0.707 -0.707")
if __name__ == "__main__":
unittest.main(defaultTest='TestPolynomGui')
###Output
_____no_output_____
###Markdown
Tightly coupled with the code it tests.It needs to know the widget's instance and hard coded position of mouse events. Chaining testsHow-to run tests from many ``TestCase`` and many files at once:- Explicit: Full control, boilerplate code.- Automatic: No control- Mixing approachThe [TestSuite](https://docs.python.org/3/library/unittest.html#unittest.TestSuite) class aggregates test cases and test suites through:- Allow to test specific use cases- Full control of the test sequence- But requires some boilerplate code Chaining tests example
###Code
def suite_without_gui():
loadTests = unittest.defaultTestLoader.loadTestsFromTestCase
suite = unittest.TestSuite()
suite.addTest(loadTests(TestPolynom))
return suite
def suite_with_gui():
loadTests = unittest.defaultTestLoader.loadTestsFromTestCase
suite = unittest.TestSuite()
suite.addTest(suite_without_gui())
suite.addTest(loadTests(TestPolynomGui))
return suite
if __name__ == "__main__":
# unittest.main(defaultTest='suite_without_gui')
unittest.main(defaultTest='suite_with_gui')
###Output
_____no_output_____
###Markdown
Estimate tests' qualityUsing [`coverage`](https://coverage.readthedocs.org) to gather coverage statistics while running the tests (`pip install coverage`).
###Code
$ python -m coverage run -m unittest
$ python -m coverage report
Name Stmts Miss Cover
----------------------------------------------------
pypolynom\__init__.py 1 0 100%
pypolynom\polynom.py 19 2 89%
pypolynom\test\__init__.py 0 0 100%
pypolynom\test\test_polynom.py 29 0 100%
----------------------------------------------------
TOTAL 49 2 96%
###Output
_____no_output_____
###Markdown
Estimate tests' qualityExecute the tests and generate an output file per module with annotations per line.
###Code
$ python -m coverage annotate
$ ls pypolynom
30/03/2019 19:15 1,196 polynom.py
30/03/2019 19:17 1,294 polynom.py,cover
> def polynom(a, b, c):
> delta = pow2(b) - 4.0 * a * c
> solutions = []
> if delta > 0:
! solutions.append((-b + sqrt(delta)) / (2.0 * a))
! solutions.append((-b - sqrt(delta)) / (2.0 * a))
> elif delta == 0:
> solutions.append(-b/(2.0*a))
> return solutions
###Output
_____no_output_____
###Markdown
Continuous integrationAutomatically testing the software for each change applied to the source code.Benefits:- Be aware of problems early - Before merging a change on the code - On third-party library update (sometimes before the release) - Reduce the cost in case of problems- Improve contributions and team workCosts:- Set-up and maintenance- Tests need to be automated Continuous integration- [Travis-CI](https://travis-ci.org/) (Linux and MacOS), [AppVeyor](http://www.appveyor.com/) (Windows), gitlab-CI (https://gitlab.esrf.fr)...- A `.yml` file describing the environment, build, installation, and test process Continuous integration: ConfigurationExample of configuration with Travis
###Code
language: python
matrix:
include:
- python: 3.6
- python: 3.7
before_install: # Upgrade distribution modules
- python -m pip install --upgrade pip
- pip install --upgrade setuptools wheel
install: # Generate source archive and wheel
- python setup.py bdist_wheel
before_script: # Install wheel package
- pip install --pre dist/pypolynom*.whl
script: # Run the tests from the installed module
- mkdir tmp ; cd tmp
- python -m unittest pypolynom.test.suite_without_gui
###Output
_____no_output_____ |
notebooks/covidz-01.2020-04-21.ipynb | ###Markdown
Starting from scratch on Covid-19 data analysis
###Code
import geopandas as gpd
import numpy as np
import pandas as pd
import sys
sys.path
import seaborn as sns
import folium
import branca.colormap as cm
import matplotlib.pyplot as plt
from pathlib import Path
projdir = Path.cwd().parent
if str(projdir) not in sys.path:
sys.path.append(str(projdir))
# from src.common import loadenv as const
projdir
fips_fn = projdir / 'data' / 'raw' / 'fips' / 'all-geocodes-v2018.xlsx'
fips_fn
path_str = fips_fn.as_posix()
path_str
###Output
_____no_output_____
###Markdown
Read an Excel file into a pandas DataFrame.
###Code
dff = pd.read_excel(path_str, skiprows=4)
dff.shape
dff
dff.info
###Output
_____no_output_____
###Markdown
Import county population from census bureau data
###Code
cp_fn = projdir / 'data' / 'raw' / 'censusBurPop' / 'co-est2019-alldata.csv'
cp_fn
path_str = cp_fn.as_posix()
path_str
cp_df = pd.read_csv(path_str, dtype={'STATE': float}, encoding='ISO-8859-1')
cp_df.dtypes
cp_df
cp_df.shape
for col in cp_df.columns:
print(col)
###Output
SUMLEV
REGION
DIVISION
STATE
COUNTY
STNAME
CTYNAME
CENSUS2010POP
ESTIMATESBASE2010
POPESTIMATE2010
POPESTIMATE2011
POPESTIMATE2012
POPESTIMATE2013
POPESTIMATE2014
POPESTIMATE2015
POPESTIMATE2016
POPESTIMATE2017
POPESTIMATE2018
POPESTIMATE2019
NPOPCHG_2010
NPOPCHG_2011
NPOPCHG_2012
NPOPCHG_2013
NPOPCHG_2014
NPOPCHG_2015
NPOPCHG_2016
NPOPCHG_2017
NPOPCHG_2018
NPOPCHG_2019
BIRTHS2010
BIRTHS2011
BIRTHS2012
BIRTHS2013
BIRTHS2014
BIRTHS2015
BIRTHS2016
BIRTHS2017
BIRTHS2018
BIRTHS2019
DEATHS2010
DEATHS2011
DEATHS2012
DEATHS2013
DEATHS2014
DEATHS2015
DEATHS2016
DEATHS2017
DEATHS2018
DEATHS2019
NATURALINC2010
NATURALINC2011
NATURALINC2012
NATURALINC2013
NATURALINC2014
NATURALINC2015
NATURALINC2016
NATURALINC2017
NATURALINC2018
NATURALINC2019
INTERNATIONALMIG2010
INTERNATIONALMIG2011
INTERNATIONALMIG2012
INTERNATIONALMIG2013
INTERNATIONALMIG2014
INTERNATIONALMIG2015
INTERNATIONALMIG2016
INTERNATIONALMIG2017
INTERNATIONALMIG2018
INTERNATIONALMIG2019
DOMESTICMIG2010
DOMESTICMIG2011
DOMESTICMIG2012
DOMESTICMIG2013
DOMESTICMIG2014
DOMESTICMIG2015
DOMESTICMIG2016
DOMESTICMIG2017
DOMESTICMIG2018
DOMESTICMIG2019
NETMIG2010
NETMIG2011
NETMIG2012
NETMIG2013
NETMIG2014
NETMIG2015
NETMIG2016
NETMIG2017
NETMIG2018
NETMIG2019
RESIDUAL2010
RESIDUAL2011
RESIDUAL2012
RESIDUAL2013
RESIDUAL2014
RESIDUAL2015
RESIDUAL2016
RESIDUAL2017
RESIDUAL2018
RESIDUAL2019
GQESTIMATESBASE2010
GQESTIMATES2010
GQESTIMATES2011
GQESTIMATES2012
GQESTIMATES2013
GQESTIMATES2014
GQESTIMATES2015
GQESTIMATES2016
GQESTIMATES2017
GQESTIMATES2018
GQESTIMATES2019
RBIRTH2011
RBIRTH2012
RBIRTH2013
RBIRTH2014
RBIRTH2015
RBIRTH2016
RBIRTH2017
RBIRTH2018
RBIRTH2019
RDEATH2011
RDEATH2012
RDEATH2013
RDEATH2014
RDEATH2015
RDEATH2016
RDEATH2017
RDEATH2018
RDEATH2019
RNATURALINC2011
RNATURALINC2012
RNATURALINC2013
RNATURALINC2014
RNATURALINC2015
RNATURALINC2016
RNATURALINC2017
RNATURALINC2018
RNATURALINC2019
RINTERNATIONALMIG2011
RINTERNATIONALMIG2012
RINTERNATIONALMIG2013
RINTERNATIONALMIG2014
RINTERNATIONALMIG2015
RINTERNATIONALMIG2016
RINTERNATIONALMIG2017
RINTERNATIONALMIG2018
RINTERNATIONALMIG2019
RDOMESTICMIG2011
RDOMESTICMIG2012
RDOMESTICMIG2013
RDOMESTICMIG2014
RDOMESTICMIG2015
RDOMESTICMIG2016
RDOMESTICMIG2017
RDOMESTICMIG2018
RDOMESTICMIG2019
RNETMIG2011
RNETMIG2012
RNETMIG2013
RNETMIG2014
RNETMIG2015
RNETMIG2016
RNETMIG2017
RNETMIG2018
RNETMIG2019
###Markdown
import covid-19 stats
###Code
cvd_fn = projdir / 'data' / 'raw' / 'covid-19-data' / 'nyt-covid-19-data-master-us-counties.csv'
cvd_fn
path_str = cvd_fn.as_posix()
path_str
cvd_df = pd.read_csv(path_str, )
cvd_df.shape
cvd_df
dff.dtypes
cvd_df.dtypes
cvd_df = pd.read_csv(path_str, parse_dates=['date'], dtype={'fips':str})
cvd_df
cvd_df.shape
cvd_df.dtypes
###Output
_____no_output_____
###Markdown
Import county shape files
###Code
ctyshp_fp = projdir / 'data' / 'raw' / 'tl_2019_us_county' / 'tl_2019_us_county.shp'
ctyshp_str = ctyshp_fp.as_posix()
ctyshp_str
cty_dropcols = [ 'STATEFP',
# 'COUNTYFP',
'COUNTYNS',
# 'GEOID',
# 'NAME',
'NAMELSAD',
'LSAD',
'CLASSFP',
'MTFCC',
'CSAFP',
'CBSAFP',
# 'METDIVFP',
'FUNCSTAT',
'ALAND',
'AWATER',
'INTPTLAT',
'INTPTLON'
# 'geometry'
]
cty_dropcols
cty_gdf = gpd.read_file(ctyshp_fp)
cty_gdf.shape
cty_gdf.drop(columns=cty_dropcols, inplace=True)
cty_gdf.shape
cty_gdf.dtypes
cty_gdf
cty_gdf = cty_gdf.rename(columns={'GEOID': 'fips'})
unknowns_df = cvd_df.loc[cvd_df['county'].str.contains('Unknown')].copy()
cvd_df.drop(unknowns_df.index, inplace=True)
cvd_df
cvd_nofips_df = cvd_df.loc[cvd_df['fips'].isnull()].copy()
cvd_nofips_df.shape
cvd_nofips_df
###Output
_____no_output_____
###Markdown
Set fips for NYC to '1' and Kansas City to '2' since they are NaN in the original data set
###Code
cvd_df.loc[cvd_df['county'] == 'New York City', 'fips'] = '1'
cvd_df.loc[(cvd_df['county'] == 'Kansas City') & (cvd_df['state'] == 'Missouri'), 'fips'] = '2'
cvd_df.loc[cvd_df['county'] == 'New York City']
cvd_df.loc[(cvd_df['county'] == 'Kansas City') & (cvd_df['state'] == 'Missouri')]
###Output
_____no_output_____
###Markdown
merge corona virus county data set with county shape files
###Code
comb_df = cty_gdf.merge(cvd_df, on='fips')
comb_df.shape
comb_df.dtypes
comb_df
###Output
_____no_output_____ |
Sat_pre_training.ipynb | ###Markdown
Data preparation
###Code
import os
import random
import numpy as np
import nltk
nltk.download("punkt")
from nltk.tokenize import word_tokenize
import torch
from torchtext.legacy.data import Field
from torchtext.legacy.data import TabularDataset
from torchtext.legacy.data import BucketIterator
from torchtext.legacy.data import Iterator
# Avoid nondeterministic operations
RANDOM_SEED = 2022
random.seed(RANDOM_SEED)
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
torch.backends.cudnn.deterministic = True # use only deterministic algorithms
torch.backends.cudnn.benchmark = False # disable the cuDNN benchmark
torch.cuda.manual_seed_all(RANDOM_SEED) # if use multi-GPU
os.environ['PYTHONHASHSEED'] = str(RANDOM_SEED)
DATA_PATH = "/content/drive/Othercomputers/내 컴퓨터/Sat_english/data/processed"
###Output
_____no_output_____
###Markdown
Define the fields
###Code
# Sentence (text) field
TEXT = Field(
    sequential=True, # sequential (sentence) input
    use_vocab=True,
    tokenize=word_tokenize, # tokenize with nltk's word_tokenize
    lower=True, # lowercase all text
    batch_first=True,
)
# Label field
LABEL = Field(
    sequential=False,
    use_vocab=False,
    batch_first=True,
)
###Output
_____no_output_____
###Markdown
Load the data
###Code
# CoLA data = pre-training data
cola_train_data, cola_valid_data, cola_test_data = TabularDataset.splits(
    path=DATA_PATH,
    train="cola_train.tsv",
    validation="cola_valid.tsv",
    test="cola_test.tsv",
    format="tsv",
    fields=[("text", TEXT), ("label", LABEL)],
    skip_header=1, # skip the first row, which contains the column names
)
TEXT.build_vocab(cola_train_data, min_freq=2) # build the pre-training vocabulary from the CoLA data (only words that appear at least twice)
# SAT (CSAT English) data = additional training data
sat_train_data, sat_valid_data, sat_test_data = TabularDataset.splits(
path=DATA_PATH,
train="sat_train.tsv",
validation="sat_valid.tsv",
test="sat_test.tsv",
format="tsv",
fields=[("text", TEXT), ("label", LABEL)],
skip_header=1,
)
###Output
_____no_output_____
###Markdown
Defining the DataLoaders
###Code
# CoLA data
cola_train_iterator, cola_valid_iterator, cola_test_iterator = BucketIterator.splits(
(cola_train_data, cola_valid_data, cola_test_data),
batch_size=32,
device=None,
sort=False,
)
# SAT data
sat_train_iterator, sat_valid_iterator, sat_test_iterator = BucketIterator.splits(
(sat_train_data, sat_valid_data, sat_test_data),
batch_size=8,
device=None,
sort=False,
)
###Output
_____no_output_____
###Markdown
Network architecture
###Code
import torch
import torch.nn as nn
class LSTM_Model(nn.Module):
def __init__(self, num_embeddings, embedding_dim, hidden_size, num_layers, pad_idx):
super().__init__()
# Embedding Layer
self.embed_layer = nn.Embedding(
num_embeddings=num_embeddings,
embedding_dim=embedding_dim,
padding_idx=pad_idx
)
# LSTM Layer
self.lstm_layer = nn.LSTM(
input_size=embedding_dim,
hidden_size=hidden_size,
num_layers=num_layers,
batch_first = True,
            bidirectional=True, # bidirectional LSTM
dropout=0.5
)
        # Fully-connected Layer
self.fc_layer1 = nn.Sequential(
            nn.Linear(hidden_size * 2, hidden_size), # the bidirectional LSTM output is twice the hidden size
            nn.Dropout(0.5),
            nn.LeakyReLU() # f(x)=max(0.01x, x) helps prevent dying ReLU
)
self.fc_layer2 = nn.Sequential(
nn.Linear(hidden_size, 1)
)
def forward(self, x):
embed_x = self.embed_layer(x)
        output, (_, _) = self.lstm_layer(embed_x) # the hidden and cell state outputs are not used
output = output[:, -1, :] # (batch_size, seq_length, 2*hidden_size) -> (batch_size, 2*hidden_size)
output = self.fc_layer1(output)
output = self.fc_layer2(output)
return output
###Output
_____no_output_____
###Markdown
Defining training and validation parameters
###Code
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu") # run on the GPU (CUDA) when available
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token] # map the padding token (used to equalize sequence lengths) to its numeric index
lstm = LSTM_Model(
num_embeddings=len(TEXT.vocab),
embedding_dim=100,
hidden_size=200,
num_layers=4,
pad_idx=PAD_IDX
).to(DEVICE)
n_epochs = 20
learning_rate = 0.001
optimizer = torch.optim.Adam(lstm.parameters(), lr=learning_rate)
criterion = nn.BCEWithLogitsLoss() # Sigmoid + BCELoss
###Output
_____no_output_____
###Markdown
Training the model on the training data
###Code
def train(model, iterator, optimizer):
train_loss = 0
    model.train() # set the model to train mode (dropout enabled)
for _, batch in enumerate(iterator):
        optimizer.zero_grad() # reset the gradients
        text = batch.text # the text attribute of this batch
        label = batch.label.type(torch.FloatTensor) # the label attribute of this batch (32-bit float)
text = text.to(DEVICE)
label = label.to(DEVICE)
        output = model(text).flatten() # output is [batch_size, 1], label is [batch_size]
loss = criterion(output, label)
        loss.backward() # backpropagate to compute gradients for the parameters
        optimizer.step() # update the parameters
        train_loss += loss.item() # accumulate the loss
    # divide the accumulated loss by the number of batches to get the average per-batch loss
return train_loss/len(iterator)
###Output
_____no_output_____
###Markdown
Model validation
###Code
def evaluate(model, iterator):
valid_loss = 0
    model.eval() # set the model to eval mode (dropout disabled)
    with torch.no_grad(): # disable gradient computation (no parameter updates during evaluation)
for _, batch in enumerate(iterator):
text = batch.text
label = batch.label.type(torch.FloatTensor)
text = text.to(DEVICE)
label = label.to(DEVICE)
output = model(text).flatten()
loss = criterion(output, label)
valid_loss += loss.item()
return valid_loss/len(iterator)
###Output
_____no_output_____
###Markdown
Pre-training on the CoLA data
###Code
import time
def epoch_time(start_time: int, end_time: int): # elapsed time per epoch
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
loss_tr = []
loss_val = []
for epoch in range(n_epochs):
start_time = time.time()
train_loss = train(lstm, cola_train_iterator, optimizer)
valid_loss = evaluate(lstm, cola_valid_iterator)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f"Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s")
print(f"Train Loss: {train_loss:.5f}")
print(f" Val. Loss: {valid_loss:.5f}")
print('----------------------------------')
    # track losses to check for overfitting
loss_tr.append(train_loss)
loss_val.append(valid_loss)
import numpy as np
import matplotlib.pyplot as plt
np1 = np.array(loss_tr)
np2 = np.array(loss_val)
plt.figure(figsize=(10, 10))
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(np1, label='Loss of train')
plt.plot(np2, label='Loss of Validation')
plt.legend() # legend showing the labels
plt.show()
from copy import deepcopy
# the pre-trained model
before_tuning_lstm = deepcopy(lstm)
###Output
_____no_output_____
###Markdown
Fine-tuning on the SAT data
###Code
loss_tr_tune = []
loss_val_tune = []
for epoch in range(n_epochs):
start_time = time.time()
train_loss = train(lstm, sat_train_iterator, optimizer)
valid_loss = evaluate(lstm, sat_valid_iterator)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f"Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s")
print(f"\tTrain Loss: {train_loss:.5f}")
print(f"\t Val. Loss: {valid_loss:.5f}")
print('----------------------------------')
    # track losses to check for overfitting
loss_tr_tune.append(train_loss)
loss_val_tune.append(valid_loss)
np1 = np.array(loss_tr_tune)
np2 = np.array(loss_val_tune)
plt.figure(figsize=(10, 10))
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(np1, label='Loss of train')
plt.plot(np2, label='Loss of Validation')
plt.legend() # legend showing the labels
plt.show()
###Output
_____no_output_____
###Markdown
Testing model performance
###Code
import dill
from sklearn.metrics import roc_curve, auc
def test(model, iterator, device):
model.eval()
with torch.no_grad():
y_real = []
y_pred = []
for batch in iterator:
text = batch.text
label = batch.label.type(torch.FloatTensor)
text = text.to(device)
            output = model(text).flatten().cpu() # roc_curve expects ndarray-like inputs
            # collect each batch's predictions into lists
y_real += [label]
y_pred += [output]
y_real = torch.cat(y_real)
y_pred = torch.cat(y_pred)
fpr, tpr, _ = roc_curve(y_real, y_pred)
auroc = auc(fpr, tpr)
return auroc
_ = before_tuning_lstm.cpu()
lstm_sat_test_auroc = test(before_tuning_lstm, sat_test_iterator, "cpu")
_ = lstm.cpu()
lstm_tuned_test_auroc = test(lstm, sat_test_iterator, "cpu")
print(f"Before fine-tuning SAT Dataset Test AUROC: {lstm_sat_test_auroc:.5f}")
print(f"After fine-tuning SAT Dataset Test AUROC: {lstm_tuned_test_auroc:.5f}")
with open("before_tuning_model.dill", "wb") as f:
model = {
"TEXT": TEXT,
"LABEL": LABEL,
"classifier": before_tuning_lstm
}
dill.dump(model, f)
_ = lstm.cpu()
with open("after_tuning_model.dill", "wb") as f:
model = {
"TEXT": TEXT,
"LABEL": LABEL,
"classifier": lstm
}
dill.dump(model, f)
###Output
_____no_output_____ |
archived-datasets/uniprot/process-uniprot.ipynb | ###Markdown
Process UniProt DataJupyter Notebook to download and preprocess files to transform to BioLink RDF. Download filesThe download can be defined:* in this Jupyter Notebook using Python* as a Bash script in the `download/download.sh` file, and executed using `d2s download uniprot`
###Code
import os
import glob
import requests
import functools
import shutil
import pandas as pd
# Use Pandas, load file in memory
def convert_tsv_to_csv(tsv_file):
csv_table=pd.read_table(tsv_file,sep='\t')
csv_table.to_csv(tsv_file[:-4] + '.csv',index=False)
# Variables and path for the dataset
dataset_id = 'uniprot'
dsri_flink_pod_id = 'flink-jobmanager-###'
input_folder = '/notebooks/workspace/input/' + dataset_id
mapping_folder = '/notebooks/datasets/' + dataset_id + '/mapping'
os.makedirs(input_folder, exist_ok=True)
# Use input folder as working folder
os.chdir(input_folder)
files_to_download = [
'https://raw.githubusercontent.com/MaastrichtU-IDS/d2s-scripts-repository/master/resources/cohd-sample/concepts.tsv'
]
# Download each file and uncompress them if needed
# Use Bash because faster and more reliable than Python
for download_url in files_to_download:
os.system('wget -N ' + download_url)
os.system('find . -name "*.tar.gz" -exec tar -xzvf {} \;')
os.system('unzip -o \*.zip')
# Rename .txt to .tsv
listing = glob.glob('*.txt')
for filename in listing:
os.rename(filename, filename[:-4] + '.tsv')
## Convert TSV to CSV to be processed with the RMLStreamer
# use Pandas (load in memory)
convert_tsv_to_csv('concepts.tsv')
# Use Bash
# cmd_convert_csv = """sed -e 's/"/\\"/g' -e 's/\t/","/g' -e 's/^/"/' -e 's/$/"/' -e 's/\r//' concepts.tsv > concepts.csv"""
# os.system(cmd_convert_csv)
###Output
_____no_output_____ |
openfl-tutorials/interactive_api/PyTorch_Huggingface_transformers_SUPERB/workspace/PyTorch_Huggingface_transformers_SUPERB.ipynb | ###Markdown
Federated Audio Classification tutorial with 🤗 Transformers
###Code
!pip install "datasets==1.14" "transformers==4.11.3" "librosa" "torch" "ipywidgets" "numpy==1.21.5"
###Output
_____no_output_____
###Markdown
Connect to the Federation
###Code
from openfl.interface.interactive_api.federation import Federation
client_id = "frontend"
director_node_fqdn = "localhost"
director_port = 50050
federation = Federation(
client_id=client_id,
director_node_fqdn=director_node_fqdn,
director_port=director_port,
tls=False,
)
shard_registry = federation.get_shard_registry()
shard_registry
federation.target_shape
###Output
_____no_output_____
###Markdown
Creating a FL experiment using Interactive API
###Code
from openfl.interface.interactive_api.experiment import (
DataInterface,
FLExperiment,
ModelInterface,
TaskInterface,
)
###Output
_____no_output_____
###Markdown
Register dataset
###Code
import datasets
import numpy as np
import torch
from torch.utils.data import Dataset
from transformers import (
AutoFeatureExtractor,
AutoModelForAudioClassification,
Trainer,
TrainingArguments,
)
model_checkpoint = "facebook/wav2vec2-base"
labels = [
"yes",
"no",
"up",
"down",
"left",
"right",
"on",
"off",
"stop",
"go",
"_silence_",
"_unknown_",
]
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
label2id[label] = str(i)
id2label[str(i)] = label
feature_extractor = AutoFeatureExtractor.from_pretrained(model_checkpoint)
max_duration = 1.0
def preprocess_function(pre_processed_data):
audio_arrays = pre_processed_data
inputs = feature_extractor(
audio_arrays,
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
truncation=True,
)
return inputs
class SuperbShardDataset(Dataset):
def __init__(self, dataset):
self._dataset = dataset
def __getitem__(self, index):
x, y = self._dataset[index]
x = preprocess_function(x)
return {"input_values": x["input_values"][0], "labels": y}
def __len__(self):
return len(self._dataset)
class SuperbFedDataset(DataInterface):
def __init__(self, **kwargs):
super().__init__(**kwargs)
@property
def shard_descriptor(self):
return self._shard_descriptor
@shard_descriptor.setter
def shard_descriptor(self, shard_descriptor):
"""
Describe per-collaborator procedures for sharding.
This method will be called during a collaborator initialization.
Local shard_descriptor will be set by Envoy.
"""
self._shard_descriptor = shard_descriptor
self.train_set = SuperbShardDataset(
self._shard_descriptor.get_dataset("train"),
)
self.valid_set = SuperbShardDataset(
self._shard_descriptor.get_dataset("val"),
)
self.test_set = SuperbShardDataset(
self._shard_descriptor.get_dataset("test"),
)
def __getitem__(self, index):
return self.shard_descriptor[index]
def __len__(self):
return len(self.shard_descriptor)
def get_train_loader(self):
return self.train_set
def get_valid_loader(self):
return self.valid_set
def get_train_data_size(self):
return len(self.train_set)
def get_valid_data_size(self):
return len(self.valid_set)
fed_dataset = SuperbFedDataset()
###Output
_____no_output_____
###Markdown
Describe a model and optimizer
###Code
"""
Download the pretrained model and fine-tune it. For classification we use the AutoModelForAudioClassification class.
"""
num_labels = len(id2label)
model = AutoModelForAudioClassification.from_pretrained(
model_checkpoint,
num_labels=num_labels,
label2id=label2id,
id2label=id2label,
)
from transformers import AdamW
params_to_update = []
for param in model.parameters():
if param.requires_grad == True:
params_to_update.append(param)
optimizer = AdamW(params_to_update, lr=3e-5)
###Output
_____no_output_____
###Markdown
Register model
###Code
framework_adapter = (
"openfl.plugins.frameworks_adapters.pytorch_adapter.FrameworkAdapterPlugin"
)
MI = ModelInterface(
model=model, optimizer=optimizer, framework_plugin=framework_adapter
)
###Output
_____no_output_____
###Markdown
Define and register FL tasks
###Code
batch_size = 16
args = TrainingArguments(
"finetuned_model",
save_strategy="epoch",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
num_train_epochs=1,
warmup_ratio=0.1,
logging_steps=10,
push_to_hub=False,
)
from datasets import load_metric
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
"""Computes accuracy on a batch of predictions"""
predictions = np.argmax(eval_pred.predictions, axis=1)
return metric.compute(predictions=predictions, references=eval_pred.label_ids)
TI = TaskInterface()
import torch.nn as nn
import tqdm
@TI.register_fl_task(
model="model", data_loader="train_loader", device="device", optimizer="optimizer"
)
def train(model, train_loader, optimizer, device):
print(f"\n\n TASK TRAIN GOT DEVICE {device}\n\n")
trainer = Trainer(
model.to(device),
args,
train_dataset=train_loader,
tokenizer=feature_extractor,
optimizers=(optimizer, None),
compute_metrics=compute_metrics,
)
train_metrics = trainer.train()
return {"train_loss": train_metrics.metrics["train_loss"]}
@TI.register_fl_task(model="model", data_loader="val_loader", device="device")
def validate(model, val_loader, device):
print(f"\n\n TASK VALIDATE GOT DEVICE {device}\n\n")
trainer = Trainer(
model.to(device),
args,
eval_dataset=val_loader,
tokenizer=feature_extractor,
compute_metrics=compute_metrics,
)
eval_metrics = trainer.evaluate()
return {"eval_accuracy": eval_metrics["eval_accuracy"]}
###Output
_____no_output_____
###Markdown
Time to start a federated learning experiment
###Code
experiment_name = "HF_audio_test_experiment"
fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)
fl_experiment.start(
model_provider=MI,
task_keeper=TI,
data_loader=fed_dataset,
rounds_to_train=2,
opt_treatment="CONTINUE_GLOBAL",
device_assignment_policy="CUDA_PREFERRED",
)
fl_experiment.stream_metrics()
###Output
_____no_output_____ |
convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb | ###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
###Code
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
###Output
tcmalloc: large alloc 1073750016 bytes == 0x5884a000 @ 0x7f388022a2a4 0x591a07 0x5b5d56 0x502e9a 0x506859 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x502209 0x502f3d 0x506859 0x504c28 0x502540 0x502f3d 0x507641 0x504c28 0x502540 0x502f3d 0x507641
###Markdown
Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
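A minimal device-selection sketch (the `device` variable is illustrative, not part of the original notebook) shows the equivalent `.to()` idiom:

```python
import torch

# pick the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(2, 3)
x = x.to(device)  # a CUDA tensor behaves like a CPU tensor, but its math runs on the GPU
```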
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
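For the TODO above, a hedged sketch of extra transforms one might try (RandomCrop and ColorJitter are illustrative choices, not part of the original pipeline):

```python
import torchvision.transforms as transforms

# a candidate augmentation pipeline to compare against the one defined below
extra_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),  # random 32x32 crops after 4px zero padding
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
```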
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
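As a quick sanity check of the `(W−F+2P)/S+1` formula, a small helper (the function name is only for illustration):

```python
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a conv/pool layer: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 5 -> the 7x7 input, 3x3 filter example above
print(conv_output_size(7, 3, S=2, P=0))   # 3 -> same filter with stride 2
print(conv_output_size(32, 3, S=1, P=1))  # 32 -> the padded 3x3 convolutions below preserve size
```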
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
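For the TODO above, a hedged sketch of optimizer variants one could compare against plain SGD (this reuses the `model` defined earlier; the learning rates are only starting points, not tuned values):

```python
import torch.optim as optim

# common alternatives to compare: SGD with momentum, or Adam
optimizer_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
```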
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.686030 Validation Loss: 0.358223
Validation loss decreased (inf --> 0.358223). Saving model ...
Epoch: 2 Training Loss: 1.349165 Validation Loss: 0.307346
Validation loss decreased (0.358223 --> 0.307346). Saving model ...
Epoch: 3 Training Loss: 1.216118 Validation Loss: 0.281871
Validation loss decreased (0.307346 --> 0.281871). Saving model ...
Epoch: 4 Training Loss: 1.128092 Validation Loss: 0.267563
Validation loss decreased (0.281871 --> 0.267563). Saving model ...
Epoch: 5 Training Loss: 1.058477 Validation Loss: 0.244597
Validation loss decreased (0.267563 --> 0.244597). Saving model ...
Epoch: 6 Training Loss: 1.002594 Validation Loss: 0.237893
Validation loss decreased (0.244597 --> 0.237893). Saving model ...
Epoch: 7 Training Loss: 0.941602 Validation Loss: 0.215911
Validation loss decreased (0.237893 --> 0.215911). Saving model ...
Epoch: 8 Training Loss: 0.892041 Validation Loss: 0.207631
Validation loss decreased (0.215911 --> 0.207631). Saving model ...
Epoch: 9 Training Loss: 0.851250 Validation Loss: 0.201307
Validation loss decreased (0.207631 --> 0.201307). Saving model ...
Epoch: 10 Training Loss: 0.818369 Validation Loss: 0.190010
Validation loss decreased (0.201307 --> 0.190010). Saving model ...
Epoch: 11 Training Loss: 0.790613 Validation Loss: 0.184944
Validation loss decreased (0.190010 --> 0.184944). Saving model ...
Epoch: 12 Training Loss: 0.757040 Validation Loss: 0.180804
Validation loss decreased (0.184944 --> 0.180804). Saving model ...
Epoch: 13 Training Loss: 0.735242 Validation Loss: 0.167090
Validation loss decreased (0.180804 --> 0.167090). Saving model ...
Epoch: 14 Training Loss: 0.713015 Validation Loss: 0.167156
Epoch: 15 Training Loss: 0.692718 Validation Loss: 0.162530
Validation loss decreased (0.167090 --> 0.162530). Saving model ...
Epoch: 16 Training Loss: 0.681105 Validation Loss: 0.159103
Validation loss decreased (0.162530 --> 0.159103). Saving model ...
Epoch: 17 Training Loss: 0.664508 Validation Loss: 0.157283
Validation loss decreased (0.159103 --> 0.157283). Saving model ...
Epoch: 18 Training Loss: 0.649954 Validation Loss: 0.161225
Epoch: 19 Training Loss: 0.632150 Validation Loss: 0.149274
Validation loss decreased (0.157283 --> 0.149274). Saving model ...
Epoch: 20 Training Loss: 0.621994 Validation Loss: 0.146372
Validation loss decreased (0.149274 --> 0.146372). Saving model ...
Epoch: 21 Training Loss: 0.610832 Validation Loss: 0.147811
Epoch: 22 Training Loss: 0.594228 Validation Loss: 0.143662
Validation loss decreased (0.146372 --> 0.143662). Saving model ...
Epoch: 23 Training Loss: 0.584557 Validation Loss: 0.142820
Validation loss decreased (0.143662 --> 0.142820). Saving model ...
Epoch: 24 Training Loss: 0.576725 Validation Loss: 0.145113
Epoch: 25 Training Loss: 0.566297 Validation Loss: 0.140178
Validation loss decreased (0.142820 --> 0.140178). Saving model ...
Epoch: 26 Training Loss: 0.553700 Validation Loss: 0.137218
Validation loss decreased (0.140178 --> 0.137218). Saving model ...
Epoch: 27 Training Loss: 0.546982 Validation Loss: 0.135299
Validation loss decreased (0.137218 --> 0.135299). Saving model ...
Epoch: 28 Training Loss: 0.543173 Validation Loss: 0.133716
Validation loss decreased (0.135299 --> 0.133716). Saving model ...
Epoch: 29 Training Loss: 0.534489 Validation Loss: 0.134369
Epoch: 30 Training Loss: 0.523680 Validation Loss: 0.132735
Validation loss decreased (0.133716 --> 0.132735). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.676522
Test Accuracy of airplane: 80% (800/1000)
Test Accuracy of automobile: 90% (902/1000)
Test Accuracy of bird: 65% (651/1000)
Test Accuracy of cat: 50% (502/1000)
Test Accuracy of deer: 79% (790/1000)
Test Accuracy of dog: 67% (670/1000)
Test Accuracy of frog: 87% (879/1000)
Test Accuracy of horse: 77% (776/1000)
Test Accuracy of ship: 86% (864/1000)
Test Accuracy of truck: 81% (819/1000)
Test Accuracy (Overall): 76% (7653/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 2
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.064211 Validation Loss: 1.761515
Validation loss decreased (inf --> 1.761515). Saving model ...
Epoch: 2 Training Loss: 1.660468 Validation Loss: 1.560094
Validation loss decreased (1.761515 --> 1.560094). Saving model ...
Epoch: 3 Training Loss: 1.489196 Validation Loss: 1.369475
Validation loss decreased (1.560094 --> 1.369475). Saving model ...
Epoch: 4 Training Loss: 1.388617 Validation Loss: 1.281986
Validation loss decreased (1.369475 --> 1.281986). Saving model ...
Epoch: 5 Training Loss: 1.305705 Validation Loss: 1.210143
Validation loss decreased (1.281986 --> 1.210143). Saving model ...
Epoch: 6 Training Loss: 1.239662 Validation Loss: 1.163721
Validation loss decreased (1.210143 --> 1.163721). Saving model ...
Epoch: 7 Training Loss: 1.176221 Validation Loss: 1.078836
Validation loss decreased (1.163721 --> 1.078836). Saving model ...
Epoch: 8 Training Loss: 1.126307 Validation Loss: 1.050761
Validation loss decreased (1.078836 --> 1.050761). Saving model ...
Epoch: 9 Training Loss: 1.077295 Validation Loss: 1.040278
Validation loss decreased (1.050761 --> 1.040278). Saving model ...
Epoch: 10 Training Loss: 1.038051 Validation Loss: 0.979989
Validation loss decreased (1.040278 --> 0.979989). Saving model ...
Epoch: 11 Training Loss: 1.000759 Validation Loss: 0.928531
Validation loss decreased (0.979989 --> 0.928531). Saving model ...
Epoch: 12 Training Loss: 0.970272 Validation Loss: 0.898299
Validation loss decreased (0.928531 --> 0.898299). Saving model ...
Epoch: 13 Training Loss: 0.938357 Validation Loss: 0.873495
Validation loss decreased (0.898299 --> 0.873495). Saving model ...
Epoch: 14 Training Loss: 0.913281 Validation Loss: 0.847316
Validation loss decreased (0.873495 --> 0.847316). Saving model ...
Epoch: 15 Training Loss: 0.885292 Validation Loss: 0.878665
Epoch: 16 Training Loss: 0.866795 Validation Loss: 0.823176
Validation loss decreased (0.847316 --> 0.823176). Saving model ...
Epoch: 17 Training Loss: 0.845517 Validation Loss: 0.812109
Validation loss decreased (0.823176 --> 0.812109). Saving model ...
Epoch: 18 Training Loss: 0.829938 Validation Loss: 0.802247
Validation loss decreased (0.812109 --> 0.802247). Saving model ...
Epoch: 19 Training Loss: 0.810250 Validation Loss: 0.790545
Validation loss decreased (0.802247 --> 0.790545). Saving model ...
Epoch: 20 Training Loss: 0.794639 Validation Loss: 0.761051
Validation loss decreased (0.790545 --> 0.761051). Saving model ...
Epoch: 21 Training Loss: 0.778902 Validation Loss: 0.746980
Validation loss decreased (0.761051 --> 0.746980). Saving model ...
Epoch: 22 Training Loss: 0.766747 Validation Loss: 0.745301
Validation loss decreased (0.746980 --> 0.745301). Saving model ...
Epoch: 23 Training Loss: 0.753324 Validation Loss: 0.747124
Epoch: 24 Training Loss: 0.738846 Validation Loss: 0.732334
Validation loss decreased (0.745301 --> 0.732334). Saving model ...
Epoch: 25 Training Loss: 0.724490 Validation Loss: 0.735533
Epoch: 26 Training Loss: 0.718561 Validation Loss: 0.722541
Validation loss decreased (0.732334 --> 0.722541). Saving model ...
Epoch: 27 Training Loss: 0.702438 Validation Loss: 0.727484
Epoch: 28 Training Loss: 0.693557 Validation Loss: 0.701721
Validation loss decreased (0.722541 --> 0.701721). Saving model ...
Epoch: 29 Training Loss: 0.680966 Validation Loss: 0.691138
Validation loss decreased (0.701721 --> 0.691138). Saving model ...
Epoch: 30 Training Loss: 0.670703 Validation Loss: 0.692802
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.705815
Test Accuracy of airplane: 79% (799/1000)
Test Accuracy of automobile: 87% (874/1000)
Test Accuracy of bird: 60% (609/1000)
Test Accuracy of cat: 53% (538/1000)
Test Accuracy of deer: 73% (730/1000)
Test Accuracy of dog: 70% (702/1000)
Test Accuracy of frog: 86% (863/1000)
Test Accuracy of horse: 79% (794/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 82% (824/1000)
Test Accuracy (Overall): 75% (7585/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
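###Markdown
The TODO above asks for additional augmentation transforms. The cell below is a hedged sketch of a slightly richer training pipeline; the specific choices (a padded random crop and mild colour jitter) are our own suggestions rather than part of the original exercise, and evaluation data would normally keep only `ToTensor` and `Normalize`.
###Code
# sketch only: a richer augmentation pipeline for the training set
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),                  # random 32x32 crop from a zero-padded image
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),  # mild photometric jitter
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# evaluation data should stay deterministic
eval_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____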
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # the .next() method was removed from newer DataLoader iterators
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
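###Markdown
To make the quoted formula concrete, here is a small helper (our own addition, not part of the original exercise) that computes the output width `(W - F + 2P) / S + 1` and checks the sizes used by the network defined below: three 3x3 convolutions with padding 1 keep the 32x32 resolution, each 2x2 max pool halves it, and the result is a 4x4 map with 64 channels, i.e. the 64 * 4 * 4 input of the first linear layer.
###Code
def conv_output_width(W, F, S=1, P=0):
    """Spatial output width of a conv/pool layer: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

# examples from the text: 7x7 input, 3x3 filter
print(conv_output_width(7, 3, S=1, P=0))   # 5
print(conv_output_width(7, 3, S=2, P=0))   # 3

# the CIFAR-10 pipeline used below: conv (3x3, pad 1) keeps the size, 2x2 pooling halves it
w = 32
for _ in range(3):
    w = conv_output_width(w, 3, S=1, P=1)  # conv layer, size unchanged
    w = conv_output_width(w, 2, S=2, P=0)  # 2x2 max pool, size halved
print(w)                                    # 4, so the flattened size is 64 * 4 * 4
###Output
_____no_output_____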
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
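###Markdown
As a quick sanity check (our own addition, not required by the exercise), the number of trainable parameters can be counted directly from `model.parameters()`; most of them sit in the first fully-connected layer (64 * 4 * 4 * 500 weights).
###Code
# count trainable parameters as a rough sanity check
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Trainable parameters: {:,}'.format(n_params))
###Output
_____no_output_____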
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
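###Markdown
The TODO above leaves the optimizer choice open. Two common alternatives are sketched here purely for comparison; they are our own additions and neither is used in the training run recorded below.
###Code
# hedged sketch: alternatives to plain SGD (not used for the results below)
sgd_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adam = optim.Adam(model.parameters(), lr=0.001)
# to try one of them, point `optimizer` at it before running the training loop:
# optimizer = adam
###Output
_____no_output_____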
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
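###Markdown
The checkpointing above already keeps the best weights. A common extension, sketched below with hypothetical names that are not part of the original loop, is early stopping: give up once the validation loss has not improved for a fixed number of epochs.
###Code
# sketch: an early-stopping helper that could wrap the checkpointing logic above
patience = 5  # assumed patience; tune as needed
def early_stop_update(valid_loss, valid_loss_min, epochs_no_improve):
    """Return (best_loss, epochs_without_improvement, stop_now) after one epoch."""
    if valid_loss <= valid_loss_min:
        return valid_loss, 0, False                            # improvement: reset the counter
    epochs_no_improve += 1
    return valid_loss_min, epochs_no_improve, epochs_no_improve >= patience
###Output
_____no_output_____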
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
    train_loss = train_loss/len(train_loader.sampler)  # average over the samples each sampler actually draws
    valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
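###Markdown
`model_augmented.pt` holds only the network weights. If training might be resumed later, a fuller checkpoint usually also carries the optimizer state and some bookkeeping values; the sketch below (the file name and dictionary keys are our own choice) shows that pattern.
###Code
# sketch: save and restore a fuller checkpoint for resuming training
checkpoint = {
    'epoch': n_epochs,
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
    'valid_loss_min': valid_loss_min,
}
torch.save(checkpoint, 'checkpoint_augmented.pt')

restored = torch.load('checkpoint_augmented.pt')
model.load_state_dict(restored['model_state'])
optimizer.load_state_dict(restored['optimizer_state'])
###Output
_____no_output_____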
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
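###Markdown
Beyond per-class accuracy, a confusion matrix shows which classes get mistaken for which. This is a minimal sketch added here for illustration; it re-runs the test loader and builds the 10x10 matrix with plain torch operations, so no extra dependencies are assumed.
###Code
# sketch: confusion matrix over the test set (rows = true class, columns = predicted class)
confusion = torch.zeros(10, 10, dtype=torch.long)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        preds = model(data).argmax(dim=1)
        for t, p in zip(target.cpu(), preds.cpu()):
            confusion[t, p] += 1
print(confusion)
###Output
_____no_output_____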
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
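###Markdown
With plain SGD, the learning rate stays fixed for all epochs. A step scheduler is one common refinement; the sketch below is our own addition with illustrative values, and the scheduler is not used in the recorded run. It would decay the rate by 10x every 10 epochs via `scheduler.step()` called once per epoch.
###Code
# sketch: decay the learning rate by a factor of 10 every 10 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
# inside the training loop, after the optimizer updates for the epoch:
# scheduler.step()
###Output
_____no_output_____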
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
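###Markdown
The titles above show only the argmax class. To see how confident the network is, the raw scores can be turned into probabilities with a softmax and inspected with `topk`; this small sketch is our own addition and reuses the `output` tensor from the cell above.
###Code
# sketch: class probabilities and top-3 guesses for the first test image in the batch
probs = F.softmax(output, dim=1)
top_p, top_class = probs[0].topk(3)
for p, c in zip(top_p, top_class):
    print('{}: {:.1%}'.format(classes[c.item()], p.item()))
###Output
_____no_output_____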
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
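###Markdown
The TODO above asks for additional augmentation transforms. Here is one possible sketch (an assumption for illustration, not the transform that produced the runs below): it adds a padded random crop and a mild color jitter on top of the flip and rotation, and keeps a separate, augmentation-free transform for the test data. The names `train_transform` and `test_transform` are hypothetical.
###Code
import torchvision.transforms as transforms

# candidate training-time augmentation (illustrative only)
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),              # random 32x32 crop from a zero-padded image
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# test/validation data should only be normalized, not augmented
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____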
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
c:\programdata\anaconda3\envs\deep-learning\lib\site-packages\ipykernel_launcher.py:10: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
# Remove the CWD from sys.path while we load stuff.
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
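###Markdown
To make the quoted formula concrete, here is a small helper (not in the original notebook; `conv_output_size` is a hypothetical name) that evaluates `(W - F + 2P)/S + 1` and reproduces the 7x7 example above, plus the 3x3/padding-1 case used by the layers below.
###Code
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a convolutional layer: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 5, as in the example above
print(conv_output_size(7, 3, S=2, P=0))   # 3, as in the stride-2 example
print(conv_output_size(32, 3, S=1, P=1))  # 32: the 3x3 convs with padding=1 below preserve the size
###Output
_____no_output_____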
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss function and optimizer that are best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
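###Markdown
One optional variation (an assumption for illustration, not the configuration that produced the training log below): plain SGD often converges faster with momentum and a little weight decay. The name `optimizer_alt` is hypothetical.
###Code
import torch.optim as optim

# illustrative alternative to the plain SGD above
optimizer_alt = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)
###Output
_____no_output_____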
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.119972 Validation Loss: 1.856073
Validation loss decreased (inf --> 1.856073). Saving model ...
Epoch: 2 Training Loss: 1.716409 Validation Loss: 1.568614
Validation loss decreased (1.856073 --> 1.568614). Saving model ...
Epoch: 3 Training Loss: 1.525164 Validation Loss: 1.450788
Validation loss decreased (1.568614 --> 1.450788). Saving model ...
Epoch: 4 Training Loss: 1.423677 Validation Loss: 1.397732
Validation loss decreased (1.450788 --> 1.397732). Saving model ...
Epoch: 5 Training Loss: 1.342260 Validation Loss: 1.270766
Validation loss decreased (1.397732 --> 1.270766). Saving model ...
Epoch: 6 Training Loss: 1.267944 Validation Loss: 1.178002
Validation loss decreased (1.270766 --> 1.178002). Saving model ...
Epoch: 7 Training Loss: 1.201501 Validation Loss: 1.171120
Validation loss decreased (1.178002 --> 1.171120). Saving model ...
Epoch: 8 Training Loss: 1.145049 Validation Loss: 1.071912
Validation loss decreased (1.171120 --> 1.071912). Saving model ...
Epoch: 9 Training Loss: 1.100157 Validation Loss: 1.035417
Validation loss decreased (1.071912 --> 1.035417). Saving model ...
Epoch: 10 Training Loss: 1.054594 Validation Loss: 1.026506
Validation loss decreased (1.035417 --> 1.026506). Saving model ...
Epoch: 11 Training Loss: 1.017969 Validation Loss: 0.976097
Validation loss decreased (1.026506 --> 0.976097). Saving model ...
Epoch: 12 Training Loss: 0.978421 Validation Loss: 0.936046
Validation loss decreased (0.976097 --> 0.936046). Saving model ...
Epoch: 13 Training Loss: 0.949794 Validation Loss: 0.927263
Validation loss decreased (0.936046 --> 0.927263). Saving model ...
Epoch: 14 Training Loss: 0.921709 Validation Loss: 0.887280
Validation loss decreased (0.927263 --> 0.887280). Saving model ...
Epoch: 15 Training Loss: 0.898048 Validation Loss: 0.876876
Validation loss decreased (0.887280 --> 0.876876). Saving model ...
Epoch: 16 Training Loss: 0.867541 Validation Loss: 0.842790
Validation loss decreased (0.876876 --> 0.842790). Saving model ...
Epoch: 17 Training Loss: 0.846852 Validation Loss: 0.861555
Epoch: 18 Training Loss: 0.830327 Validation Loss: 0.842768
Validation loss decreased (0.842790 --> 0.842768). Saving model ...
Epoch: 19 Training Loss: 0.806609 Validation Loss: 0.860695
Epoch: 20 Training Loss: 0.794269 Validation Loss: 0.782999
Validation loss decreased (0.842768 --> 0.782999). Saving model ...
Epoch: 21 Training Loss: 0.780822 Validation Loss: 0.787927
Epoch: 22 Training Loss: 0.768194 Validation Loss: 0.769228
Validation loss decreased (0.782999 --> 0.769228). Saving model ...
Epoch: 23 Training Loss: 0.749033 Validation Loss: 0.756946
Validation loss decreased (0.769228 --> 0.756946). Saving model ...
Epoch: 24 Training Loss: 0.738612 Validation Loss: 0.745380
Validation loss decreased (0.756946 --> 0.745380). Saving model ...
Epoch: 25 Training Loss: 0.725895 Validation Loss: 0.741522
Validation loss decreased (0.745380 --> 0.741522). Saving model ...
Epoch: 26 Training Loss: 0.712221 Validation Loss: 0.724920
Validation loss decreased (0.741522 --> 0.724920). Saving model ...
Epoch: 27 Training Loss: 0.699068 Validation Loss: 0.736928
Epoch: 28 Training Loss: 0.689836 Validation Loss: 0.724190
Validation loss decreased (0.724920 --> 0.724190). Saving model ...
Epoch: 29 Training Loss: 0.685232 Validation Loss: 0.714735
Validation loss decreased (0.724190 --> 0.714735). Saving model ...
Epoch: 30 Training Loss: 0.668873 Validation Loss: 0.714472
Validation loss decreased (0.714735 --> 0.714472). Saving model ...
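###Markdown
To literally look at how the training and validation loss evolve over time, one option (a sketch, not part of the original run) is to append the per-epoch averages to two lists inside the loop above and plot them afterwards. The names `train_losses` and `valid_losses` are hypothetical.
###Code
import matplotlib.pyplot as plt

# in this sketch the lists would be filled inside the epoch loop above, e.g.:
#   train_losses.append(train_loss); valid_losses.append(valid_loss)
train_losses, valid_losses = [], []

plt.plot(train_losses, label='training loss')
plt.plot(valid_losses, label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
###Output
_____no_output_____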
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.696878
Test Accuracy of airplane: 72% (720/1000)
Test Accuracy of automobile: 84% (844/1000)
Test Accuracy of bird: 70% (707/1000)
Test Accuracy of cat: 58% (589/1000)
Test Accuracy of deer: 77% (775/1000)
Test Accuracy of dog: 59% (599/1000)
Test Accuracy of frog: 84% (844/1000)
Test Accuracy of horse: 77% (776/1000)
Test Accuracy of ship: 86% (864/1000)
Test Accuracy of truck: 85% (852/1000)
Test Accuracy (Overall): 75% (7570/10000)
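###Markdown
Beyond per-class accuracy, a confusion matrix shows which classes get mixed up with each other (for example cat vs. dog). A minimal sketch, reusing the `model`, `test_loader`, `train_on_gpu` and `classes` defined above; `confusion` is a hypothetical name.
###Code
import torch

confusion = torch.zeros(10, 10, dtype=torch.long)  # rows: true class, columns: predicted class

model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.cpu(), pred.cpu()):
            confusion[t.item(), p.item()] += 1

print(classes)
print(confusion)
###Output
_____no_output_____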
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
c:\programdata\anaconda3\envs\deep-learning\lib\site-packages\ipykernel_launcher.py:19: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer 1
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# max pooling layer 1
self.pool1 = nn.MaxPool2d(2, 2)
# conv layer 2
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# max pool layer 2
self.pool2 = nn.MaxPool2d(2, 2)
# conv layer 3
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pool layer 3
self.pool3 = nn.MaxPool2d(2, 2)
# drop out layer
self.dropout = nn.Dropout(0.2)
# linear 1
self.fc1 = nn.Linear(1024, 512)
# linear 2
self.fc2 = nn.Linear(512,256)
# linear 3
self.fc3 = nn.Linear(256, 10)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool1(F.relu(self.conv1(x)))
x = self.dropout(x)
x = self.pool2(F.relu(self.conv2(x)))
x = self.dropout(x)
x = self.pool3(F.relu(self.conv3(x)))
# flatten
#x.resize_(x.shape[0], 1024)
x = x.view(-1,1024)
# FC Classifier
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.dropout(x)
x = self.fc3(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(dropout): Dropout(p=0.2)
(fc1): Linear(in_features=1024, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=256, bias=True)
(fc3): Linear(in_features=256, out_features=10, bias=True)
)
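###Markdown
A quick optional check (not in the original notebook) of how large this network is; most of the parameters sit in the first fully-connected layer. The name `n_params` is illustrative.
###Code
# count the trainable parameters of the model defined above
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('trainable parameters: {:,}'.format(n_params))
###Output
_____no_output_____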
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss function and optimizer that are best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____
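###Markdown
Since the validation loss in the log below flattens out toward the end of training, one common optional addition (an assumption, not something this run used) is a scheduler that lowers the learning rate when the validation loss stops improving.
###Code
from torch.optim.lr_scheduler import ReduceLROnPlateau

# halve the learning rate if the validation loss has not improved for 3 epochs
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.5, patience=3)

# at the end of each epoch, after computing valid_loss, you would call:
#   scheduler.step(valid_loss)
###Output
_____no_output_____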
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
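    # note: both sums are divided by the size of the full training set (50,000 images),
    # not by the number of samples each loader actually visited, so the printed averages
    # below are scaled down (the validation sampler only covers 10,000 of those images)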
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.255377 Validation Loss: 0.252739
Validation loss decreased (inf --> 0.252739). Saving model ...
Epoch: 2 Training Loss: 1.005428 Validation Loss: 0.259221
Epoch: 3 Training Loss: 0.898902 Validation Loss: 0.217469
Validation loss decreased (0.252739 --> 0.217469). Saving model ...
Epoch: 4 Training Loss: 0.840361 Validation Loss: 0.198862
Validation loss decreased (0.217469 --> 0.198862). Saving model ...
Epoch: 5 Training Loss: 0.799840 Validation Loss: 0.192497
Validation loss decreased (0.198862 --> 0.192497). Saving model ...
Epoch: 6 Training Loss: 0.772472 Validation Loss: 0.190164
Validation loss decreased (0.192497 --> 0.190164). Saving model ...
Epoch: 7 Training Loss: 0.747666 Validation Loss: 0.178104
Validation loss decreased (0.190164 --> 0.178104). Saving model ...
Epoch: 8 Training Loss: 0.736520 Validation Loss: 0.182220
Epoch: 9 Training Loss: 0.717783 Validation Loss: 0.172282
Validation loss decreased (0.178104 --> 0.172282). Saving model ...
Epoch: 10 Training Loss: 0.706201 Validation Loss: 0.168155
Validation loss decreased (0.172282 --> 0.168155). Saving model ...
Epoch: 11 Training Loss: 0.693486 Validation Loss: 0.165464
Validation loss decreased (0.168155 --> 0.165464). Saving model ...
Epoch: 12 Training Loss: 0.685488 Validation Loss: 0.169744
Epoch: 13 Training Loss: 0.680690 Validation Loss: 0.163065
Validation loss decreased (0.165464 --> 0.163065). Saving model ...
Epoch: 14 Training Loss: 0.671358 Validation Loss: 0.167852
Epoch: 15 Training Loss: 0.658454 Validation Loss: 0.157953
Validation loss decreased (0.163065 --> 0.157953). Saving model ...
Epoch: 16 Training Loss: 0.657172 Validation Loss: 0.161652
Epoch: 17 Training Loss: 0.655414 Validation Loss: 0.157128
Validation loss decreased (0.157953 --> 0.157128). Saving model ...
Epoch: 18 Training Loss: 0.649229 Validation Loss: 0.163439
Epoch: 19 Training Loss: 0.644145 Validation Loss: 0.157231
Epoch: 20 Training Loss: 0.636254 Validation Loss: 0.156668
Validation loss decreased (0.157128 --> 0.156668). Saving model ...
Epoch: 21 Training Loss: 0.633594 Validation Loss: 0.164706
Epoch: 22 Training Loss: 0.632842 Validation Loss: 0.153217
Validation loss decreased (0.156668 --> 0.153217). Saving model ...
Epoch: 23 Training Loss: 0.629659 Validation Loss: 0.155349
Epoch: 24 Training Loss: 0.626865 Validation Loss: 0.156579
Epoch: 25 Training Loss: 0.622864 Validation Loss: 0.154366
Epoch: 26 Training Loss: 0.616685 Validation Loss: 0.154984
Epoch: 27 Training Loss: 0.616530 Validation Loss: 0.158305
Epoch: 28 Training Loss: 0.618614 Validation Loss: 0.152199
Validation loss decreased (0.153217 --> 0.152199). Saving model ...
Epoch: 29 Training Loss: 0.613604 Validation Loss: 0.154333
Epoch: 30 Training Loss: 0.614361 Validation Loss: 0.149635
Validation loss decreased (0.152199 --> 0.149635). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.729718
Test Accuracy of airplane: 80% (806/1000)
Test Accuracy of automobile: 85% (855/1000)
Test Accuracy of bird: 57% (575/1000)
Test Accuracy of cat: 49% (492/1000)
Test Accuracy of deer: 71% (719/1000)
Test Accuracy of dog: 64% (641/1000)
Test Accuracy of frog: 88% (889/1000)
Test Accuracy of horse: 79% (795/1000)
Test Accuracy of ship: 89% (894/1000)
Test Accuracy of truck: 82% (829/1000)
Test Accuracy (Overall): 74% (7495/10000)
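###Markdown
Unlike the earlier version of this test cell, this one does not call `model.eval()` explicitly; it happens to behave correctly here only because the network was left in eval mode after the last validation pass, so it is safer to set the mode explicitly. A minimal evaluation sketch using the usual `model.eval()` + `torch.no_grad()` pattern, reusing the names defined above:
###Code
correct = 0
total = 0

model.eval()               # make sure dropout is disabled for evaluation
with torch.no_grad():      # no gradients are needed at test time
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        correct += (pred == target).sum().item()
        total += target.size(0)

print('accuracy in eval mode: {:.2f}%'.format(100.0 * correct / total))
# model.train() would switch dropout back on before any further training
###Output
_____no_output_____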
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss function and optimizer that are best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
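###Markdown
The checkpointing above already keeps the weights with the lowest validation loss; a related optional idea (a sketch with a hypothetical `should_stop` helper and a made-up loss history, not part of the original loop) is to stop training once the validation loss has gone several epochs without setting a new minimum.
###Code
def should_stop(valid_losses, patience=5):
    """Return True once the last `patience` epochs brought no new minimum."""
    if len(valid_losses) <= patience:
        return False
    best_before = min(valid_losses[:-patience])
    return min(valid_losses[-patience:]) >= best_before

# toy example values (not taken from the training log above): the loss stalls after epoch 3
history = [0.30, 0.25, 0.21, 0.22, 0.22, 0.23, 0.21, 0.22, 0.23]
print(should_stop(history, patience=5))  # True: time to stop in this sketch
###Output
_____no_output_____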
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
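# CIFAR-10 provides 50,000 training images, so valid_size = 0.2 sends 10,000 of them to the validation split and keeps 40,000 for training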
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
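###Markdown
As one possible answer to the augmentation TODO above, the cell below sketches a slightly richer training transform. It is only an illustration and is not the transform used for the recorded results in this notebook; `RandomCrop` and `ColorJitter` are standard torchvision transforms.
###Code
# illustrative only: a richer augmentation pipeline one could swap in for the
# training set (the runs below keep the transform defined above)
augmented_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),                   # random shifts via padded cropping
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # mild photometric jitter
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____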
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
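    # (this inverts transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)): x * 0.5 + 0.5 maps values from [-1, 1] back to [0, 1])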
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating how many neurons fit along the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
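To make the arithmetic concrete, the short sketch below (not part of the original notebook) applies the `(W−F+2P)/S+1` rule to the 3x3, stride-1, pad-1 convolutions and 2x2 max-pools used here, tracing an input of width 32 down to the 4x4 feature maps that feed the first linear layer.
###Code
# a quick sanity check of the output-volume formula (illustrative sketch)
def conv_output_size(W, F, S=1, P=0):
    """Spatial size after a convolution: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

size = 32
for layer in range(3):
    size = conv_output_size(size, F=3, S=1, P=1)  # 3x3 conv, stride 1, pad 1 keeps the size
    size = size // 2                              # 2x2 max-pool halves it
    print('after conv/pool block', layer + 1, '->', size)
# ends at 4, so the flattened feature vector has 64 * 4 * 4 = 1024 entries,
# matching fc1 = nn.Linear(64 * 4 * 4, 500) in the next cell
###Output
_____no_output_____
###Markdown
The actual model is defined in the next cell.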
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images.cpu()[idx])  # move images back to the CPU before plotting (matplotlib cannot display CUDA tensors)
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating how many neurons fit along the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting. Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images.cpu()[idx])  # move images back to the CPU before plotting (a no-op here, but safe if a GPU is used)
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating how many neurons fit along the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
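###Markdown
As a quick, illustrative shape check (not part of the original notebook), the cell below pushes a single random 32x32x3 tensor through the model defined above and confirms that the output has one score per class.
###Code
# illustrative shape check for the model above
with torch.no_grad():
    dummy = torch.randn(1, 3, 32, 32)   # one fake CIFAR-10-sized image
    if train_on_gpu:
        dummy = dummy.cuda()
    scores = model(dummy)
print(scores.shape)  # expected: torch.Size([1, 10]), one score per class
###Output
_____no_output_____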
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.003)
###Output
_____no_output_____
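###Markdown
As an illustration of the TODO above, the cell below sketches two common alternatives one could compare against plain SGD with lr=0.003. They are assigned to separate names and are not used by the training run below, which keeps the optimizer defined above.
###Code
# illustrative alternatives for the optimizer comparison suggested above
# (not used below; the recorded run keeps optim.SGD(model.parameters(), lr=0.003))
alt_sgd_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
alt_adam = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____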
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# not sure why data augmentation makes the model much worse
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
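    # note: both loaders wrap train_data, so len(...dataset) is 50,000 in both cases;
    # dividing by len(train_loader.sampler) / len(valid_loader.sampler) (40,000 / 10,000) would give
    # per-sample averages comparable to the earlier runs, so the losses printed here are scaled down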
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.429576 Validation Loss: 0.342924
Validation loss decreased (inf --> 0.342924). Saving model ...
Epoch: 2 Training Loss: 1.426770 Validation Loss: 0.343579
Epoch: 3 Training Loss: 1.426600 Validation Loss: 0.345241
Epoch: 4 Training Loss: 1.424345 Validation Loss: 0.344848
Epoch: 5 Training Loss: 1.427292 Validation Loss: 0.344868
Epoch: 6 Training Loss: 1.422299 Validation Loss: 0.343448
Epoch: 7 Training Loss: 1.423572 Validation Loss: 0.342499
Validation loss decreased (0.342924 --> 0.342499). Saving model ...
Epoch: 8 Training Loss: 1.424229 Validation Loss: 0.342372
Validation loss decreased (0.342499 --> 0.342372). Saving model ...
Epoch: 9 Training Loss: 1.422735 Validation Loss: 0.344054
Epoch: 10 Training Loss: 1.423262 Validation Loss: 0.342952
Epoch: 11 Training Loss: 1.425862 Validation Loss: 0.344860
Epoch: 12 Training Loss: 1.423580 Validation Loss: 0.343631
Epoch: 13 Training Loss: 1.424658 Validation Loss: 0.342446
Epoch: 14 Training Loss: 1.426670 Validation Loss: 0.343678
Epoch: 15 Training Loss: 1.423545 Validation Loss: 0.342836
Epoch: 16 Training Loss: 1.420091 Validation Loss: 0.342850
Epoch: 17 Training Loss: 1.421856 Validation Loss: 0.341345
Validation loss decreased (0.342372 --> 0.341345). Saving model ...
Epoch: 18 Training Loss: 1.419028 Validation Loss: 0.342407
Epoch: 19 Training Loss: 1.422458 Validation Loss: 0.343618
Epoch: 20 Training Loss: 1.425628 Validation Loss: 0.341324
Validation loss decreased (0.341345 --> 0.341324). Saving model ...
Epoch: 21 Training Loss: 1.426779 Validation Loss: 0.342321
Epoch: 22 Training Loss: 1.425296 Validation Loss: 0.341068
Validation loss decreased (0.341324 --> 0.341068). Saving model ...
Epoch: 23 Training Loss: 1.422402 Validation Loss: 0.343446
Epoch: 24 Training Loss: 1.423574 Validation Loss: 0.341815
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 1.696241
Test Accuracy of airplane: 46% (461/1000)
Test Accuracy of automobile: 51% (515/1000)
Test Accuracy of bird: 9% (92/1000)
Test Accuracy of cat: 26% (265/1000)
Test Accuracy of deer: 46% (461/1000)
Test Accuracy of dog: 34% (349/1000)
Test Accuracy of frog: 19% (196/1000)
Test Accuracy of horse: 38% (386/1000)
Test Accuracy of ship: 43% (435/1000)
Test Accuracy of truck: 51% (515/1000)
Test Accuracy (Overall): 36% (3675/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images.cpu()[idx])  # move images back to the CPU before plotting (matplotlib cannot display CUDA tensors)
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating how many neurons fit along the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
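###Markdown
Most of the capacity of this network sits in the first fully-connected layer; counting parameters makes that explicit. A minimal sketch using the `model` printed above:
###Code
total = 0
for name, p in model.named_parameters():
    print(name, tuple(p.shape), p.numel())
    total += p.numel()
print('total parameters:', total)  # fc1.weight alone contributes 500 * 1024 = 512,000
###Output
_____no_output_____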
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
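###Markdown
A common variation (an assumption here, not what this run used) is plain SGD with momentum plus a stepwise learning-rate decay; both tend to speed up convergence on CIFAR-10. A sketch:
###Code
# alternative to the cell above
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
# call scheduler.step() once per epoch, after the validation pass
###Output
_____no_output_____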
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
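###Markdown
If the checkpoint was written on a GPU machine and later loaded on a CPU-only one, `torch.load` needs a `map_location` hint; a minimal sketch:
###Code
state_dict = torch.load('model_augmented.pt', map_location='cpu' if not train_on_gpu else None)
model.load_state_dict(state_dict)
###Output
_____no_output_____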
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
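###Markdown
The per-class accuracies above hint that birds, cats and dogs are the hard classes; a confusion matrix shows exactly which classes get mixed up. A sketch that re-runs the test loader (the `confusion` tensor is an addition, not part of the original notebook):
###Code
confusion = torch.zeros(10, 10, dtype=torch.long)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        pred = model(data).argmax(dim=1)
        for t, p in zip(target.view(-1), pred.view(-1)):
            confusion[t.long(), p.long()] += 1  # rows: true class, columns: predicted class
print(confusion)
###Output
_____no_output_____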
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx] if not train_on_gpu else images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
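###Markdown
The TODO above asks for additional transforms; one commonly tried combination (an assumption, not what this run used) pads the image and takes a random crop before flipping. Strictly speaking, the test set would keep only `ToTensor` and `Normalize` so that evaluation stays deterministic.
###Code
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # pad by 4 pixels, then take a random 32x32 crop
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____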
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
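###Markdown
To double-check the `64 * 4 * 4` figure fed into `fc1`, one can push a dummy batch through the convolutional part of the model just defined (a quick sanity check, not part of the original notebook):
###Code
with torch.no_grad():
    dummy = torch.zeros(1, 3, 32, 32)
    if train_on_gpu:
        dummy = dummy.cuda()
    out = model.pool(F.relu(model.conv1(dummy)))
    out = model.pool(F.relu(model.conv2(out)))
    out = model.pool(F.relu(model.conv3(out)))
    print(out.shape)  # expected: torch.Size([1, 64, 4, 4])
###Output
_____no_output_____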
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
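###Markdown
`nn.CrossEntropyLoss` applies log-softmax internally, which is why `forward` ends with a plain linear layer and no softmax. A small illustration of the equivalence (standalone, using random logits):
###Code
logits = torch.randn(4, 10)                 # raw scores for a batch of 4 images
labels = torch.tensor([1, 0, 4, 9])
a = nn.CrossEntropyLoss()(logits, labels)
b = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)
print(torch.allclose(a, b))                 # True
###Output
_____no_output_____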
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.691594 Validation Loss: 0.368257
Validation loss decreased (inf --> 0.368257). Saving model ...
Epoch: 2 Training Loss: 1.366392 Validation Loss: 0.304367
Validation loss decreased (0.368257 --> 0.304367). Saving model ...
Epoch: 3 Training Loss: 1.215555 Validation Loss: 0.282656
Validation loss decreased (0.304367 --> 0.282656). Saving model ...
Epoch: 4 Training Loss: 1.133317 Validation Loss: 0.263694
Validation loss decreased (0.282656 --> 0.263694). Saving model ...
Epoch: 5 Training Loss: 1.069666 Validation Loss: 0.245035
Validation loss decreased (0.263694 --> 0.245035). Saving model ...
Epoch: 6 Training Loss: 1.012479 Validation Loss: 0.253128
Epoch: 7 Training Loss: 0.961046 Validation Loss: 0.221501
Validation loss decreased (0.245035 --> 0.221501). Saving model ...
Epoch: 8 Training Loss: 0.920183 Validation Loss: 0.214656
Validation loss decreased (0.221501 --> 0.214656). Saving model ...
Epoch: 9 Training Loss: 0.886675 Validation Loss: 0.201458
Validation loss decreased (0.214656 --> 0.201458). Saving model ...
Epoch: 10 Training Loss: 0.851851 Validation Loss: 0.194988
Validation loss decreased (0.201458 --> 0.194988). Saving model ...
Epoch: 11 Training Loss: 0.823825 Validation Loss: 0.188037
Validation loss decreased (0.194988 --> 0.188037). Saving model ...
Epoch: 12 Training Loss: 0.798454 Validation Loss: 0.183874
Validation loss decreased (0.188037 --> 0.183874). Saving model ...
Epoch: 13 Training Loss: 0.771164 Validation Loss: 0.179418
Validation loss decreased (0.183874 --> 0.179418). Saving model ...
Epoch: 14 Training Loss: 0.753441 Validation Loss: 0.175129
Validation loss decreased (0.179418 --> 0.175129). Saving model ...
Epoch: 15 Training Loss: 0.736738 Validation Loss: 0.168084
Validation loss decreased (0.175129 --> 0.168084). Saving model ...
Epoch: 16 Training Loss: 0.719660 Validation Loss: 0.168367
Epoch: 17 Training Loss: 0.696935 Validation Loss: 0.168378
Epoch: 18 Training Loss: 0.680390 Validation Loss: 0.160132
Validation loss decreased (0.168084 --> 0.160132). Saving model ...
Epoch: 19 Training Loss: 0.665381 Validation Loss: 0.157457
Validation loss decreased (0.160132 --> 0.157457). Saving model ...
Epoch: 20 Training Loss: 0.653547 Validation Loss: 0.158073
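###Markdown
The printed log is the only record of the losses here; to plot the curves, one could accumulate them per epoch. The `train_hist`/`valid_hist` lists below are a hypothetical addition to the loop above, shown as a sketch:
###Code
train_hist, valid_hist = [], []
# inside the epoch loop, right after train_loss and valid_loss are averaged:
#     train_hist.append(train_loss)
#     valid_hist.append(valid_loss)
plt.plot(train_hist, label='training loss')
plt.plot(valid_hist, label='validation loss')
plt.xlabel('epoch')
plt.legend()
plt.show()
###Output
_____no_output_____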
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.800586
Test Accuracy of airplane: 77% (779/1000)
Test Accuracy of automobile: 81% (815/1000)
Test Accuracy of bird: 52% (527/1000)
Test Accuracy of cat: 56% (569/1000)
Test Accuracy of deer: 70% (705/1000)
Test Accuracy of dog: 55% (556/1000)
Test Accuracy of frog: 87% (870/1000)
Test Accuracy of horse: 72% (722/1000)
Test Accuracy of ship: 80% (800/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7184/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
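###Markdown
Rather than guarding every transfer with `if train_on_gpu:`, a device-agnostic pattern (a sketch, not what this notebook does) moves the model and each batch with `.to(device)`:
###Code
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
# inside the loops: data, target = data.to(device), target.to(device)
###Output
_____no_output_____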
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
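###Markdown
Another option (an assumption, not what this notebook uses) is Adam with a little weight decay, which adds L2 regularization on top of the dropout already in the model:
###Code
# alternative to the SGD cell above
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4)
###Output
_____no_output_____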
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
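###Markdown
Top-1 accuracy is strict for easily confused classes; a top-3 score (a sketch added here, not part of the original notebook) counts a sample as correct if the true label is among the three largest logits:
###Code
top3_correct, total = 0, 0
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, top3 = model(data).topk(3, dim=1)                       # indices of the 3 largest logits
        top3_correct += (top3 == target.view(-1, 1)).sum().item()  # the true label matches at most one of them
        total += target.size(0)
print('Top-3 Accuracy: %.1f%%' % (100.0 * top3_correct / total))
###Output
_____no_output_____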
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu() if train_on_gpu else images[idx])  # CUDA tensors must come back to the CPU before plotting
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import sys
try:
import torch
except:
import os
os.environ['TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD']='2000000000'
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!{sys.executable} -m pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision >/dev/null
import torch
import numpy as np
! wget 'https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/convolutional-neural-networks/cifar-cnn/model_augmented.pt' >/dev/null 2>&1
! wget 'https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/convolutional-neural-networks/cifar-cnn/model_cifar.pt' >/dev/null 2>&1
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
_____no_output_____
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
_____no_output_____
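###Markdown
Because the train/validation split relies on `np.random.shuffle` and the weights are initialized randomly, fixing the seeds (before the cell above) makes runs comparable; an optional addition:
###Code
np.random.seed(0)
torch.manual_seed(0)
if train_on_gpu:
    torch.cuda.manual_seed_all(0)
###Output
_____no_output_____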
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
_____no_output_____
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
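###Markdown
One optional variation to experiment with (a sketch only; it is not the configuration used by the training cell below): SGD with momentum plus a step decay of the learning rate, where `scheduler.step()` would be called once per epoch.
###Code
import torch.optim as optim

# hypothetical alternative optimizer setup -- for experimentation, not used below
optimizer_alt = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer_alt, step_size=10, gamma=0.5)  # halve the lr every 10 epochs
print(optimizer_alt)
###Output
_____no_output_____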
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
# average over the number of examples actually sampled (the train/valid samplers each cover only part of train_data)
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
_____no_output_____
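###Markdown
The loop above already checkpoints the best model; a further, optional safeguard against the overfitting mentioned earlier is early stopping. The class below is a minimal sketch (ours, not part of the original notebook) that could be stepped once per epoch.
###Code
class EarlyStopping:
    """Signal a stop when the validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = np.Inf
        self.bad_epochs = 0

    def step(self, valid_loss):
        if valid_loss < self.best:
            self.best = valid_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training

# inside the epoch loop above one would add:
# if stopper.step(valid_loss): break
stopper = EarlyStopping(patience=5)
###Output
_____no_output_____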
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
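###Markdown
As an optional cross-check of the per-class tally above (which relies on every test batch holding exactly `batch_size` images; that holds here because 10,000 is divisible by 20), the cell below recomputes the overall accuracy in a batch-size-agnostic way.
###Code
# batch-size-agnostic overall accuracy (extra check, not part of the original notebook)
correct_total = 0
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        correct_total += (model(data).argmax(dim=1) == target).sum().item()
print('Overall accuracy: {:.1f}%'.format(100.0 * correct_total / len(test_loader.dataset)))
###Output
_____no_output_____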
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# images stay tensors here; they are moved back to the CPU just before plotting below
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer grid spec (20/2 is a float)
imshow(images[idx].cpu())  # bring the image back to the CPU so matplotlib can display it
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
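###Markdown
An equivalent, slightly more idiomatic pattern is to keep a single `device` object and move the model and tensors with `.to(device)`; the rest of this notebook sticks with the `train_on_gpu` flag, so the cell below is only an illustration.
###Code
# illustration only -- the remaining cells keep using the train_on_gpu flag
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
# usage would be: model.to(device); data, target = data.to(device), target.to(device)
###Output
_____no_output_____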
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
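###Markdown
If you want to push the augmentation TODO further, one hedged option is the richer training transform sketched below (random crops with padding plus mild color jitter). It is not what produced the results recorded in this notebook, and note that augmentation is normally applied to the training set only, with a deterministic transform kept for the test set.
###Code
# sketch of a richer training transform (not used for the runs recorded below)
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # random 32x32 crops from a zero-padded image
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# deterministic transform for test/validation data
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____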
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer grid spec (20/2 is a float)
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
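###Markdown
A small, optional check on the model above: counting its trainable parameters gives a feel for its capacity relative to the 40,000 training images.
###Code
# total number of trainable parameters in the CNN above
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Trainable parameters: {:,}'.format(n_params))
###Output
_____no_output_____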
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for the **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
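###Markdown
Worth noting: `nn.CrossEntropyLoss` combines `LogSoftmax` and `NLLLoss`, which is why the network's `forward` returns raw scores (logits) rather than probabilities. The throwaway check below (ours, not part of the original notebook) just confirms the expected input shapes.
###Code
# CrossEntropyLoss expects raw (N, C) scores and (N,) integer class labels
fake_logits = torch.randn(4, 10)            # a fake batch of 4 score vectors over 10 classes
fake_targets = torch.tensor([1, 0, 9, 3])   # fake class labels
print(criterion(fake_logits, fake_targets)) # a single scalar loss
###Output
_____no_output_____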
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
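###Markdown
If you also wanted to resume training later (rather than just run inference), a fuller checkpoint could store the optimizer state as well. This is an optional sketch; the filename is hypothetical and the notebook itself only saves the model weights.
###Code
# optional, fuller checkpoint (hypothetical filename)
checkpoint = {
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
    'valid_loss_min': valid_loss_min,
}
torch.save(checkpoint, 'checkpoint_augmented.pt')
###Output
_____no_output_____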
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
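###Markdown
To see not just how often each class is missed but what it is confused with, an optional follow-up is the 10x10 confusion matrix below (rows are true classes, columns are predictions); this is extra analysis, not part of the original notebook.
###Code
# confusion matrix over the test set (rows = true class, columns = predicted class)
confusion = torch.zeros(10, 10, dtype=torch.long)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        preds = model(data).argmax(dim=1)
        for t, p in zip(target.view(-1), preds.view(-1)):
            confusion[t.long(), p.long()] += 1
print(confusion)
###Output
_____no_output_____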
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# images stay tensors here; they are moved back to the CPU just before plotting below
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer grid spec (20/2 is a float)
imshow(images[idx].cpu())  # bring the image back to the CPU so matplotlib can display it
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
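###Markdown
A side note on the `Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` call above: it simply maps pixel values into [-1, 1]. Normalizing with per-channel statistics estimated from the training set is another common choice; the numbers below are the often-quoted approximate CIFAR-10 values and should be treated as such.
###Code
# approximate, commonly quoted CIFAR-10 channel statistics (treat as rough values)
cifar_mean = (0.4914, 0.4822, 0.4465)
cifar_std = (0.2470, 0.2435, 0.2616)
normalize_cifar = transforms.Normalize(cifar_mean, cifar_std)
print(normalize_cifar)
###Output
_____no_output_____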
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer grid spec (20/2 is a float)
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
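###Markdown
A quick illustration of why the `model.train()` / `model.eval()` calls in the loops below matter: dropout is stochastic in training mode and disabled in eval mode, so repeated eval-mode passes over the same input agree. This check is ours, not part of the original notebook.
###Code
# dropout is disabled in eval mode, so two forward passes on the same input should match
x = torch.randn(1, 3, 32, 32)
if train_on_gpu:
    x = x.cuda()
model.eval()
with torch.no_grad():
    identical = torch.allclose(model(x), model(x))
print('eval-mode passes identical:', identical)
###Output
_____no_output_____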
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for the **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.144066 Validation Loss: 1.873401
Validation loss decreased (inf --> 1.873401). Saving model ...
Epoch: 2 Training Loss: 1.747764 Validation Loss: 1.570753
Validation loss decreased (1.873401 --> 1.570753). Saving model ...
Epoch: 3 Training Loss: 1.541049 Validation Loss: 1.431901
Validation loss decreased (1.570753 --> 1.431901). Saving model ...
Epoch: 4 Training Loss: 1.426174 Validation Loss: 1.368464
Validation loss decreased (1.431901 --> 1.368464). Saving model ...
Epoch: 5 Training Loss: 1.346291 Validation Loss: 1.257371
Validation loss decreased (1.368464 --> 1.257371). Saving model ...
Epoch: 6 Training Loss: 1.278071 Validation Loss: 1.194146
Validation loss decreased (1.257371 --> 1.194146). Saving model ...
Epoch: 7 Training Loss: 1.214636 Validation Loss: 1.132866
Validation loss decreased (1.194146 --> 1.132866). Saving model ...
Epoch: 8 Training Loss: 1.165992 Validation Loss: 1.120825
Validation loss decreased (1.132866 --> 1.120825). Saving model ...
Epoch: 9 Training Loss: 1.120550 Validation Loss: 1.038929
Validation loss decreased (1.120825 --> 1.038929). Saving model ...
Epoch: 10 Training Loss: 1.081401 Validation Loss: 1.002236
Validation loss decreased (1.038929 --> 1.002236). Saving model ...
Epoch: 11 Training Loss: 1.039682 Validation Loss: 1.004945
Epoch: 12 Training Loss: 1.003427 Validation Loss: 0.965627
Validation loss decreased (1.002236 --> 0.965627). Saving model ...
Epoch: 13 Training Loss: 0.975610 Validation Loss: 0.904002
Validation loss decreased (0.965627 --> 0.904002). Saving model ...
Epoch: 14 Training Loss: 0.942710 Validation Loss: 0.884734
Validation loss decreased (0.904002 --> 0.884734). Saving model ...
Epoch: 15 Training Loss: 0.917000 Validation Loss: 0.869566
Validation loss decreased (0.884734 --> 0.869566). Saving model ...
Epoch: 16 Training Loss: 0.885167 Validation Loss: 0.843114
Validation loss decreased (0.869566 --> 0.843114). Saving model ...
Epoch: 17 Training Loss: 0.863789 Validation Loss: 0.816864
Validation loss decreased (0.843114 --> 0.816864). Saving model ...
Epoch: 18 Training Loss: 0.844417 Validation Loss: 0.814849
Validation loss decreased (0.816864 --> 0.814849). Saving model ...
Epoch: 19 Training Loss: 0.823265 Validation Loss: 0.801741
Validation loss decreased (0.814849 --> 0.801741). Saving model ...
Epoch: 20 Training Loss: 0.806573 Validation Loss: 0.794189
Validation loss decreased (0.801741 --> 0.794189). Saving model ...
Epoch: 21 Training Loss: 0.786029 Validation Loss: 0.758231
Validation loss decreased (0.794189 --> 0.758231). Saving model ...
Epoch: 22 Training Loss: 0.775716 Validation Loss: 0.748606
Validation loss decreased (0.758231 --> 0.748606). Saving model ...
Epoch: 23 Training Loss: 0.763115 Validation Loss: 0.749253
Epoch: 24 Training Loss: 0.741318 Validation Loss: 0.721984
Validation loss decreased (0.748606 --> 0.721984). Saving model ...
Epoch: 25 Training Loss: 0.733436 Validation Loss: 0.731079
Epoch: 26 Training Loss: 0.722464 Validation Loss: 0.710618
Validation loss decreased (0.721984 --> 0.710618). Saving model ...
Epoch: 27 Training Loss: 0.710074 Validation Loss: 0.695963
Validation loss decreased (0.710618 --> 0.695963). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# images stay tensors here; they are moved back to the CPU just before plotting below
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer grid spec (20/2 is a float)
imshow(images[idx].cpu())  # bring the image back to the CPU so matplotlib can display it
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
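###Markdown
An optional follow-up to the visualization above: converting the logits to probabilities with a softmax shows how confident the model is about a given image. The snippet below reuses `output` from the previous cell and reports the top three classes for the first image.
###Code
# model confidence for the first image in the batch (extra cell, not part of the original notebook)
probs = F.softmax(output, dim=1)
top_p, top_class = probs.topk(3, dim=1)
print('top-3 probabilities:', top_p[0])
print('top-3 classes:', [classes[c.item()] for c in top_class[0]])
###Output
_____no_output_____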
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data\cifar-10-python.tar.gz
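###Markdown
On a CPU-only machine the data pipeline itself can become a bottleneck. One hedged tweak (values are illustrative, and whether extra worker processes help depends on the machine and notebook environment) is to let the `DataLoader` prefetch with worker processes and pin memory when a GPU is present.
###Code
# illustrative alternative loader settings (not used for the run recorded below)
fast_train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=batch_size, sampler=train_sampler,
    num_workers=2, pin_memory=train_on_gpu)
###Output
_____no_output_____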
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer grid spec avoids the Matplotlib deprecation warning shown below
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
C:\Users\furyx\AppData\Local\Temp/ipykernel_19180/1524662445.py:10: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---
Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting.
It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`.
For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for the **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.134911 Validation Loss: 1.887886
Validation loss decreased (inf --> 1.887886). Saving model ...
Epoch: 2 Training Loss: 1.747990 Validation Loss: 1.569019
Validation loss decreased (1.887886 --> 1.569019). Saving model ...
Epoch: 3 Training Loss: 1.526377 Validation Loss: 1.401778
Validation loss decreased (1.569019 --> 1.401778). Saving model ...
Epoch: 4 Training Loss: 1.413397 Validation Loss: 1.306932
Validation loss decreased (1.401778 --> 1.306932). Saving model ...
Epoch: 5 Training Loss: 1.330549 Validation Loss: 1.246391
Validation loss decreased (1.306932 --> 1.246391). Saving model ...
Epoch: 6 Training Loss: 1.259991 Validation Loss: 1.145304
Validation loss decreased (1.246391 --> 1.145304). Saving model ...
Epoch: 7 Training Loss: 1.194441 Validation Loss: 1.127781
Validation loss decreased (1.145304 --> 1.127781). Saving model ...
Epoch: 8 Training Loss: 1.142476 Validation Loss: 1.062931
Validation loss decreased (1.127781 --> 1.062931). Saving model ...
Epoch: 9 Training Loss: 1.101819 Validation Loss: 1.010611
Validation loss decreased (1.062931 --> 1.010611). Saving model ...
Epoch: 10 Training Loss: 1.059557 Validation Loss: 0.978834
Validation loss decreased (1.010611 --> 0.978834). Saving model ...
Epoch: 11 Training Loss: 1.022299 Validation Loss: 0.950946
Validation loss decreased (0.978834 --> 0.950946). Saving model ...
Epoch: 12 Training Loss: 0.990866 Validation Loss: 0.919233
Validation loss decreased (0.950946 --> 0.919233). Saving model ...
Epoch: 13 Training Loss: 0.962558 Validation Loss: 0.889107
Validation loss decreased (0.919233 --> 0.889107). Saving model ...
Epoch: 14 Training Loss: 0.942383 Validation Loss: 0.896670
Epoch: 15 Training Loss: 0.914245 Validation Loss: 0.868033
Validation loss decreased (0.889107 --> 0.868033). Saving model ...
Epoch: 16 Training Loss: 0.894737 Validation Loss: 0.853224
Validation loss decreased (0.868033 --> 0.853224). Saving model ...
Epoch: 17 Training Loss: 0.869837 Validation Loss: 0.833513
Validation loss decreased (0.853224 --> 0.833513). Saving model ...
Epoch: 18 Training Loss: 0.848971 Validation Loss: 0.814471
Validation loss decreased (0.833513 --> 0.814471). Saving model ...
Epoch: 19 Training Loss: 0.833432 Validation Loss: 0.789981
Validation loss decreased (0.814471 --> 0.789981). Saving model ...
Epoch: 20 Training Loss: 0.810129 Validation Loss: 0.800159
Epoch: 21 Training Loss: 0.793779 Validation Loss: 0.778651
Validation loss decreased (0.789981 --> 0.778651). Saving model ...
Epoch: 22 Training Loss: 0.781174 Validation Loss: 0.751621
Validation loss decreased (0.778651 --> 0.751621). Saving model ...
Epoch: 23 Training Loss: 0.772982 Validation Loss: 0.762542
Epoch: 24 Training Loss: 0.752103 Validation Loss: 0.760039
Epoch: 25 Training Loss: 0.741696 Validation Loss: 0.737961
Validation loss decreased (0.751621 --> 0.737961). Saving model ...
Epoch: 26 Training Loss: 0.723770 Validation Loss: 0.732093
Validation loss decreased (0.737961 --> 0.732093). Saving model ...
Epoch: 27 Training Loss: 0.714425 Validation Loss: 0.739650
Epoch: 28 Training Loss: 0.705487 Validation Loss: 0.713657
Validation loss decreased (0.732093 --> 0.713657). Saving model ...
Epoch: 29 Training Loss: 0.690834 Validation Loss: 0.731190
Epoch: 30 Training Loss: 0.676627 Validation Loss: 0.716757
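###Markdown
The loop above only prints the losses. A quick way to eyeball overfitting is to collect them per epoch and plot both curves; a minimal sketch (the `train_losses` / `valid_losses` lists are hypothetical additions, the loop would need to append its per-epoch values to them):
###Code
import matplotlib.pyplot as plt
# hypothetical per-epoch histories; the training loop above would need to append to these
train_losses, valid_losses = [], []
def plot_losses(train_history, valid_history):
    # overlay the two curves; a widening gap between them suggests overfitting
    plt.plot(train_history, label='training loss')
    plt.plot(valid_history, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('average loss')
    plt.legend()
    plt.show()
# plot_losses(train_losses, valid_losses)
###Output
_____no_output_____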
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained Network. Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.723716
Test Accuracy of airplane: 82% (827/1000)
Test Accuracy of automobile: 84% (843/1000)
Test Accuracy of bird: 63% (632/1000)
Test Accuracy of cat: 54% (541/1000)
Test Accuracy of deer: 67% (674/1000)
Test Accuracy of dog: 67% (678/1000)
Test Accuracy of frog: 83% (836/1000)
Test Accuracy of horse: 80% (804/1000)
Test Accuracy of ship: 86% (864/1000)
Test Accuracy of truck: 82% (826/1000)
Test Accuracy (Overall): 75% (7525/10000)
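###Markdown
The per-class accuracies above hide which classes get mixed up with which (cat vs. dog, for example). A small sketch of a confusion matrix, reusing `model`, `test_loader`, `train_on_gpu`, and `classes` from the cells above:
###Code
import numpy as np
import torch
# rows index the true class, columns the predicted class
confusion = np.zeros((10, 10), dtype=int)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.cpu().numpy(), pred.cpu().numpy()):
            confusion[t, p] += 1
# report the most frequent off-diagonal confusion for each class
for i, name in enumerate(classes):
    row = confusion[i].copy()
    row[i] = 0
    j = row.argmax()
    print('%10s is most often mistaken for %10s (%d times)' % (name, classes[j], row[j]))
###Output
_____no_output_____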
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])  # integer grid spec (a float like 20/2 is deprecated)
imshow(images[idx] if not train_on_gpu else images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
C:\Users\furyx\AppData\Local\Temp/ipykernel_19180/3500747034.py:19: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
###Markdown
Convolutional Neural Networks --- In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html). Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html). Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. Augmentation: in this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs. This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
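###Markdown
One thing worth noting about the cell above: the same flipping/rotating `transform` is applied to the test set as to the training set. A common variant (a sketch, not what this notebook does) keeps the augmentation on the training data only and adds a padded random crop:
###Code
import torchvision.transforms as transforms
# augmentation only for training data; RandomCrop with padding is another popular choice
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# test/validation data: just convert and normalize
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____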
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])  # integer grid spec (a float like 20/2 is deprecated)
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail. Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html). This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layer. To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
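###Markdown
A quick sanity check of the `(W−F+2P)/S+1` formula against the architecture just printed: every 3x3 convolution with padding 1 and stride 1 preserves the width, and every 2x2 max pool halves it, so three conv+pool stages take 32 down to 4 and the flattened size is 64 * 4 * 4 = 1024, matching `fc1`. A minimal sketch:
###Code
def conv_output_width(W, F, P, S):
    # spatial size after a convolution: (W - F + 2P) / S + 1
    return (W - F + 2 * P) // S + 1
w = 32
for _ in range(3):                              # conv1/conv2/conv3, each followed by 2x2 max pooling
    w = conv_output_width(w, F=3, P=1, S=1) // 2
print(w, 64 * w * w)                            # 4 1024
###Output
_____no_output_____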
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html). Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
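###Markdown
The TODO above invites experimenting with the optimizer. Two common variants to try (a sketch only; the run recorded below used plain SGD with lr=0.01, and a later run in this collection uses SGD with lr=0.001 and momentum=0.9):
###Code
import torch.optim as optim
from torch.optim import lr_scheduler
# variant 1: SGD with momentum
optimizer_sgd_m = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# variant 2: Adam, typically with a smaller learning rate
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
# optional: decay the learning rate by 10x every 10 epochs (call scheduler.step() once per epoch)
scheduler = lr_scheduler.StepLR(optimizer_adam, step_size=10, gamma=0.1)
###Output
_____no_output_____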
###Markdown
--- Train the Network. Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
    train_loss = train_loss/len(train_loader.sampler)  # average over the examples actually sampled, not the full dataset
    valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
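###Markdown
Saving only the `state_dict` is enough for this notebook, but a richer checkpoint that also stores the optimizer state and the best validation loss makes it possible to resume training later; a sketch (the file name `checkpoint_augmented.pt` is made up here):
###Code
# hypothetical richer checkpoint; not used anywhere else in this notebook
checkpoint = {
    'model_state': model.state_dict(),
    'optimizer_state': optimizer.state_dict(),
    'valid_loss_min': valid_loss_min,
}
torch.save(checkpoint, 'checkpoint_augmented.pt')
# restoring later:
ckpt = torch.load('checkpoint_augmented.pt')
model.load_state_dict(ckpt['model_state'])
optimizer.load_state_dict(ckpt['optimizer_state'])
###Output
_____no_output_____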
###Markdown
--- Test the Trained Network. Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu() if train_on_gpu else images[idx])  # move back to CPU before plotting
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks --- In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html). Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html). Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. Augmentation: in this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs. This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(30),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])  # integer grid spec (a float like 20/2 is deprecated)
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail. Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html). This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layer. To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
def calc_w_conv_out(conv, pool_stride = 1):
return (((conv["W"] - conv["F"] + (2*conv["P"])) / conv["S"]) + 1) / pool_stride
conv1_w_in = 32
conv1 = {"W": conv1_w_in, "D": 3, "K": 16, "F": 3, "P": 1, "S": 1}
conv1_w_out = calc_w_conv_out(conv1)
conv2 = {"W": conv1_w_out, "D": conv1["K"], "K": 32, "F": 3, "P": 1, "S": 1}
conv2_w_out = calc_w_conv_out(conv2, 2)
conv3 = {"W": conv2_w_out, "D": conv2["K"], "K": 64, "F": 3, "P": 1, "S": 1}
conv3_w_out = calc_w_conv_out(conv3, 2)
conv4 = {"W": conv3_w_out, "D": conv3["K"], "K": 128, "F": 3, "P": 1, "S": 1}
conv4_w_out = calc_w_conv_out(conv4, 2)
conv5 = {"W": conv4_w_out, "D": conv4["K"], "K": 256, "F": 3, "P": 1, "S": 1}
conv5_w_out = calc_w_conv_out(conv5, 2)
conv_features_out = conv5_w_out**2 * conv5["K"]
print(conv1_w_out, conv2_w_out, conv3_w_out, conv4_w_out, conv5_w_out, conv_features_out)
def make_nn_conv(conv):
return nn.Conv2d(conv["D"], conv["K"], conv["F"], padding=conv["P"], stride=conv["S"])
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer
self.conv1 = make_nn_conv(conv1)
self.conv2 = make_nn_conv(conv2)
self.conv3 = make_nn_conv(conv3)
self.conv4 = make_nn_conv(conv4)
self.conv5 = make_nn_conv(conv5)
self.fc1 = nn.Linear(int(conv_features_out), 512)
self.fc2 = nn.Linear(512, 256)
self.fc3 = nn.Linear(256, 128)
self.fc4 = nn.Linear(128, 10)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# add sequence of convolutional and max pooling layers
        # functional dropout needs training=self.training, otherwise it stays active in eval() mode
        x = F.dropout(F.relu(self.conv1(x)), 0.3, training=self.training)
        x = F.dropout(self.pool(F.relu(self.conv2(x))), 0.3, training=self.training)
        x = F.dropout(self.pool(F.relu(self.conv3(x))), 0.3, training=self.training)
        x = F.dropout(self.pool(F.relu(self.conv4(x))), 0.4, training=self.training)
        x = F.dropout(self.pool(F.relu(self.conv5(x))), 0.4, training=self.training)
        #x = x.flatten(start_dim=1)
        x = x.view(x.shape[0], -1)
        x = F.dropout(F.relu(self.fc1(x)), 0.3, training=self.training)
        x = F.dropout(F.relu(self.fc2(x)), 0.3, training=self.training)
        x = F.dropout(F.relu(self.fc3(x)), 0.3, training=self.training)
x = self.fc4(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
32.0 16.0 8.0 4.0 2.0 1024.0
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fc1): Linear(in_features=1024, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=256, bias=True)
(fc3): Linear(in_features=256, out_features=128, bias=True)
(fc4): Linear(in_features=128, out_features=10, bias=True)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
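###Markdown
With five convolutional and four fully-connected layers it is easy to lose track of how big this model is; a short sketch that lists the parameter shapes of the `model` defined above and counts the trainable parameters:
###Code
for name, p in model.named_parameters():
    print('%-14s %s' % (name, tuple(p.shape)))
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('trainable parameters: {:,}'.format(n_params))
###Output
_____no_output_____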
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html). Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the Network. Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
    train_loss = train_loss/len(train_loader.sampler)  # average over the examples actually sampled, not the full dataset
    valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.842369 Validation Loss: 0.460542
Validation loss decreased (inf --> 0.460542). Saving model ...
Epoch: 2 Training Loss: 1.841968 Validation Loss: 0.460415
Validation loss decreased (0.460542 --> 0.460415). Saving model ...
Epoch: 3 Training Loss: 1.841213 Validation Loss: 0.459980
Validation loss decreased (0.460415 --> 0.459980). Saving model ...
Epoch: 4 Training Loss: 1.818823 Validation Loss: 0.436779
Validation loss decreased (0.459980 --> 0.436779). Saving model ...
Epoch: 5 Training Loss: 1.645625 Validation Loss: 0.397554
Validation loss decreased (0.436779 --> 0.397554). Saving model ...
Epoch: 6 Training Loss: 1.535857 Validation Loss: 0.371830
Validation loss decreased (0.397554 --> 0.371830). Saving model ...
Epoch: 7 Training Loss: 1.443071 Validation Loss: 0.351403
Validation loss decreased (0.371830 --> 0.351403). Saving model ...
Epoch: 8 Training Loss: 1.341515 Validation Loss: 0.317446
Validation loss decreased (0.351403 --> 0.317446). Saving model ...
Epoch: 9 Training Loss: 1.256214 Validation Loss: 0.313534
Validation loss decreased (0.317446 --> 0.313534). Saving model ...
Epoch: 10 Training Loss: 1.186209 Validation Loss: 0.282538
Validation loss decreased (0.313534 --> 0.282538). Saving model ...
Epoch: 11 Training Loss: 1.123944 Validation Loss: 0.285028
Epoch: 12 Training Loss: 1.069461 Validation Loss: 0.268265
Validation loss decreased (0.282538 --> 0.268265). Saving model ...
Epoch: 13 Training Loss: 1.022509 Validation Loss: 0.259054
Validation loss decreased (0.268265 --> 0.259054). Saving model ...
Epoch: 14 Training Loss: 0.979384 Validation Loss: 0.240430
Validation loss decreased (0.259054 --> 0.240430). Saving model ...
Epoch: 15 Training Loss: 0.933381 Validation Loss: 0.230951
Validation loss decreased (0.240430 --> 0.230951). Saving model ...
Epoch: 16 Training Loss: 0.894887 Validation Loss: 0.224066
Validation loss decreased (0.230951 --> 0.224066). Saving model ...
Epoch: 17 Training Loss: 0.849832 Validation Loss: 0.219511
Validation loss decreased (0.224066 --> 0.219511). Saving model ...
Epoch: 18 Training Loss: 0.811087 Validation Loss: 0.209774
Validation loss decreased (0.219511 --> 0.209774). Saving model ...
Epoch: 19 Training Loss: 0.776547 Validation Loss: 0.200563
Validation loss decreased (0.209774 --> 0.200563). Saving model ...
Epoch: 20 Training Loss: 0.737993 Validation Loss: 0.188758
Validation loss decreased (0.200563 --> 0.188758). Saving model ...
Epoch: 21 Training Loss: 0.710253 Validation Loss: 0.184878
Validation loss decreased (0.188758 --> 0.184878). Saving model ...
Epoch: 22 Training Loss: 0.678808 Validation Loss: 0.184000
Validation loss decreased (0.184878 --> 0.184000). Saving model ...
Epoch: 23 Training Loss: 0.653889 Validation Loss: 0.179801
Validation loss decreased (0.184000 --> 0.179801). Saving model ...
Epoch: 24 Training Loss: 0.631791 Validation Loss: 0.171466
Validation loss decreased (0.179801 --> 0.171466). Saving model ...
Epoch: 25 Training Loss: 0.602468 Validation Loss: 0.168670
Validation loss decreased (0.171466 --> 0.168670). Saving model ...
Epoch: 26 Training Loss: 0.583024 Validation Loss: 0.167797
Validation loss decreased (0.168670 --> 0.167797). Saving model ...
Epoch: 27 Training Loss: 0.568920 Validation Loss: 0.168170
Epoch: 28 Training Loss: 0.548347 Validation Loss: 0.165003
Validation loss decreased (0.167797 --> 0.165003). Saving model ...
Epoch: 29 Training Loss: 0.530549 Validation Loss: 0.159624
Validation loss decreased (0.165003 --> 0.159624). Saving model ...
Epoch: 30 Training Loss: 0.515848 Validation Loss: 0.161675
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained Network. Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.797105
Test Accuracy of airplane: 72% (720/1000)
Test Accuracy of automobile: 86% (868/1000)
Test Accuracy of bird: 62% (622/1000)
Test Accuracy of cat: 53% (538/1000)
Test Accuracy of deer: 62% (626/1000)
Test Accuracy of dog: 59% (596/1000)
Test Accuracy of frog: 78% (788/1000)
Test Accuracy of horse: 77% (776/1000)
Test Accuracy of ship: 88% (884/1000)
Test Accuracy of truck: 81% (816/1000)
Test Accuracy (Overall): 72% (7234/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu() if train_on_gpu else images[idx])  # move back to CPU before plotting
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks --- In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html). Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html). Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. Augmentation: in this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs. This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
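###Markdown
The split above uses `SubsetRandomSampler` on shuffled indices. An alternative (a sketch, not what this notebook does) is `torch.utils.data.random_split`, which returns two Subset objects that plain shuffled DataLoaders can consume; note that both subsets still share the same augmenting `transform`, exactly as with the sampler approach:
###Code
from torch.utils.data import random_split, DataLoader
# carve off valid_size of the training data as a validation set
n_valid = int(valid_size * len(train_data))
train_subset, valid_subset = random_split(train_data, [len(train_data) - n_valid, n_valid])
train_loader_alt = DataLoader(train_subset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
valid_loader_alt = DataLoader(valid_subset, batch_size=batch_size, num_workers=num_workers)
###Output
_____no_output_____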
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 10, idx+1, xticks=[], yticks=[])  # integer grid spec (a float like 20/2 is deprecated)
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail. Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html). This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layer. To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer 1 (in_channels, out_channels, kernel_size, stride=1, padding=0)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer 2
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer 3
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.maxpool = nn.MaxPool2d(2, 2)
        self.fc1 = nn.Linear(64 * 4 * 4, 256)  # 64 channels x (32/2/2/2) x (32/2/2/2) = 64 * 4 * 4 features after three poolings
self.fc2 = nn.Linear(256, 256)
self.fc3 = nn.Linear(256, 10)
self.dropout = nn.Dropout(0.25) #with drouput 72% --> 74% (7404/10000)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.maxpool(F.relu(self.conv1(x)))
x = self.maxpool(F.relu(self.conv2(x)))
# flatten image output
#x = F.relu(self.conv3(x)) #71% without pooling final conv layer
x = self.maxpool(F.relu(self.conv3(x))) #with pooling final layer: Test Accuracy (Overall): 72% (7230/10000)
x = x.view(-1, 64 * 4 * 4)
x = self.dropout(x)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=256, bias=True)
(fc2): Linear(in_features=256, out_features=256, bias=True)
(fc3): Linear(in_features=256, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
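###Markdown
Instead of hand-computing the flattened size 64 * 4 * 4 for `fc1`, the conv/pool stack can be probed with a dummy input; a sketch using a throwaway CPU copy of the network:
###Code
# probe the conv/pool stack with a zero image to confirm the flattened feature size
probe = Net()
with torch.no_grad():
    x = torch.zeros(1, 3, 32, 32)
    x = probe.maxpool(F.relu(probe.conv1(x)))
    x = probe.maxpool(F.relu(probe.conv2(x)))
    x = probe.maxpool(F.relu(probe.conv3(x)))
print(x.shape)  # expected: torch.Size([1, 64, 4, 4]) -> 64 * 4 * 4 = 1024
###Output
_____no_output_____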
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html). Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
#optimizer = optim.SGD(model.parameters(), lr=0.01)
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____
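###Markdown
For reference, with `momentum=0.9` PyTorch's SGD keeps a running buffer of past gradients, roughly $v_t = \mu v_{t-1} + g_t$ followed by $\theta_t = \theta_{t-1} - \eta\, v_t$, with $\mu = 0.9$ and learning rate $\eta = 0.001$ here. The accumulated direction smooths noisy mini-batch gradients, which is one reason a smaller learning rate than in the earlier runs can still make steady progress.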
###Markdown
--- Train the Network. Remember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.286917 Validation Loss: 2.143078
Validation loss decreased (inf --> 2.143078). Saving model ...
Epoch: 2 Training Loss: 1.928334 Validation Loss: 1.696470
Validation loss decreased (2.143078 --> 1.696470). Saving model ...
Epoch: 3 Training Loss: 1.644718 Validation Loss: 1.522746
Validation loss decreased (1.696470 --> 1.522746). Saving model ...
Epoch: 4 Training Loss: 1.513010 Validation Loss: 1.405471
Validation loss decreased (1.522746 --> 1.405471). Saving model ...
Epoch: 5 Training Loss: 1.418064 Validation Loss: 1.347402
Validation loss decreased (1.405471 --> 1.347402). Saving model ...
Epoch: 6 Training Loss: 1.340760 Validation Loss: 1.261190
Validation loss decreased (1.347402 --> 1.261190). Saving model ...
Epoch: 7 Training Loss: 1.271703 Validation Loss: 1.196018
Validation loss decreased (1.261190 --> 1.196018). Saving model ...
Epoch: 8 Training Loss: 1.210313 Validation Loss: 1.143695
Validation loss decreased (1.196018 --> 1.143695). Saving model ...
Epoch: 9 Training Loss: 1.161010 Validation Loss: 1.101129
Validation loss decreased (1.143695 --> 1.101129). Saving model ...
Epoch: 10 Training Loss: 1.117192 Validation Loss: 1.054165
Validation loss decreased (1.101129 --> 1.054165). Saving model ...
Epoch: 11 Training Loss: 1.070019 Validation Loss: 1.012483
Validation loss decreased (1.054165 --> 1.012483). Saving model ...
Epoch: 12 Training Loss: 1.033801 Validation Loss: 0.958924
Validation loss decreased (1.012483 --> 0.958924). Saving model ...
Epoch: 13 Training Loss: 1.002177 Validation Loss: 0.932641
Validation loss decreased (0.958924 --> 0.932641). Saving model ...
Epoch: 14 Training Loss: 0.975769 Validation Loss: 0.936330
Epoch: 15 Training Loss: 0.949931 Validation Loss: 0.900279
Validation loss decreased (0.932641 --> 0.900279). Saving model ...
Epoch: 16 Training Loss: 0.919159 Validation Loss: 0.884030
Validation loss decreased (0.900279 --> 0.884030). Saving model ...
Epoch: 17 Training Loss: 0.902461 Validation Loss: 0.873163
Validation loss decreased (0.884030 --> 0.873163). Saving model ...
Epoch: 18 Training Loss: 0.877331 Validation Loss: 0.848397
Validation loss decreased (0.873163 --> 0.848397). Saving model ...
Epoch: 19 Training Loss: 0.862081 Validation Loss: 0.843278
Validation loss decreased (0.848397 --> 0.843278). Saving model ...
Epoch: 20 Training Loss: 0.843668 Validation Loss: 0.819001
Validation loss decreased (0.843278 --> 0.819001). Saving model ...
Epoch: 21 Training Loss: 0.823870 Validation Loss: 0.806820
Validation loss decreased (0.819001 --> 0.806820). Saving model ...
Epoch: 22 Training Loss: 0.807746 Validation Loss: 0.780897
Validation loss decreased (0.806820 --> 0.780897). Saving model ...
Epoch: 23 Training Loss: 0.799853 Validation Loss: 0.776063
Validation loss decreased (0.780897 --> 0.776063). Saving model ...
Epoch: 24 Training Loss: 0.780205 Validation Loss: 0.771039
Validation loss decreased (0.776063 --> 0.771039). Saving model ...
Epoch: 25 Training Loss: 0.762368 Validation Loss: 0.765014
Validation loss decreased (0.771039 --> 0.765014). Saving model ...
Epoch: 26 Training Loss: 0.755807 Validation Loss: 0.737047
Validation loss decreased (0.765014 --> 0.737047). Saving model ...
Epoch: 27 Training Loss: 0.743255 Validation Loss: 0.736304
Validation loss decreased (0.737047 --> 0.736304). Saving model ...
Epoch: 28 Training Loss: 0.733901 Validation Loss: 0.746849
Epoch: 29 Training Loss: 0.721818 Validation Loss: 0.734837
Validation loss decreased (0.736304 --> 0.734837). Saving model ...
Epoch: 30 Training Loss: 0.711724 Validation Loss: 0.725897
Validation loss decreased (0.734837 --> 0.725897). Saving model ...
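###Markdown
One optional refinement, not applied in the run above: the validation pass never uses the gradients it computes, so wrapping it in `torch.no_grad()` saves memory and time. A minimal sketch of just the validation portion, assuming the same `model`, `criterion`, `valid_loader` and `train_on_gpu` flag as above:
###Code
# sketch: gradient-free validation pass
model.eval()
valid_loss = 0.0
with torch.no_grad():  # no gradient bookkeeping needed for evaluation
    for data, target in valid_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        output = model(data)
        loss = criterion(output, target)
        valid_loss += loss.item() * data.size(0)
valid_loss = valid_loss / len(valid_loader.sampler)
print('Validation Loss: {:.6f}'.format(valid_loss))
###Output
_____no_output_____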
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
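###Markdown
If the checkpoint was saved on a GPU machine and later reloaded on a CPU-only machine, `torch.load` needs a `map_location` argument. A small sketch, assuming the same `model_augmented.pt` file as above:
###Code
# load the checkpoint onto whichever device is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
state_dict = torch.load('model_augmented.pt', map_location=device)
model.load_state_dict(state_dict)
###Output
_____no_output_____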
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.720266
Test Accuracy of airplane: 84% (842/1000)
Test Accuracy of automobile: 86% (860/1000)
Test Accuracy of bird: 69% (692/1000)
Test Accuracy of cat: 51% (514/1000)
Test Accuracy of deer: 70% (704/1000)
Test Accuracy of dog: 70% (700/1000)
Test Accuracy of frog: 78% (784/1000)
Test Accuracy of horse: 80% (809/1000)
Test Accuracy of ship: 82% (827/1000)
Test Accuracy of truck: 74% (748/1000)
Test Accuracy (Overall): 74% (7480/10000)
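###Markdown
One small caveat about the loop above: the per-class bookkeeping iterates over `range(batch_size)`, which only works because the 10,000 test images split evenly into batches of 20. A slightly more defensive version (a sketch of the inner loop only, meant as a drop-in replacement rather than a cell to run on its own) uses the real size of each batch:
###Code
# sketch: per-class bookkeeping that also tolerates a smaller final batch
# (drop-in replacement for the `for i in range(batch_size)` loop above)
for i in range(target.size(0)):
    label = target.data[i]
    class_correct[label] += correct[i].item()
    class_total[label] += 1
###Output
_____no_output_____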
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
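###Markdown
The titles above show only the winning class. If you also want a confidence score, the raw outputs can be turned into probabilities with a softmax; a small sketch using the `output` tensor from the previous cell:
###Code
# convert raw scores to class probabilities and inspect the first image in the batch
probs = F.softmax(output, dim=1)
top_prob, top_class = torch.max(probs, 1)
print('predicted: {} ({:.1f}% confident)'.format(
    classes[top_class[0].item()], 100 * top_prob[0].item()))
###Output
_____no_output_____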
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
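###Markdown
An equivalent and slightly more flexible pattern, shown here only as an alternative (the notebook keeps the `train_on_gpu` flag), is to build a single `torch.device` object and move the model and tensors with `.to(device)`:
###Code
# one device object that works on both CPU-only and GPU machines
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
# later one would call: model.to(device), data.to(device), target.to(device)
###Output
_____no_output_____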
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
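###Markdown
One thing worth noting: the same augmenting `transform` (random flips and rotations) is applied to the test set as well. Augmentation is usually reserved for training data, so a common variation, shown here only as a sketch and not what was run above, defines a plain transform for evaluation:
###Code
# non-augmented transform for evaluation data (sketch)
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
test_data_plain = datasets.CIFAR10('data', train=False,
                                   download=True, transform=test_transform)
###Output
_____no_output_____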
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
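###Markdown
As a quick sanity check of the `(W−F+2P)/S+1` formula before writing the model, the helper below (illustrative only) traces the spatial size through three 3x3, padding-1 convolutions, each followed by 2x2 max pooling, for a 32x32 input:
###Code
def conv_output_size(w, f, s=1, p=0):
    """Spatial output size of a convolution: (W - F + 2P)/S + 1."""
    return (w - f + 2 * p) // s + 1
w = 32
for layer in range(3):
    w = conv_output_size(w, f=3, s=1, p=1)  # 3x3 conv, stride 1, padding 1 keeps the size
    w = w // 2                              # 2x2 max pooling halves it
    print('after conv/pool block', layer + 1, ':', w, 'x', w)
# final feature map: 4 x 4 with 64 channels -> 64 * 4 * 4 inputs to the first linear layer
###Output
_____no_output_____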
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
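###Markdown
For the TODO above, two common variations to experiment with, shown only as sketches and not the configuration used in this run, are SGD with momentum (or Adam) and a learning-rate schedule that decays the step size as training progresses:
###Code
# alternative optimizer setups to compare against plain SGD (sketch)
optimizer_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
# decay the learning rate by a factor of 0.1 every 10 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer_momentum, step_size=10, gamma=0.1)
# inside the training loop one would call scheduler.step() once per epoch
###Output
_____no_output_____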
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
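###Markdown
For the augmentation TODO above, a couple of additional transforms that are commonly used on CIFAR-10 are a padded random crop and a mild color jitter. The pipeline below is a sketch of one possible combination, not the transform used in this run:
###Code
# a richer training-time augmentation pipeline (sketch)
augmented_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),                   # random 32x32 crop from a padded image
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # small photometric changes
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____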
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
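###Markdown
The printed summary shows the layers but not the model size; a one-liner (illustrative only) counts the trainable parameters, most of which sit in the first linear layer (1024 x 500 weights):
###Code
# count trainable parameters in the model defined above
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('trainable parameters: {:,}'.format(n_params))
###Output
_____no_output_____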
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
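###Markdown
To see at a glance whether the validation loss is still dropping, it helps to plot the two curves over the epochs. The sketch below assumes the per-epoch values were appended to two lists, `train_losses` and `valid_losses` (hypothetical names; the loop above does not collect them):
###Code
# plot training vs. validation loss per epoch
# (assumes train_losses and valid_losses were filled during training)
plt.plot(train_losses, label='training loss')
plt.plot(valid_losses, label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
###Output
_____no_output_____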
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx] if not train_on_gpu else images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
Files already downloaded and verified
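###Markdown
Because the train/validation split above relies on `np.random.shuffle`, the split (and the random augmentation) changes from run to run. If reproducibility matters, seeding the random number generators before building the loaders is a simple fix; this is a sketch only and was not done in this run:
###Code
# fix the random seeds for a reproducible split and augmentation (sketch)
seed = 42
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(seed)
###Output
_____no_output_____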
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
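# note: len(train_loader.dataset) is the full 50,000-image training set rather than the
# 40k/10k subsets actually iterated here, so both averages are understated by a constant
# factor; dividing by len(train_loader.sampler) / len(valid_loader.sampler) gives the exact means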
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.689244 Validation Loss: 0.362073
Validation loss decreased (inf --> 0.362073). Saving model ...
Epoch: 2 Training Loss: 1.358639 Validation Loss: 0.306495
Validation loss decreased (0.362073 --> 0.306495). Saving model ...
Epoch: 3 Training Loss: 1.205274 Validation Loss: 0.281470
Validation loss decreased (0.306495 --> 0.281470). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
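# note: looping over range(batch_size) assumes every test batch is full; that holds here
# because the 10,000 test images divide evenly by batch_size = 20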
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
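# note: `.next()` relies on the older DataLoader iterator API; on recent PyTorch versions
# the built-in `next(dataiter)` may be required instead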
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images.cpu()[idx])  # move images back to the CPU so matplotlib can read the tensor
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
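# with CIFAR-10's 50,000 training images and valid_size = 0.2, split == 10000,
# leaving 40,000 images for training and 10,000 for validation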
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit in the output (output_W) is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
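As a quick sanity check on this formula, here is a minimal sketch (the helper `conv_output_size` is illustrative and not part of the original notebook) that reproduces the examples above and the layer sizes used below:
###Code
def conv_output_size(W, F, S=1, P=0):
    # spatial size of a convolution/pooling output: (W - F + 2P) / S + 1
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 5, matching the example above
print(conv_output_size(7, 3, S=2, P=0))   # 3
print(conv_output_size(32, 3, S=1, P=1))  # 32: 3x3 convs with padding=1 preserve size
print(conv_output_size(32, 2, S=2, P=0))  # 16: each 2x2 max pool halves the size
###Output
_____no_output_____
###Markdown
With padding=1 the 3x3 convolutions below keep the 32x32 resolution, so only the three 2x2 pooling layers shrink the feature maps: 32 -> 16 -> 8 -> 4, which is where the 64 * 4 * 4 input size of the first linear layer comes from.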
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd fully-connected layer: raw class scores (no relu on the output)
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; see [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
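# note: np.Inf is an alias that newer NumPy releases drop; np.inf (or float('inf'))
# is the portable spelling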
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
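# (not in the original) when loading this checkpoint on a CPU-only machine, pass
# map_location='cpu', e.g. torch.load('model_augmented.pt', map_location='cpu')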
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images.cpu()[idx])  # move images back to the CPU so matplotlib can read the tensor
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomVerticalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
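# note: unlike horizontal flips, RandomVerticalFlip (added above) produces upside-down
# images that never occur in the CIFAR-10 test set, so it is a judgment call and may or
# may not help accuracy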
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
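# (not in the original) random augmentation is normally applied to training data only;
# a deterministic transform for the test images would look like:
# test_transform = transforms.Compose([
#     transforms.ToTensor(),
#     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
# ])
# test_data = datasets.CIFAR10('data', train=False, download=True, transform=test_transform)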
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit in the output (output_W) is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer
self.conv1 = nn.Conv2d(3, 16, 3, padding=1) #3 input channels, 16 output channels
# max pooling layer
self.pool = nn.MaxPool2d(2, 2) #16x16
self.conv2 = nn.Conv2d(16, 32, 3, padding=1) #16 input channels, 32 output channels, 8x8 pixels
self.conv3= nn.Conv2d(32, 64, 3, padding=1) #32 input channels, 64 output channels, 4x4 pixels
# Fully Connected Layers
self.fc1 = nn.Linear(64*4*4,512)
self.fc2 = nn.Linear(512,10)
self.dropout = nn.Dropout(0.2)
def forward(self, x): # = 32x32
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x))) #X = 16x16
x = self.pool(F.relu(self.conv2(x))) #X = 8x8
x = self.pool(F.relu(self.conv3(x))) #X = 4x4
x = x.view(-1, 64*4*4)
x = self.dropout(F.relu(self.fc1(x)))
#x = F.log_softmax(self.fc2(x),dim=1)
x = self.fc2(x)
return x
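# (illustrative, not in the original) two quick checks on this architecture:
# 1) nn.CrossEntropyLoss applies log-softmax internally, so omitting the commented
#    F.log_softmax line is correct here; pair log_softmax with nn.NLLLoss instead.
# 2) sum(p.numel() for p in Net().parameters()) gives roughly 554k parameters,
#    about 95% of them in fc1 (1024 * 512 weights).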
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fc1): Linear(in_features=1024, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=10, bias=True)
(dropout): Dropout(p=0.2)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; see [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 50
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.705561 Validation Loss: 0.393399
Validation loss decreased (inf --> 0.393399). Saving model ...
Epoch: 2 Training Loss: 1.417653 Validation Loss: 0.325706
Validation loss decreased (0.393399 --> 0.325706). Saving model ...
Epoch: 3 Training Loss: 1.277622 Validation Loss: 0.304192
Validation loss decreased (0.325706 --> 0.304192). Saving model ...
Epoch: 4 Training Loss: 1.192003 Validation Loss: 0.285021
Validation loss decreased (0.304192 --> 0.285021). Saving model ...
Epoch: 5 Training Loss: 1.133855 Validation Loss: 0.278743
Validation loss decreased (0.285021 --> 0.278743). Saving model ...
Epoch: 6 Training Loss: 1.087721 Validation Loss: 0.259492
Validation loss decreased (0.278743 --> 0.259492). Saving model ...
Epoch: 7 Training Loss: 1.046709 Validation Loss: 0.249731
Validation loss decreased (0.259492 --> 0.249731). Saving model ...
Epoch: 8 Training Loss: 1.004042 Validation Loss: 0.249075
Validation loss decreased (0.249731 --> 0.249075). Saving model ...
Epoch: 9 Training Loss: 0.973266 Validation Loss: 0.235240
Validation loss decreased (0.249075 --> 0.235240). Saving model ...
Epoch: 10 Training Loss: 0.937774 Validation Loss: 0.232650
Validation loss decreased (0.235240 --> 0.232650). Saving model ...
Epoch: 11 Training Loss: 0.911635 Validation Loss: 0.219721
Validation loss decreased (0.232650 --> 0.219721). Saving model ...
Epoch: 12 Training Loss: 0.884851 Validation Loss: 0.224340
Epoch: 13 Training Loss: 0.860926 Validation Loss: 0.213191
Validation loss decreased (0.219721 --> 0.213191). Saving model ...
Epoch: 14 Training Loss: 0.837862 Validation Loss: 0.215625
Epoch: 15 Training Loss: 0.818488 Validation Loss: 0.203467
Validation loss decreased (0.213191 --> 0.203467). Saving model ...
Epoch: 16 Training Loss: 0.794842 Validation Loss: 0.201628
Validation loss decreased (0.203467 --> 0.201628). Saving model ...
Epoch: 17 Training Loss: 0.776353 Validation Loss: 0.198363
Validation loss decreased (0.201628 --> 0.198363). Saving model ...
Epoch: 18 Training Loss: 0.761074 Validation Loss: 0.186252
Validation loss decreased (0.198363 --> 0.186252). Saving model ...
Epoch: 19 Training Loss: 0.741987 Validation Loss: 0.192610
Epoch: 20 Training Loss: 0.728791 Validation Loss: 0.187832
Epoch: 21 Training Loss: 0.712792 Validation Loss: 0.184954
Validation loss decreased (0.186252 --> 0.184954). Saving model ...
Epoch: 22 Training Loss: 0.704070 Validation Loss: 0.184263
Validation loss decreased (0.184954 --> 0.184263). Saving model ...
Epoch: 23 Training Loss: 0.690957 Validation Loss: 0.181119
Validation loss decreased (0.184263 --> 0.181119). Saving model ...
Epoch: 24 Training Loss: 0.674947 Validation Loss: 0.176754
Validation loss decreased (0.181119 --> 0.176754). Saving model ...
Epoch: 25 Training Loss: 0.665197 Validation Loss: 0.173738
Validation loss decreased (0.176754 --> 0.173738). Saving model ...
Epoch: 26 Training Loss: 0.652318 Validation Loss: 0.172632
Validation loss decreased (0.173738 --> 0.172632). Saving model ...
Epoch: 27 Training Loss: 0.642922 Validation Loss: 0.174727
Epoch: 28 Training Loss: 0.631356 Validation Loss: 0.174653
Epoch: 29 Training Loss: 0.621394 Validation Loss: 0.168315
Validation loss decreased (0.172632 --> 0.168315). Saving model ...
Epoch: 30 Training Loss: 0.608336 Validation Loss: 0.173049
Epoch: 31 Training Loss: 0.601890 Validation Loss: 0.167400
Validation loss decreased (0.168315 --> 0.167400). Saving model ...
Epoch: 32 Training Loss: 0.593295 Validation Loss: 0.170074
Epoch: 33 Training Loss: 0.584395 Validation Loss: 0.165341
Validation loss decreased (0.167400 --> 0.165341). Saving model ...
Epoch: 34 Training Loss: 0.578509 Validation Loss: 0.164727
Validation loss decreased (0.165341 --> 0.164727). Saving model ...
Epoch: 35 Training Loss: 0.572962 Validation Loss: 0.164940
Epoch: 36 Training Loss: 0.561873 Validation Loss: 0.161906
Validation loss decreased (0.164727 --> 0.161906). Saving model ...
Epoch: 37 Training Loss: 0.551951 Validation Loss: 0.164499
Epoch: 38 Training Loss: 0.550246 Validation Loss: 0.163041
Epoch: 39 Training Loss: 0.538028 Validation Loss: 0.160806
Validation loss decreased (0.161906 --> 0.160806). Saving model ...
Epoch: 40 Training Loss: 0.537840 Validation Loss: 0.159064
Validation loss decreased (0.160806 --> 0.159064). Saving model ...
Epoch: 41 Training Loss: 0.529307 Validation Loss: 0.158732
Validation loss decreased (0.159064 --> 0.158732). Saving model ...
Epoch: 42 Training Loss: 0.522430 Validation Loss: 0.157710
Validation loss decreased (0.158732 --> 0.157710). Saving model ...
Epoch: 43 Training Loss: 0.516851 Validation Loss: 0.158224
Epoch: 44 Training Loss: 0.510892 Validation Loss: 0.159551
Epoch: 45 Training Loss: 0.504441 Validation Loss: 0.159589
Epoch: 46 Training Loss: 0.495829 Validation Loss: 0.156817
Validation loss decreased (0.157710 --> 0.156817). Saving model ...
Epoch: 47 Training Loss: 0.492502 Validation Loss: 0.152350
Validation loss decreased (0.156817 --> 0.152350). Saving model ...
Epoch: 48 Training Loss: 0.485088 Validation Loss: 0.160447
Epoch: 49 Training Loss: 0.482283 Validation Loss: 0.159126
Epoch: 50 Training Loss: 0.473832 Validation Loss: 0.155532
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.764262
Test Accuracy of airplane: 77% (771/1000)
Test Accuracy of automobile: 83% (839/1000)
Test Accuracy of bird: 64% (649/1000)
Test Accuracy of cat: 57% (571/1000)
Test Accuracy of deer: 70% (704/1000)
Test Accuracy of dog: 64% (648/1000)
Test Accuracy of frog: 82% (824/1000)
Test Accuracy of horse: 73% (731/1000)
Test Accuracy of ship: 82% (825/1000)
Test Accuracy of truck: 81% (811/1000)
Test Accuracy (Overall): 73% (7373/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images.cpu()[idx])  # move images back to the CPU so matplotlib can read the tensor
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
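# (optional, not in the original) calling np.random.seed(...) before this shuffle
# makes the train/validation split reproducible across runs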
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit in the output (output_W) is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd fully-connected layer: raw class scores (no relu on the output)
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; see [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
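# note: len(train_loader.sampler) == 40000 and len(valid_loader.sampler) == 10000 here,
# so these are exact per-sample averages over the 80/20 split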
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
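###Markdown
The printout above is the only record the loop keeps of its progress. A small optional extension (sketch only): if the loop were modified to append each epoch's `train_loss` and `valid_loss` to two lists (called `train_history` and `valid_history` here, both hypothetical names), the curves could be plotted to spot overfitting at a glance.
###Code
# hedged sketch: plot per-epoch losses, assuming the loop above is extended to fill
# the (hypothetical) train_history and valid_history lists each epoch
import matplotlib.pyplot as plt
def plot_loss_curves(train_history, valid_history):
    epochs = range(1, len(train_history) + 1)
    plt.plot(epochs, train_history, label='training loss')
    plt.plot(epochs, valid_history, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()
###Output
_____no_output_____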
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
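###Markdown
The per-class accuracies above summarize how often each class is right, but not which classes get confused with which. An optional sketch (not part of the original notebook) that accumulates a 10x10 confusion matrix, using the `model`, `test_loader`, and `classes` already defined above:
###Code
# hedged sketch: accumulate a confusion matrix over the test set
confusion = np.zeros((len(classes), len(classes)), dtype=int)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        preds = model(data).argmax(dim=1)
        for t, p in zip(target.cpu().numpy(), preds.cpu().numpy()):
            confusion[t][p] += 1   # rows: true class, columns: predicted class
print(confusion)
###Output
_____no_output_____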
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
images = images.cpu()  # move images back to the CPU so they can be plotted below
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
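###Markdown
The scores in `output` above are unnormalized; applying a softmax turns them into class probabilities. A short optional sketch (not part of the original notebook) for inspecting the first image in the batch:
###Code
# hedged sketch: turn the raw scores for the first test image above into class probabilities
probabilities = torch.softmax(output[0], dim=0)
for cls, prob in zip(classes, probabilities):
    print('{:>10s}: {:.3f}'.format(cls, prob.item()))
###Output
_____no_output_____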
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
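###Markdown
One refinement worth trying (not what the cell above does): apply the flip/rotation augmentation only to the training set, and give the test set a plain normalize-only transform. A sketch of the two transforms; the datasets above would then be built with `transform=train_transform` and `transform=test_transform` respectively.
###Code
# hedged sketch: separate train/test transforms (the cell above reuses one transform for both)
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____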
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width, output_W, is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
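###Markdown
As a quick check of the `(W−F+2P)/S+1` formula quoted earlier (an added sketch, not part of the original notebook): each 3x3 convolution above uses padding 1 and stride 1, so it preserves the spatial size, and each 2x2 max pool halves it, giving 32 → 16 → 8 → 4 and hence the `64 * 4 * 4` input expected by `fc1`.
###Code
# hedged sketch: trace the spatial sizes used by the model printed above
def conv_output_size(W, F, P=0, S=1):
    """Output width of a convolution: (W - F + 2*P) // S + 1."""
    return (W - F + 2 * P) // S + 1
size = 32
for block in range(3):
    size = conv_output_size(size, F=3, P=1, S=1)  # 3x3 conv, padding 1, stride 1: size unchanged
    size = size // 2                              # 2x2 max pool: size halved
    print('after conv/pool block', block + 1, '->', size)
# prints 16, 8, 4 -- which is why fc1 expects 64 * 4 * 4 inputs
###Output
_____no_output_____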
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the **learning rate**, as this value determines whether and how quickly your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
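###Markdown
A small portability note (an added sketch, not part of the original notebook): if this checkpoint, saved while training on the GPU, is later loaded on a CPU-only machine, `torch.load` accepts a `map_location` argument so the tensors are remapped to the available device.
###Code
# hedged sketch: load the checkpoint onto whatever device is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
state_dict = torch.load('model_augmented.pt', map_location=device)
model.load_state_dict(state_dict)
###Output
_____no_output_____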
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %10s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.711101
Test Accuracy of airplane: 76% (767/1000)
Test Accuracy of automobile: 82% (822/1000)
Test Accuracy of bird: 55% (555/1000)
Test Accuracy of cat: 58% (583/1000)
Test Accuracy of deer: 79% (792/1000)
Test Accuracy of dog: 65% (652/1000)
Test Accuracy of frog: 81% (814/1000)
Test Accuracy of horse: 79% (794/1000)
Test Accuracy of ship: 90% (904/1000)
Test Accuracy of truck: 86% (869/1000)
Test Accuracy (Overall): 75% (7552/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
images = images.cpu()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width, output_W, is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
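###Markdown
For a rough sense of model size (an added sketch, not part of the original notebook), the trainable parameters of the network printed above can be tallied directly from `model.parameters()`.
###Code
# hedged sketch: count the trainable parameters of the model printed above
total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('trainable parameters: {:,}'.format(total_params))
###Output
_____no_output_____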
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the **learning rate**, as this value determines whether and how quickly your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
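###Markdown
Since the learning rate matters so much here, one optional extension (not used in this notebook) is a scheduler that decays it during training. The sketch below assumes the `optimizer` defined just above; the step size and decay factor are illustrative values, and the training loop would need a `scheduler.step()` call once per epoch.
###Code
# hedged sketch: decay the learning rate by 10x every 15 epochs (illustrative values)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)
# the epoch loop below would then call scheduler.step() once per epoch
###Output
_____no_output_____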
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
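###Markdown
The checkpointing above already keeps the best model; a natural extension (sketch only, with an illustrative `patience` value) is early stopping: end training once the validation loss has gone a fixed number of epochs without improving on the best value seen so far.
###Code
# hedged sketch: decide whether to stop early, given the list of per-epoch validation losses
def should_stop_early(valid_losses, patience=5):
    """True if the best of the last `patience` epochs never beat the best before them."""
    if len(valid_losses) <= patience:
        return False
    best_before = min(valid_losses[:-patience])
    return min(valid_losses[-patience:]) >= best_before
# illustrative call with made-up loss values
print(should_stop_early([0.30, 0.25, 0.24, 0.26, 0.27, 0.25], patience=3))
###Output
_____no_output_____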
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
images = images.cpu()  # move images back to the CPU so they can be plotted below
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
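###Markdown
An equivalent, more device-agnostic pattern (an added sketch, not how the rest of this notebook is written) keeps a single `torch.device` object and moves the model and tensors with `.to(device)` instead of branching on `train_on_gpu`.
###Code
# hedged sketch: device-agnostic alternative to the train_on_gpu flag used below
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('using device:', device)
# with this pattern one would write, e.g.:
#   model.to(device)
#   data, target = data.to(device), target.to(device)
###Output
_____no_output_____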
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width, output_W, is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
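###Markdown
A quick sanity check (an added sketch, not part of the original notebook): passing a dummy batch through the model printed above confirms both the flattened size expected by `fc1` and the 10-dimensional output.
###Code
# hedged sketch: run one dummy batch through the model to confirm its output shape
dummy = torch.zeros(2, 3, 32, 32)
if train_on_gpu:
    dummy = dummy.cuda()
with torch.no_grad():
    out = model(dummy)
print(out.shape)  # expected: torch.Size([2, 10])
###Output
_____no_output_____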
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the **learning rate**, as this value determines whether and how quickly your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---

Test the Trained Network

Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())  # bring the image back to the CPU for plotting if it was moved to the GPU
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks

---

In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.

Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)

Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
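An alternative to sprinkling `if train_on_gpu:` checks through the code is the device-agnostic pattern sketched below. It is equivalent in effect and only an optional refactor, not what the cells in this notebook actually use.

```python
import torch

# choose the device once, then move the model and each batch with .to(device)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)

# e.g. later:  model = model.to(device)
#              data, target = data.to(device), target.to(device)
```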
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
---

Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)

Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.

Augmentation

In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`; you can learn about all the transforms that are used to pre-process and augment data [here](https://pytorch.org/docs/stable/torchvision/transforms.html).

TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html), add more augmentation transforms, and see how your model performs.

This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship no matter which direction it is facing). It's recommended that you choose one or two transforms.
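For the TODO above, a couple of extra transforms commonly used on CIFAR-10 are sketched below; the crop padding and jitter strengths are illustrative assumptions. The test set is typically left un-augmented so evaluation stays deterministic (the cell below applies the same augmented transform to both sets).

```python
import torchvision.transforms as transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),                  # random shifts via padded crops
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # mild photometric noise
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# evaluation transform without any random augmentation
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
```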
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data/cifar-10-python.tar.gz
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail

Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---

Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)

This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:

* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.

A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.

TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.

The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.

Output volume for a convolutional layer

To compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):

> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit along the output width is `(W − F + 2P)/S + 1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
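A tiny helper (not part of the original notebook) that simply evaluates that formula is sketched below; it reproduces the 7x7 examples from the quote and the size-preserving behaviour of the 3x3, padding-1 convolutions used in the next cell.

```python
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a convolution: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

assert conv_output_size(7, 3, S=1, P=0) == 5    # 7x7 input, 3x3 filter, stride 1, pad 0
assert conv_output_size(7, 3, S=2, P=0) == 3    # same filter with stride 2
assert conv_output_size(32, 3, S=1, P=1) == 32  # 3x3 conv with padding 1 preserves 32x32
```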
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
_____no_output_____
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)

Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for the **learning rate**, as this value determines how your model converges to a small error.

TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
---

Train the Network

Remember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
_____no_output_____
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---

Test the Trained Network

Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
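Beyond the per-class accuracies computed in the next cell, a confusion matrix shows which classes get mistaken for which (e.g. cats vs. dogs). The sketch below is an optional addition that reuses the `model`, `test_loader`, and `train_on_gpu` already defined above.

```python
import torch

confusion = torch.zeros(10, 10, dtype=torch.long)  # rows: true class, columns: predicted class

model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        preds = model(data).argmax(dim=1)
        for t, p in zip(target.view(-1), preds.view(-1)):
            confusion[t.long(), p.long()] += 1

print(confusion)
```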
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())  # bring the image back to the CPU for plotting if it was moved to the GPU
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks

---

In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.

Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)

Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
---

Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)

Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.

Augmentation

In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`; you can learn about all the transforms that are used to pre-process and augment data [here](https://pytorch.org/docs/stable/torchvision/transforms.html).

TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html), add more augmentation transforms, and see how your model performs.

This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail

Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---

Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)

This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:

* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.

A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.

TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.

The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.

Output volume for a convolutional layer

To compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):

> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit along the output width is `(W − F + 2P)/S + 1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
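To double-check the `64 * 4 * 4` input size of the first linear layer, you can push a dummy batch through the convolution/pooling stack, as sketched below. This assumes the `Net` class and the `F` alias defined in the next cell.

```python
import torch

net = Net()                          # the class from the cell below
x = torch.randn(1, 3, 32, 32)        # one fake CIFAR-10 image
x = net.pool(F.relu(net.conv1(x)))   # -> [1, 16, 16, 16]
x = net.pool(F.relu(net.conv2(x)))   # -> [1, 32, 8, 8]
x = net.pool(F.relu(net.conv3(x)))   # -> [1, 64, 4, 4]
print(x.shape)                       # 64 * 4 * 4 = 1024 features feed fc1
```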
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)

Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for the **learning rate**, as this value determines how your model converges to a small error.

TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
---

Train the Network

Remember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
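One way to see the overfitting point directly is to record the per-epoch averages and plot them after training. The helper below is an optional sketch; collect the values in two lists inside the loop in the next cell and call it once training ends.

```python
import matplotlib.pyplot as plt

def plot_losses(train_losses, valid_losses):
    """Plot the recorded per-epoch training/validation losses."""
    plt.plot(train_losses, label='training loss')
    plt.plot(valid_losses, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('average loss')
    plt.legend()
    plt.show()

# inside the epoch loop:  train_losses.append(train_loss); valid_losses.append(valid_loss)
# after training:         plot_losses(train_losses, valid_losses)
```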
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---

Test the Trained Network

Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu())  # bring the image back to the CPU for plotting if it was moved to the GPU
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks

---

In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.

Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)

Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
---

Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)

Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.

Augmentation

In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`; you can learn about all the transforms that are used to pre-process and augment data [here](https://pytorch.org/docs/stable/torchvision/transforms.html).

TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html), add more augmentation transforms, and see how your model performs.

This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship no matter which direction it is facing). It's recommended that you choose one or two transforms.
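The `(0.5, 0.5, 0.5)` normalization used below is a convenient default rather than the true channel statistics of CIFAR-10. If you want data-driven values, they can be estimated as in the sketch below; this is an optional extra, not something the original cell does.

```python
import torch
from torchvision import datasets, transforms

plain = datasets.CIFAR10('data', train=True, download=True,
                         transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(plain, batch_size=1000)

n = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in loader:
    pixels = images.permute(1, 0, 2, 3).reshape(3, -1)  # 3 x (batch * 32 * 32)
    n += pixels.size(1)
    channel_sum += pixels.sum(dim=1)
    channel_sq_sum += (pixels ** 2).sum(dim=1)

mean = channel_sum / n
std = (channel_sq_sum / n - mean ** 2).sqrt()
print('channel means:', mean, 'channel stds:', std)
```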
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 64
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail

Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---

Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)

This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:

* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.

A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.

TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.

The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.

Output volume for a convolutional layer

To compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):

> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit along the output width is `(W − F + 2P)/S + 1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
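When comparing candidate architectures, it also helps to know how many trainable parameters each one has. The one-liner below is an optional sketch; for the three-convolution network defined in the next cell it comes out to roughly 0.54M parameters, most of them in `fc1`.

```python
def count_parameters(model):
    """Number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# example (after the Net class below is defined):
#     count_parameters(Net())
```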
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)

Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for the **learning rate**, as this value determines how your model converges to a small error.

TODO: Define the loss and optimizer and see how these choices change the loss over time.
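One detail worth keeping in mind with this choice: `nn.CrossEntropyLoss` combines a log-softmax with a negative log-likelihood loss, so the network should return raw scores (logits) rather than probabilities, which is why `forward()` above ends with `fc2` and no softmax. A tiny self-contained check of the call signature:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 10)           # fake batch of 4 score vectors over 10 classes
targets = torch.tensor([3, 0, 7, 1])  # ground-truth class indices
print(criterion(logits, targets))     # a single scalar loss
```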
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
# optimizer = optim.SGD(model.parameters(), lr=0.01)
optimizer = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
---

Train the Network

Remember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.588319 Validation Loss: 1.344885
Validation loss decreased (inf --> 1.344885). Saving model ...
Epoch: 2 Training Loss: 1.270793 Validation Loss: 1.147840
Validation loss decreased (1.344885 --> 1.147840). Saving model ...
Epoch: 3 Training Loss: 1.127659 Validation Loss: 1.026276
Validation loss decreased (1.147840 --> 1.026276). Saving model ...
Epoch: 4 Training Loss: 1.035544 Validation Loss: 0.998408
Validation loss decreased (1.026276 --> 0.998408). Saving model ...
Epoch: 5 Training Loss: 0.963458 Validation Loss: 0.933033
Validation loss decreased (0.998408 --> 0.933033). Saving model ...
Epoch: 6 Training Loss: 0.919769 Validation Loss: 0.868258
Validation loss decreased (0.933033 --> 0.868258). Saving model ...
Epoch: 7 Training Loss: 0.886504 Validation Loss: 0.860258
Validation loss decreased (0.868258 --> 0.860258). Saving model ...
Epoch: 8 Training Loss: 0.851388 Validation Loss: 0.828989
Validation loss decreased (0.860258 --> 0.828989). Saving model ...
Epoch: 9 Training Loss: 0.825224 Validation Loss: 0.818231
Validation loss decreased (0.828989 --> 0.818231). Saving model ...
Epoch: 10 Training Loss: 0.802818 Validation Loss: 0.805133
Validation loss decreased (0.818231 --> 0.805133). Saving model ...
Epoch: 11 Training Loss: 0.775919 Validation Loss: 0.789716
Validation loss decreased (0.805133 --> 0.789716). Saving model ...
Epoch: 12 Training Loss: 0.762815 Validation Loss: 0.756799
Validation loss decreased (0.789716 --> 0.756799). Saving model ...
Epoch: 13 Training Loss: 0.739908 Validation Loss: 0.757600
Epoch: 14 Training Loss: 0.720791 Validation Loss: 0.734654
Validation loss decreased (0.756799 --> 0.734654). Saving model ...
Epoch: 15 Training Loss: 0.711391 Validation Loss: 0.726107
Validation loss decreased (0.734654 --> 0.726107). Saving model ...
Epoch: 16 Training Loss: 0.696827 Validation Loss: 0.731682
Epoch: 17 Training Loss: 0.686691 Validation Loss: 0.708934
Validation loss decreased (0.726107 --> 0.708934). Saving model ...
Epoch: 18 Training Loss: 0.675223 Validation Loss: 0.718239
Epoch: 19 Training Loss: 0.658576 Validation Loss: 0.725064
Epoch: 20 Training Loss: 0.658114 Validation Loss: 0.711730
Epoch: 21 Training Loss: 0.647384 Validation Loss: 0.693786
Validation loss decreased (0.708934 --> 0.693786). Saving model ...
Epoch: 22 Training Loss: 0.637157 Validation Loss: 0.689330
Validation loss decreased (0.693786 --> 0.689330). Saving model ...
Epoch: 23 Training Loss: 0.627697 Validation Loss: 0.679524
Validation loss decreased (0.689330 --> 0.679524). Saving model ...
Epoch: 24 Training Loss: 0.616331 Validation Loss: 0.678109
Validation loss decreased (0.679524 --> 0.678109). Saving model ...
Epoch: 25 Training Loss: 0.612421 Validation Loss: 0.702835
Epoch: 26 Training Loss: 0.615539 Validation Loss: 0.679592
Epoch: 27 Training Loss: 0.595864 Validation Loss: 0.682066
Epoch: 28 Training Loss: 0.596186 Validation Loss: 0.693927
Epoch: 29 Training Loss: 0.583913 Validation Loss: 0.664008
Validation loss decreased (0.678109 --> 0.664008). Saving model ...
Epoch: 30 Training Loss: 0.582250 Validation Loss: 0.681649
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---

Test the Trained Network

Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
    for i in range(target.size(0)):  # the last batch can be smaller than batch_size (10000 is not divisible by 64)
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
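A device-agnostic variant of the same check, using `torch.device`, is sketched below as an optional alternative; the rest of the notebook keeps the `train_on_gpu` flag.
###Code
import torch

# optional, device-agnostic pattern: pick the device once, then move
# models and tensors with .to(device) instead of branching on a flag
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
###Output
_____no_output_____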
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
_____no_output_____
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
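For the TODO above, a slightly richer pipeline one might experiment with is sketched below (RandomCrop with padding and mild ColorJitter are common CIFAR-10 choices; this is an illustrative suggestion, not the transform actually used in this notebook). In practice the test set is usually given a separate, augmentation-free transform.
###Code
import torchvision.transforms as transforms

# candidate training-time augmentations to experiment with
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # random 32x32 crop from a zero-padded image
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# evaluation data is typically left un-augmented
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____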
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
_____no_output_____
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width, output_W, is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
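As a quick check on this formula, the short helper below (an illustrative sketch, not part of the original notebook) reproduces the worked example and the 32 -> 16 -> 8 -> 4 spatial path used by the model in the next cell: each 3x3, stride-1, padding-1 convolution preserves the spatial size, and each 2x2 max-pool halves it.
###Code
def conv_output_size(W, F, S=1, P=0):
    # spatial output size of a convolutional layer: (W - F + 2P)/S + 1
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 5, matching the worked example above
print(conv_output_size(7, 3, S=2, P=0))   # 3, the stride-2 case
print(conv_output_size(32, 3, S=1, P=1))  # 32: the 3x3, padding-1 convs keep H and W unchanged
print(32 // 2 // 2 // 2)                  # 4: three 2x2 max-pools yield the 64 * 4 * 4 flatten size
###Output
_____no_output_____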
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
_____no_output_____
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
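One detail worth making explicit: `nn.CrossEntropyLoss` expects raw logits (which is why the model's `forward` above applies no final softmax), because it combines log-softmax and negative log-likelihood internally. A small sanity-check sketch:
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

# CrossEntropyLoss on raw logits == NLLLoss on log-softmaxed logits
logits = torch.randn(4, 10)             # a fake batch of 4 score vectors over 10 classes
target = torch.tensor([3, 0, 7, 1])     # fake class labels

ce = nn.CrossEntropyLoss()(logits, target)
nll = F.nll_loss(F.log_softmax(logits, dim=1), target)
print(torch.allclose(ce, nll))          # True
###Output
_____no_output_____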
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
_____no_output_____
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width, output_W, is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
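###Markdown
As a quick sanity check on model size, the snippet below (an added sketch, not part of the original notebook) counts the trainable parameters of the `model` printed above; for this architecture it should come out to roughly 541,000, about 95% of them in `fc1`.
###Code
# total number of trainable parameters in the network printed above
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('{:,} trainable parameters'.format(n_params))
###Output
_____no_output_____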
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
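One simple way to watch that trend is to record the per-epoch averages and plot both curves after training; the sketch below assumes two hypothetical lists, `train_losses` and `valid_losses`, appended at the end of each epoch in the loop that follows (they are not part of the original cell).
###Code
import matplotlib.pyplot as plt

# hypothetical containers -- append train_loss and valid_loss to these
# at the end of each epoch inside the training loop below
train_losses, valid_losses = [], []

def plot_losses(train_losses, valid_losses):
    # overlay both curves so a widening gap (possible overfitting) is easy to spot
    plt.plot(train_losses, label='training loss')
    plt.plot(valid_losses, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('average loss')
    plt.legend()
    plt.show()
###Output
_____no_output_____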
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx] if not train_on_gpu else images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width, output_W, is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
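For the TODO above, two common alternatives worth comparing against plain SGD are SGD with momentum (optionally with weight decay) and Adam; the sketch below is only a suggestion and assumes the `model` defined in the previous cell.
###Code
import torch.optim as optim

# candidate optimizers to compare -- keep only one active at a time
optimizer_sgd_momentum = optim.SGD(model.parameters(), lr=0.01,
                                   momentum=0.9, weight_decay=5e-4)
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____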
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
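Since this run saved the checkpoint from a GPU, loading it later on a CPU-only machine would need a `map_location` argument; a small optional sketch:
###Code
import torch

# load a GPU-saved checkpoint onto whatever device is currently available;
# 'model_augmented.pt' is the file written by the training loop above
state_dict = torch.load('model_augmented.pt',
                        map_location=torch.device('cuda' if torch.cuda.is_available() else 'cpu'))
# model.load_state_dict(state_dict)  # same call as in the next cell
###Output
_____no_output_____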
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
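Since no parameters are updated during evaluation, the forward passes in the loop below could also be wrapped in `torch.no_grad()` to skip gradient bookkeeping and save memory; a minimal sketch of that pattern (not how the original cell is written):
###Code
import torch

def predict(model, data):
    # evaluation-only forward pass: no autograd graph is built inside no_grad()
    model.eval()
    with torch.no_grad():
        output = model(data)
    return output.argmax(dim=1)   # predicted class indices
###Output
_____no_output_____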
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx] if not train_on_gpu else images[idx].cpu())  # move back to CPU for plotting
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Table of Contents
* 1 Convolutional Neural Networks
* 1.0.1 Test for CUDA
* 1.1 Load and Augment the Data
* 1.1.0.1 Augmentation
* 1.1.0.2 TODO: Look at the transformation documentation; add more augmentation transforms, and see how your model performs.
* 1.1.1 Visualize a Batch of Training Data
* 1.1.2 View an Image in More Detail
* 1.2 Define the Network Architecture
* 1.2.0.1 TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
* 1.2.0.2 Output volume for a convolutional layer
* 1.2.1 Specify Loss Function and Optimizer
* 1.2.1.1 TODO: Define the loss and optimizer and see how these choices change the loss over time.
* 1.3 Train the Network
* 1.3.1 Load the Model with the Lowest Validation Loss
* 1.4 Test the Trained Network
* 1.4.1 Visualize Sample Test Results
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
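Before the model definition, it can help to verify the flattened feature size with the formula above. The small calculator below is an added sanity check (not part of the original notebook); it applies `(W - F + 2P)/S + 1` to the three 3x3/padding-1 convolutions and 2x2 max-pools used in the next cell.
###Code
def conv_out(w, f, s=1, p=0):
    """Spatial output size from the formula (W - F + 2P) / S + 1."""
    return (w - f + 2 * p) // s + 1

w = 32                              # CIFAR-10 images are 32x32
for stage in range(3):              # three conv + max-pool stages in the model below
    w = conv_out(w, f=3, s=1, p=1)  # 3x3 convolution with padding 1 keeps the size
    w = conv_out(w, f=2, s=2, p=0)  # 2x2 max-pool with stride 2 halves it
print(w, '->', 64 * w * w)          # 4 -> 1024, the flattened feature size
###Output
_____no_output_____
###Markdown
The result, 64 * 4 * 4 = 1024, matches the `in_features` of `fc1` in the model printed below.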
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
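For comparison with the plain SGD used in the next cell, here are two common alternatives. This is only a sketch: the settings are typical textbook defaults, not values that were tried in this notebook.
###Code
import torch.optim as optim

# illustrative alternatives to the plain SGD used below (settings are common defaults, not tuned)
sgd_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # momentum smooths the updates
adam = optim.Adam(model.parameters(), lr=0.001)                      # adaptive per-parameter step sizes
###Output
_____no_output_____
###Markdown
Whichever optimizer you choose, watch the first few epochs of validation loss before committing to a long run; the cell below shows the configuration that was actually used.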
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
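The loop below already checkpoints the model with the lowest validation loss. If you also want training to stop once validation loss stalls, a small patience counter is enough; the helper below is an added sketch and was not used in the recorded run.
###Code
class EarlyStopping:
    """Signal a stop when validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = float('inf')
        self.bad_epochs = 0

    def step(self, valid_loss):
        if valid_loss < self.best:
            self.best = valid_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True means: stop training

stopper = EarlyStopping(patience=5)
# inside the epoch loop, after valid_loss is computed: if stopper.step(valid_loss): break
###Output
_____no_output_____
###Markdown
With only 30 epochs and a still-decreasing validation loss this would not trigger here, but it is cheap insurance for longer runs.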
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx] if not train_on_gpu else images[idx].cpu())  # move back to CPU before converting to numpy
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
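The (0.5, 0.5, 0.5) normalization used below is a reasonable default. If you prefer statistics measured on the training set, they can be computed from an un-normalized copy of the data; the sketch below is an optional addition, and its printed values are not quoted here because they were not part of the original run.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
import torch

# load the training set with no normalization, just ToTensor (values in [0, 1])
raw_train = datasets.CIFAR10('data', train=True, download=True,
                             transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(raw_train, batch_size=1000, num_workers=0)

n = 0
mean = torch.zeros(3)
sq_mean = torch.zeros(3)
for images, _ in loader:
    b = images.size(0)
    images = images.view(b, 3, -1)            # (batch, channel, pixels)
    mean += images.mean(dim=2).sum(dim=0)     # sum of per-image channel means
    sq_mean += (images ** 2).mean(dim=2).sum(dim=0)
    n += b
mean /= n
std = (sq_mean / n - mean ** 2).sqrt()        # std = sqrt(E[x^2] - E[x]^2)
print('per-channel mean:', mean, 'std:', std)
###Output
_____no_output_____
###Markdown
Measured statistics mostly matter when comparing against pretrained models; for a network trained from scratch the simple 0.5/0.5 scaling in the next cell works fine.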
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 1
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
/home/bmendonca/dev_tools/miniconda3/envs/deep-learning/lib/python3.6/site-packages/ipykernel_launcher.py:10: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
# Remove the CWD from sys.path while we load stuff.
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
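A quick empirical check of the layer-size comments in the next cell: push a dummy image through stand-alone copies of the same conv/pool layers and watch the shapes. This is an added illustration, not code from the original notebook.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

# stand-alone copies of the conv/pool layers used by Net, just to trace tensor shapes
conv1 = nn.Conv2d(3, 16, 3, padding=1)
conv2 = nn.Conv2d(16, 32, 3, padding=1)
conv3 = nn.Conv2d(32, 64, 3, padding=1)
pool = nn.MaxPool2d(2, 2)

x = torch.zeros(1, 3, 32, 32)      # one dummy CIFAR-10 image
for conv in (conv1, conv2, conv3):
    x = pool(F.relu(conv(x)))
    print(x.shape)                 # [1,16,16,16] -> [1,32,8,8] -> [1,64,4,4]
###Output
_____no_output_____
###Markdown
The final `[1, 64, 4, 4]` confirms the `64 * 4 * 4` flatten size used by `fc1` in the model defined below.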
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
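Besides the optimizer choice, the learning rate can also be decayed during training. The sketch below uses a standard scheduler from `torch.optim.lr_scheduler`; it mirrors the SGD settings of the next cell but was not part of the recorded run, and the schedule values are illustrative.
###Code
import torch.optim as optim
from torch.optim import lr_scheduler

sgd = optim.SGD(model.parameters(), lr=0.01)                   # mirrors the optimizer below
scheduler = lr_scheduler.StepLR(sgd, step_size=10, gamma=0.5)  # halve the lr every 10 epochs

# in the training loop, call scheduler.step() once per epoch
###Output
_____no_output_____
###Markdown
`ReduceLROnPlateau` is another natural fit here, since the training loop already tracks validation loss; the cell below sticks to a fixed learning rate.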
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.094815 Validation Loss: 1.828428
Validation loss decreased (inf --> 1.828428). Saving model ...
Epoch: 2 Training Loss: 1.713838 Validation Loss: 1.529430
Validation loss decreased (1.828428 --> 1.529430). Saving model ...
Epoch: 3 Training Loss: 1.525942 Validation Loss: 1.451944
Validation loss decreased (1.529430 --> 1.451944). Saving model ...
Epoch: 4 Training Loss: 1.424111 Validation Loss: 1.318284
Validation loss decreased (1.451944 --> 1.318284). Saving model ...
Epoch: 5 Training Loss: 1.342020 Validation Loss: 1.303242
Validation loss decreased (1.318284 --> 1.303242). Saving model ...
Epoch: 6 Training Loss: 1.270879 Validation Loss: 1.186256
Validation loss decreased (1.303242 --> 1.186256). Saving model ...
Epoch: 7 Training Loss: 1.204774 Validation Loss: 1.167558
Validation loss decreased (1.186256 --> 1.167558). Saving model ...
Epoch: 8 Training Loss: 1.154341 Validation Loss: 1.121955
Validation loss decreased (1.167558 --> 1.121955). Saving model ...
Epoch: 9 Training Loss: 1.106816 Validation Loss: 1.021111
Validation loss decreased (1.121955 --> 1.021111). Saving model ...
Epoch: 10 Training Loss: 1.065994 Validation Loss: 0.989164
Validation loss decreased (1.021111 --> 0.989164). Saving model ...
Epoch: 11 Training Loss: 1.029737 Validation Loss: 0.973248
Validation loss decreased (0.989164 --> 0.973248). Saving model ...
Epoch: 12 Training Loss: 0.996208 Validation Loss: 0.923284
Validation loss decreased (0.973248 --> 0.923284). Saving model ...
Epoch: 13 Training Loss: 0.963231 Validation Loss: 0.907979
Validation loss decreased (0.923284 --> 0.907979). Saving model ...
Epoch: 14 Training Loss: 0.934870 Validation Loss: 0.860261
Validation loss decreased (0.907979 --> 0.860261). Saving model ...
Epoch: 15 Training Loss: 0.912282 Validation Loss: 0.857354
Validation loss decreased (0.860261 --> 0.857354). Saving model ...
Epoch: 16 Training Loss: 0.886103 Validation Loss: 0.844412
Validation loss decreased (0.857354 --> 0.844412). Saving model ...
Epoch: 17 Training Loss: 0.862855 Validation Loss: 0.849256
Epoch: 18 Training Loss: 0.842795 Validation Loss: 0.798689
Validation loss decreased (0.844412 --> 0.798689). Saving model ...
Epoch: 19 Training Loss: 0.822978 Validation Loss: 0.795128
Validation loss decreased (0.798689 --> 0.795128). Saving model ...
Epoch: 20 Training Loss: 0.801648 Validation Loss: 0.786230
Validation loss decreased (0.795128 --> 0.786230). Saving model ...
Epoch: 21 Training Loss: 0.787138 Validation Loss: 0.774352
Validation loss decreased (0.786230 --> 0.774352). Saving model ...
Epoch: 22 Training Loss: 0.774055 Validation Loss: 0.758015
Validation loss decreased (0.774352 --> 0.758015). Saving model ...
Epoch: 23 Training Loss: 0.756373 Validation Loss: 0.745331
Validation loss decreased (0.758015 --> 0.745331). Saving model ...
Epoch: 24 Training Loss: 0.746310 Validation Loss: 0.732623
Validation loss decreased (0.745331 --> 0.732623). Saving model ...
Epoch: 25 Training Loss: 0.728478 Validation Loss: 0.733209
Epoch: 26 Training Loss: 0.717744 Validation Loss: 0.707759
Validation loss decreased (0.732623 --> 0.707759). Saving model ...
Epoch: 27 Training Loss: 0.708449 Validation Loss: 0.717804
Epoch: 28 Training Loss: 0.696385 Validation Loss: 0.721994
Epoch: 29 Training Loss: 0.681748 Validation Loss: 0.695079
Validation loss decreased (0.707759 --> 0.695079). Saving model ...
Epoch: 30 Training Loss: 0.670312 Validation Loss: 0.687211
Validation loss decreased (0.695079 --> 0.687211). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
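The cell below reports per-class accuracy; a confusion matrix makes the class-to-class mistakes explicit. The snippet is an optional addition written with plain NumPy, reusing the `model`, `test_loader` and `train_on_gpu` objects defined earlier.
###Code
import numpy as np
import torch

confusion = np.zeros((10, 10), dtype=int)   # rows: true class, cols: predicted class

model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.cpu().numpy(), pred.cpu().numpy()):
            confusion[t, p] += 1

print(confusion)
###Output
_____no_output_____
###Markdown
Reading the rows of that matrix usually shows the animal classes (bird, cat, dog) bleeding into each other, which is consistent with the per-class accuracies printed below.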
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.680835
Test Accuracy of airplane: 81% (812/1000)
Test Accuracy of automobile: 89% (899/1000)
Test Accuracy of bird: 63% (639/1000)
Test Accuracy of cat: 58% (584/1000)
Test Accuracy of deer: 77% (773/1000)
Test Accuracy of dog: 60% (606/1000)
Test Accuracy of frog: 87% (879/1000)
Test Accuracy of horse: 76% (768/1000)
Test Accuracy of ship: 84% (849/1000)
Test Accuracy of truck: 81% (813/1000)
Test Accuracy (Overall): 76% (7622/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx] if not train_on_gpu else images[idx].cpu())
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
/home/bmendonca/dev_tools/miniconda3/envs/deep-learning/lib/python3.6/site-packages/ipykernel_launcher.py:19: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
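The next cell keeps a boolean `train_on_gpu` flag and calls `.cuda()` explicitly. An equivalent pattern, shown here only as a sketch, is to hold a single `torch.device` object and move tensors with `.to(device)`.
###Code
import torch

# pick the device once and reuse it everywhere
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('training on', device)

# usage: model.to(device); data, target = data.to(device), target.to(device)
###Output
_____no_output_____
###Markdown
Either style works; the flag-based version used in this notebook reads clearly in every cell, so it is kept as-is below.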
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
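The `Net` class in the next cell spells out each layer; the same architecture can be written more compactly with `nn.Sequential`. The restatement below is only for readability (it assumes `nn.Flatten`, which exists in recent PyTorch versions) and is not part of the original notebook.
###Code
import torch.nn as nn

# convolutional feature extractor: three conv + ReLU + max-pool stages
features = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
)
# classifier head: flatten, dropout, and two linear layers
classifier = nn.Sequential(
    nn.Flatten(),                 # requires a recent PyTorch; otherwise use x.view(-1, 64*4*4)
    nn.Dropout(0.25),
    nn.Linear(64 * 4 * 4, 500), nn.ReLU(),
    nn.Dropout(0.25),
    nn.Linear(500, 10),
)
seq_model = nn.Sequential(features, classifier)
###Output
_____no_output_____
###Markdown
Functionally this is the same network; the class-based version below is kept because its `forward` method is easier to annotate step by step.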
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx] if not train_on_gpu else images[idx].cpu())  # move back to CPU before converting to numpy
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
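###Markdown
One possible answer to the augmentation TODO above, shown purely as a sketch: the extra `RandomCrop` here is an assumption and is not the transform that produced the results below, and the test set would normally get only `ToTensor` + `Normalize`.
###Code
import torchvision.transforms as transforms
# hypothetical training transform with one extra augmentation (random crop with padding)
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.RandomCrop(32, padding=4),   # extra augmentation (assumption)
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# evaluation transform without augmentation
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____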
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
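###Markdown
As a sanity check on the `64 * 4 * 4` flattening size, the output-size formula quoted above can be applied stage by stage, and a dummy forward pass confirms the final 10-dim output; the helper name below is ours, a sketch only.
###Code
import torch

def conv_output_size(W, F, S=1, P=0):
    # (W - F + 2P)/S + 1 from the note above (square input, square kernel)
    return (W - F + 2 * P) // S + 1

size = 32
for _ in range(3):                                 # three conv (3x3, pad 1) + 2x2 max-pool stages
    size = conv_output_size(size, 3, S=1, P=1)     # conv keeps the spatial size
    size = conv_output_size(size, 2, S=2, P=0)     # pooling halves it
print(size)                                        # 4, so the flattened size is 64 * 4 * 4

with torch.no_grad():
    dummy = torch.randn(1, 3, 32, 32)
    if train_on_gpu:
        dummy = dummy.cuda()
    print(model(dummy).shape)                      # torch.Size([1, 10])
###Output
_____no_output_____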
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
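###Markdown
Two common alternatives worth trying for the optimizer TODO are sketched below; the results in this notebook were produced with plain SGD at lr=0.01, so these lines are illustrative only.
###Code
import torch.optim as optim
# SGD with momentum usually converges faster than plain SGD at the same learning rate
optimizer_sgd_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# Adam adapts per-parameter learning rates; a smaller base lr is typical
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____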
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
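###Markdown
The checkpointing above already keeps the best weights; a closely related idea is early stopping, i.e. halting once the validation loss has not improved for a few epochs. A minimal sketch of that bookkeeping follows (the `patience` value and variable names are assumptions, meant to live inside the epoch loop).
###Code
patience = 5          # stop after this many epochs without improvement (assumption)
epochs_no_improve = 0

# inside the epoch loop, after valid_loss has been computed:
if valid_loss <= valid_loss_min:
    torch.save(model.state_dict(), 'model_augmented.pt')
    valid_loss_min = valid_loss
    epochs_no_improve = 0
else:
    epochs_no_improve += 1
    if epochs_no_improve >= patience:
        print('Early stopping: no improvement for %d epochs' % patience)
        # break   # uncomment when this block is placed inside the training loop
###Output
_____no_output_____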
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
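###Markdown
One caveat worth noting: the checkpoint was saved from a GPU model, so loading it on a CPU-only machine needs `map_location`. A sketch of a device-safe load:
###Code
import torch
# load GPU-saved weights onto whatever device is available
state_dict = torch.load('model_augmented.pt',
                        map_location='cuda' if torch.cuda.is_available() else 'cpu')
model.load_state_dict(state_dict)
###Output
_____no_output_____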
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
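###Markdown
Beyond per-class accuracy, a confusion matrix shows which classes get mixed up with which (for example cat vs. dog, the weakest classes above). A sketch using the model and test loader defined earlier:
###Code
import torch
import numpy as np

confusion = np.zeros((10, 10), dtype=int)   # rows: true class, cols: predicted class
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.cpu().numpy(), pred.cpu().numpy()):
            confusion[t, p] += 1
print(confusion)
###Output
_____no_output_____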
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu() if train_on_gpu else images[idx])  # move the image back to the CPU before plotting
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data\cifar-10-python.tar.gz
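###Markdown
An equivalent way to carve out the validation set is `torch.utils.data.random_split` instead of shuffled indices plus samplers; this is a sketch only, not what the cell above uses.
###Code
import torch
from torch.utils.data import random_split

n_valid = int(len(train_data) * valid_size)
n_train = len(train_data) - n_valid
train_subset, valid_subset = random_split(train_data, [n_train, n_valid])

train_loader_alt = torch.utils.data.DataLoader(train_subset, batch_size=batch_size,
                                               shuffle=True, num_workers=num_workers)
valid_loader_alt = torch.utils.data.DataLoader(valid_subset, batch_size=batch_size,
                                               num_workers=num_workers)
###Output
_____no_output_____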
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.107167 Validation Loss: 1.856569
Validation loss decreased (inf --> 1.856569). Saving model ...
Epoch: 2 Training Loss: 1.710143 Validation Loss: 1.551788
Validation loss decreased (1.856569 --> 1.551788). Saving model ...
Epoch: 3 Training Loss: 1.530706 Validation Loss: 1.453299
Validation loss decreased (1.551788 --> 1.453299). Saving model ...
Epoch: 4 Training Loss: 1.423775 Validation Loss: 1.330881
Validation loss decreased (1.453299 --> 1.330881). Saving model ...
Epoch: 5 Training Loss: 1.341299 Validation Loss: 1.262908
Validation loss decreased (1.330881 --> 1.262908). Saving model ...
Epoch: 6 Training Loss: 1.267307 Validation Loss: 1.201184
Validation loss decreased (1.262908 --> 1.201184). Saving model ...
Epoch: 7 Training Loss: 1.208742 Validation Loss: 1.133793
Validation loss decreased (1.201184 --> 1.133793). Saving model ...
Epoch: 8 Training Loss: 1.158457 Validation Loss: 1.085892
Validation loss decreased (1.133793 --> 1.085892). Saving model ...
Epoch: 9 Training Loss: 1.116715 Validation Loss: 1.050677
Validation loss decreased (1.085892 --> 1.050677). Saving model ...
Epoch: 10 Training Loss: 1.078183 Validation Loss: 1.012293
Validation loss decreased (1.050677 --> 1.012293). Saving model ...
Epoch: 11 Training Loss: 1.042484 Validation Loss: 0.994840
Validation loss decreased (1.012293 --> 0.994840). Saving model ...
Epoch: 12 Training Loss: 1.006779 Validation Loss: 0.959459
Validation loss decreased (0.994840 --> 0.959459). Saving model ...
Epoch: 13 Training Loss: 0.980314 Validation Loss: 0.926578
Validation loss decreased (0.959459 --> 0.926578). Saving model ...
Epoch: 14 Training Loss: 0.953953 Validation Loss: 0.917010
Validation loss decreased (0.926578 --> 0.917010). Saving model ...
Epoch: 15 Training Loss: 0.924642 Validation Loss: 0.891329
Validation loss decreased (0.917010 --> 0.891329). Saving model ...
Epoch: 16 Training Loss: 0.907778 Validation Loss: 0.872076
Validation loss decreased (0.891329 --> 0.872076). Saving model ...
Epoch: 17 Training Loss: 0.880919 Validation Loss: 0.846793
Validation loss decreased (0.872076 --> 0.846793). Saving model ...
Epoch: 18 Training Loss: 0.859858 Validation Loss: 0.850472
Epoch: 19 Training Loss: 0.837997 Validation Loss: 0.812801
Validation loss decreased (0.846793 --> 0.812801). Saving model ...
Epoch: 20 Training Loss: 0.825782 Validation Loss: 0.842424
Epoch: 21 Training Loss: 0.804590 Validation Loss: 0.788494
Validation loss decreased (0.812801 --> 0.788494). Saving model ...
Epoch: 22 Training Loss: 0.795575 Validation Loss: 0.785565
Validation loss decreased (0.788494 --> 0.785565). Saving model ...
Epoch: 23 Training Loss: 0.777562 Validation Loss: 0.770852
Validation loss decreased (0.785565 --> 0.770852). Saving model ...
Epoch: 24 Training Loss: 0.764069 Validation Loss: 0.757636
Validation loss decreased (0.770852 --> 0.757636). Saving model ...
Epoch: 25 Training Loss: 0.746261 Validation Loss: 0.745733
Validation loss decreased (0.757636 --> 0.745733). Saving model ...
Epoch: 26 Training Loss: 0.732991 Validation Loss: 0.743672
Validation loss decreased (0.745733 --> 0.743672). Saving model ...
Epoch: 27 Training Loss: 0.728252 Validation Loss: 0.760076
Epoch: 28 Training Loss: 0.712782 Validation Loss: 0.739961
Validation loss decreased (0.743672 --> 0.739961). Saving model ...
Epoch: 29 Training Loss: 0.698036 Validation Loss: 0.700385
Validation loss decreased (0.739961 --> 0.700385). Saving model ...
Epoch: 30 Training Loss: 0.686791 Validation Loss: 0.726166
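###Markdown
To see how the training and validation losses evolve, it helps to collect the per-epoch numbers and plot them. This is a sketch that assumes the training loop is extended to append each epoch's `train_loss`/`valid_loss` to two lists; the list and function names are ours.
###Code
import matplotlib.pyplot as plt

# hypothetical lists filled inside the training loop:
#   train_losses.append(train_loss); valid_losses.append(valid_loss)
train_losses, valid_losses = [], []

def plot_losses(train_losses, valid_losses):
    epochs = range(1, len(train_losses) + 1)
    plt.plot(epochs, train_losses, label='training loss')
    plt.plot(epochs, valid_losses, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()
###Output
_____no_output_____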
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.719306
Test Accuracy of airplane: 81% (815/1000)
Test Accuracy of automobile: 85% (859/1000)
Test Accuracy of bird: 56% (565/1000)
Test Accuracy of cat: 62% (625/1000)
Test Accuracy of deer: 66% (665/1000)
Test Accuracy of dog: 70% (703/1000)
Test Accuracy of frog: 83% (832/1000)
Test Accuracy of horse: 78% (781/1000)
Test Accuracy of ship: 85% (853/1000)
Test Accuracy of truck: 83% (833/1000)
Test Accuracy (Overall): 75% (7531/10000)
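###Markdown
One small refinement to the test loop above: wrapping evaluation in `torch.no_grad()` skips gradient bookkeeping and saves memory. A sketch of the pattern, reusing the loader and criterion defined earlier:
###Code
import torch

model.eval()
test_loss = 0.0
with torch.no_grad():                       # no gradients needed for evaluation
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        output = model(data)
        test_loss += criterion(output, target).item() * data.size(0)
print('Test Loss: {:.6f}'.format(test_loss / len(test_loader.dataset)))
###Output
_____no_output_____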
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
if train_on_gpu:
images = images.cpu()
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx].cpu() if train_on_gpu else images[idx])  # move the image back to the CPU before plotting
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
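###Markdown
The predictions above come from `torch.max` over raw scores; to see how confident the model is, the scores can be passed through a softmax. A sketch for the batch loaded in the cell above:
###Code
import torch.nn.functional as F

probs = F.softmax(output, dim=1)            # output computed in the cell above
top_prob, top_class = probs.topk(1, dim=1)
for i in range(5):                          # first few images of the batch
    print('{}: {:.1%} confident'.format(classes[top_class[i].item()],
                                        top_prob[i].item()))
###Output
_____no_output_____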
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(30),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
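###Markdown
 As a response to the TODO above, here is one possible way to extend the augmentation pipeline. This is only a sketch, not part of the recorded run: the extra transforms (a padded random crop and mild color jitter) and the separate, augmentation-free test transform are choices made for illustration, and the DataLoaders above would have to be rebuilt with `train_transform`/`test_transform` for them to take effect.
###Code
import torchvision.transforms as transforms

# heavier augmentation for training images only (illustrative variant)
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),          # random 32x32 crop from a padded image
    transforms.RandomHorizontalFlip(),             # random left-right flip
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# no augmentation at test time -- only tensor conversion and normalization
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____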
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
<ipython-input-4-2181f8df30a5>:10: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
 --- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
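###Markdown
 As a quick sanity check of the layer comments above, the snippet below pushes a single dummy CIFAR-10-sized tensor through the conv/pool stack and prints the intermediate shapes. It is a sketch that assumes the `model`, `F`, and `train_on_gpu` objects defined in the cells above; it is not part of the original notebook.
###Code
# trace how the spatial size shrinks through the conv/pool stack:
# each 3x3, stride-1, padding-1 convolution keeps the size, each 2x2 max pool halves it
x = torch.zeros(1, 3, 32, 32)              # one dummy 32x32 RGB image
if train_on_gpu:
    x = x.cuda()
x = model.pool(F.relu(model.conv1(x)))
print(x.shape)                              # expected: torch.Size([1, 16, 16, 16])
x = model.pool(F.relu(model.conv2(x)))
print(x.shape)                              # expected: torch.Size([1, 32, 8, 8])
x = model.pool(F.relu(model.conv3(x)))
print(x.shape)                              # expected: torch.Size([1, 64, 4, 4]) -> 64*4*4 = 1024 features for fc1
###Output
_____no_output_____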
###Markdown
 Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
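###Markdown
 Two common alternatives to plain SGD that could be tried for the TODO above. These are only illustrative; the training run recorded below used the plain `SGD(lr=0.01)` optimizer defined in the previous cell.
###Code
import torch.optim as optim

# SGD with momentum usually converges faster at the same learning rate ...
optimizer_sgd_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# ... while Adam adapts the step size per parameter, so a smaller base lr is typical
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
# to use one of them, assign it to `optimizer` before running the training loop below
###Output
_____no_output_____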
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.111981 Validation Loss: 1.829360
Validation loss decreased (inf --> 1.829360). Saving model ...
Epoch: 2 Training Loss: 1.729530 Validation Loss: 1.608806
Validation loss decreased (1.829360 --> 1.608806). Saving model ...
Epoch: 3 Training Loss: 1.568372 Validation Loss: 1.478129
Validation loss decreased (1.608806 --> 1.478129). Saving model ...
Epoch: 4 Training Loss: 1.474528 Validation Loss: 1.408031
Validation loss decreased (1.478129 --> 1.408031). Saving model ...
Epoch: 5 Training Loss: 1.400147 Validation Loss: 1.331784
Validation loss decreased (1.408031 --> 1.331784). Saving model ...
Epoch: 6 Training Loss: 1.344653 Validation Loss: 1.279872
Validation loss decreased (1.331784 --> 1.279872). Saving model ...
Epoch: 7 Training Loss: 1.294106 Validation Loss: 1.219554
Validation loss decreased (1.279872 --> 1.219554). Saving model ...
Epoch: 8 Training Loss: 1.253066 Validation Loss: 1.185584
Validation loss decreased (1.219554 --> 1.185584). Saving model ...
Epoch: 9 Training Loss: 1.214180 Validation Loss: 1.154631
Validation loss decreased (1.185584 --> 1.154631). Saving model ...
Epoch: 10 Training Loss: 1.182422 Validation Loss: 1.105284
Validation loss decreased (1.154631 --> 1.105284). Saving model ...
Epoch: 11 Training Loss: 1.150241 Validation Loss: 1.086410
Validation loss decreased (1.105284 --> 1.086410). Saving model ...
Epoch: 12 Training Loss: 1.127433 Validation Loss: 1.069799
Validation loss decreased (1.086410 --> 1.069799). Saving model ...
Epoch: 13 Training Loss: 1.102591 Validation Loss: 1.038454
Validation loss decreased (1.069799 --> 1.038454). Saving model ...
Epoch: 14 Training Loss: 1.071469 Validation Loss: 1.012853
Validation loss decreased (1.038454 --> 1.012853). Saving model ...
Epoch: 15 Training Loss: 1.053388 Validation Loss: 1.025299
Epoch: 16 Training Loss: 1.032698 Validation Loss: 0.972938
Validation loss decreased (1.012853 --> 0.972938). Saving model ...
Epoch: 17 Training Loss: 1.014881 Validation Loss: 0.977107
Epoch: 18 Training Loss: 0.992677 Validation Loss: 0.928030
Validation loss decreased (0.972938 --> 0.928030). Saving model ...
Epoch: 19 Training Loss: 0.981584 Validation Loss: 0.941831
Epoch: 20 Training Loss: 0.959755 Validation Loss: 0.907710
Validation loss decreased (0.928030 --> 0.907710). Saving model ...
Epoch: 21 Training Loss: 0.949731 Validation Loss: 0.905118
Validation loss decreased (0.907710 --> 0.905118). Saving model ...
Epoch: 22 Training Loss: 0.932669 Validation Loss: 0.879399
Validation loss decreased (0.905118 --> 0.879399). Saving model ...
Epoch: 23 Training Loss: 0.921410 Validation Loss: 0.872651
Validation loss decreased (0.879399 --> 0.872651). Saving model ...
Epoch: 24 Training Loss: 0.904947 Validation Loss: 0.872621
Validation loss decreased (0.872651 --> 0.872621). Saving model ...
Epoch: 25 Training Loss: 0.893202 Validation Loss: 0.856152
Validation loss decreased (0.872621 --> 0.856152). Saving model ...
Epoch: 26 Training Loss: 0.883507 Validation Loss: 0.848151
Validation loss decreased (0.856152 --> 0.848151). Saving model ...
Epoch: 27 Training Loss: 0.876788 Validation Loss: 0.832935
Validation loss decreased (0.848151 --> 0.832935). Saving model ...
Epoch: 28 Training Loss: 0.861352 Validation Loss: 0.835735
Epoch: 29 Training Loss: 0.848545 Validation Loss: 0.823168
Validation loss decreased (0.832935 --> 0.823168). Saving model ...
Epoch: 30 Training Loss: 0.843982 Validation Loss: 0.820413
Validation loss decreased (0.823168 --> 0.820413). Saving model ...
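###Markdown
 The note above about a rising validation loss can be turned into an automatic stopping rule. Below is a minimal sketch of a patience-based early-stopping helper; it was not used for the run recorded above, which simply trained for a fixed 30 epochs.
###Code
class EarlyStopping:
    """Stop training once the validation loss has not improved for `patience` epochs."""
    def __init__(self, patience=5):
        self.patience = patience
        self.best = np.Inf
        self.bad_epochs = 0

    def step(self, valid_loss):
        if valid_loss < self.best:
            self.best = valid_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True means: stop now

stopper = EarlyStopping(patience=5)
# usage inside the epoch loop above, right after valid_loss is computed:
#     if stopper.step(valid_loss):
#         break
###Output
_____no_output_____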
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.828379
Test Accuracy of airplane: 75% (759/1000)
Test Accuracy of automobile: 79% (791/1000)
Test Accuracy of bird: 51% (514/1000)
Test Accuracy of cat: 53% (532/1000)
Test Accuracy of deer: 72% (725/1000)
Test Accuracy of dog: 67% (673/1000)
Test Accuracy of frog: 74% (744/1000)
Test Accuracy of horse: 76% (760/1000)
Test Accuracy of ship: 79% (799/1000)
Test Accuracy of truck: 81% (813/1000)
Test Accuracy (Overall): 71% (7110/10000)
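###Markdown
 Beyond per-class accuracy, it can be useful to see which classes get confused with which. The cell below is a sketch (not part of the original notebook) that builds a simple confusion matrix from the same `model` and `test_loader` used above.
###Code
# rows are true classes, columns are predicted classes
confusion = np.zeros((10, 10), dtype=int)

model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data = data.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.numpy(), pred.cpu().numpy()):
            confusion[t, p] += 1

# report the most frequent confusion for each class
for i, name in enumerate(classes):
    row = confusion[i].copy()
    row[i] = 0                         # ignore correct predictions
    j = row.argmax()
    print('%10s is most often mistaken for %10s (%d times)' % (name, classes[j], row[j]))
###Output
_____no_output_____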
###Markdown
Visualize Sample Test Results
###Code
model.to('cpu')
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
#if train_on_gpu:
# images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
 --- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
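###Markdown
 Before defining the model, the quoted formula can be checked with a couple of lines of Python. This small helper is only an illustration of `(W−F+2P)/S+1`; it is not required by the rest of the notebook.
###Code
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a convolution: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

# the examples quoted above: 7x7 input, 3x3 filter
print(conv_output_size(7, 3, S=1, P=0))    # 5
print(conv_output_size(7, 3, S=2, P=0))    # 3
# the 3x3, stride-1, padding-1 convolutions used below keep the size unchanged;
# each 2x2 max pool then halves it: 32 -> 16 -> 8 -> 4
print(conv_output_size(32, 3, S=1, P=1))   # 32
###Output
_____no_output_____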
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
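###Markdown
 A small, optional sanity check on the size of the model printed above: counting its trainable parameters (not part of the original notebook).
###Code
# count trainable parameters; most of them sit in fc1 (1024 * 500 weights)
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Trainable parameters: {:,}'.format(n_params))   # roughly 0.54 million for this architecture
###Output
_____no_output_____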
###Markdown
 Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
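###Markdown
 The learning rate mentioned in the TODO above does not have to stay fixed. A minimal sketch of a step decay schedule is shown below; it was not used for the training run recorded in the next cell.
###Code
from torch.optim.lr_scheduler import StepLR

# halve the learning rate every 10 epochs
scheduler = StepLR(optimizer, step_size=10, gamma=0.5)
# to activate it, call scheduler.step() once at the end of every training epoch
###Output
_____no_output_____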
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
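###Markdown
 The per-class numbers printed above are easier to compare visually. The cell below is a small sketch that turns them into a bar chart, reusing the `class_correct`, `class_total`, and `classes` lists from the test cell.
###Code
# bar chart of test accuracy per class
accuracies = [100.0 * class_correct[i] / class_total[i] for i in range(10)]
plt.figure(figsize=(10, 4))
plt.bar(classes, accuracies)
plt.ylabel('test accuracy (%)')
plt.xticks(rotation=45)
plt.show()
###Output
_____no_output_____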
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# move model inputs to cuda, if GPU available
if train_on_gpu:
    images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# move the images back to the CPU so imshow can convert them to numpy
images = images.cpu()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images[idx])
    ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
                 color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd drive/My\ Drive/convolutional-neural-networks
%cd cifar-cnn/
###Output
/content/drive/My Drive/convolutional-neural-networks/cifar-cnn
###Markdown
Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
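###Markdown
 An equivalent, slightly more flexible way to express the same check is a single `torch.device` object; tensors and the model can then be moved with `.to(device)` instead of `.cuda()`. This is only an alternative idiom, not what the rest of this notebook uses.
###Code
# device object covering both the GPU and CPU cases
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
# e.g. model.to(device) and data.to(device) instead of the explicit .cuda() calls below
###Output
_____no_output_____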
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
 --- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for calculating the output width, output_W, is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
 Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
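###Markdown
 If the checkpoint saved above were later loaded on a machine without a GPU, `map_location` avoids a device mismatch. This is a hypothetical variant of the cell above, not something needed in this Colab run.
###Code
# load the checkpoint onto the CPU explicitly (useful on a CPU-only machine)
state_dict = torch.load('model_augmented.pt', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
###Output
_____no_output_____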
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# move model inputs to cuda, if GPU available
if train_on_gpu:
    images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# move the images back to the CPU so imshow can convert them to numpy
images = images.cpu()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images[idx])
    ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
                 color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html) This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width, output_W, is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
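To make the formula concrete, here is a small helper (added for illustration, assuming integer sizes) that evaluates `(W−F+2P)/S+1` for the layers defined in the next cell:

```python
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a conv or pooling layer: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

# a 3x3 kernel with stride 1 and padding 1 preserves the 32x32 size
print(conv_output_size(32, 3, S=1, P=1))   # 32
# 2x2 max pooling with stride 2 halves it
print(conv_output_size(32, 2, S=2, P=0))   # 16
# three conv + pool stages: 32 -> 16 -> 8 -> 4, which is why the flatten below uses 64 * 4 * 4
```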
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html) Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
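As one way to explore the TODO, two common alternatives to plain SGD (a sketch, assuming the `model` defined above; these are not the settings used in the next cell) are SGD with momentum and Adam:

```python
import torch.optim as optim

# SGD with momentum often converges faster than plain SGD at the same learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Adam is another common choice; a smaller learning rate is typical
# optimizer = optim.Adam(model.parameters(), lr=0.001)
```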
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
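One simple guard against the overfitting described above is early stopping: stop training once the validation loss has not improved for a few epochs (a sketch of the idea only; the loop below checkpoints the best model but does not stop early):

```python
class EarlyStopping:
    """Track validation loss and signal when it has stopped improving."""
    def __init__(self, patience=5):
        self.patience = patience        # epochs to wait for an improvement
        self.best = float('inf')
        self.bad_epochs = 0

    def step(self, valid_loss):
        if valid_loss < self.best:      # new best -> reset the counter
            self.best = valid_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True -> stop training
```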
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.099168 Validation Loss: 1.804420
Validation loss decreased (inf --> 1.804420). Saving model ...
Epoch: 2 Training Loss: 1.698068 Validation Loss: 1.563787
Validation loss decreased (1.804420 --> 1.563787). Saving model ...
Epoch: 3 Training Loss: 1.524512 Validation Loss: 1.449027
Validation loss decreased (1.563787 --> 1.449027). Saving model ...
Epoch: 4 Training Loss: 1.416082 Validation Loss: 1.320896
Validation loss decreased (1.449027 --> 1.320896). Saving model ...
Epoch: 5 Training Loss: 1.338755 Validation Loss: 1.260650
Validation loss decreased (1.320896 --> 1.260650). Saving model ...
Epoch: 6 Training Loss: 1.268989 Validation Loss: 1.215268
Validation loss decreased (1.260650 --> 1.215268). Saving model ...
Epoch: 7 Training Loss: 1.212291 Validation Loss: 1.119831
Validation loss decreased (1.215268 --> 1.119831). Saving model ...
Epoch: 8 Training Loss: 1.157910 Validation Loss: 1.077347
Validation loss decreased (1.119831 --> 1.077347). Saving model ...
Epoch: 9 Training Loss: 1.106221 Validation Loss: 1.069549
Validation loss decreased (1.077347 --> 1.069549). Saving model ...
Epoch: 10 Training Loss: 1.064835 Validation Loss: 1.011595
Validation loss decreased (1.069549 --> 1.011595). Saving model ...
Epoch: 11 Training Loss: 1.027193 Validation Loss: 0.991579
Validation loss decreased (1.011595 --> 0.991579). Saving model ...
Epoch: 12 Training Loss: 0.988284 Validation Loss: 0.943573
Validation loss decreased (0.991579 --> 0.943573). Saving model ...
Epoch: 13 Training Loss: 0.955873 Validation Loss: 0.894150
Validation loss decreased (0.943573 --> 0.894150). Saving model ...
Epoch: 14 Training Loss: 0.927872 Validation Loss: 0.874074
Validation loss decreased (0.894150 --> 0.874074). Saving model ...
Epoch: 15 Training Loss: 0.900270 Validation Loss: 0.844935
Validation loss decreased (0.874074 --> 0.844935). Saving model ...
Epoch: 16 Training Loss: 0.876559 Validation Loss: 0.838447
Validation loss decreased (0.844935 --> 0.838447). Saving model ...
Epoch: 17 Training Loss: 0.857983 Validation Loss: 0.825325
Validation loss decreased (0.838447 --> 0.825325). Saving model ...
Epoch: 18 Training Loss: 0.831917 Validation Loss: 0.796899
Validation loss decreased (0.825325 --> 0.796899). Saving model ...
Epoch: 19 Training Loss: 0.810551 Validation Loss: 0.799468
Epoch: 20 Training Loss: 0.791272 Validation Loss: 0.807467
Epoch: 21 Training Loss: 0.773877 Validation Loss: 0.762958
Validation loss decreased (0.796899 --> 0.762958). Saving model ...
Epoch: 22 Training Loss: 0.763524 Validation Loss: 0.770973
Epoch: 23 Training Loss: 0.745562 Validation Loss: 0.738148
Validation loss decreased (0.762958 --> 0.738148). Saving model ...
Epoch: 24 Training Loss: 0.734298 Validation Loss: 0.756104
Epoch: 25 Training Loss: 0.722343 Validation Loss: 0.714989
Validation loss decreased (0.738148 --> 0.714989). Saving model ...
Epoch: 26 Training Loss: 0.707206 Validation Loss: 0.740726
Epoch: 27 Training Loss: 0.698205 Validation Loss: 0.707925
Validation loss decreased (0.714989 --> 0.707925). Saving model ...
Epoch: 28 Training Loss: 0.685281 Validation Loss: 0.716978
Epoch: 29 Training Loss: 0.675986 Validation Loss: 0.697820
Validation loss decreased (0.707925 --> 0.697820). Saving model ...
Epoch: 30 Training Loss: 0.667174 Validation Loss: 0.712110
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
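Beyond per-class accuracy, a confusion matrix shows which classes are mistaken for which (a sketch, assuming the `model`, `test_loader`, and `train_on_gpu` defined above):

```python
import torch

confusion = torch.zeros(10, 10, dtype=torch.long)   # rows = true class, columns = predicted class
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data = data.cuda()
        pred = model(data).argmax(dim=1).cpu()       # predicted class per image
        for t, p in zip(target, pred):
            confusion[t, p] += 1
print(confusion)
```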
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the batch for plotting
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html) Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html) This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width, output_W, is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html) Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
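A convenient way to "look at" the two losses, instead of scanning the printed log, is to collect them per epoch and plot both curves (a sketch; the loop below would need to append each epoch's `train_loss` and `valid_loss` to these lists):

```python
import matplotlib.pyplot as plt

train_history, valid_history = [], []    # filled once per epoch inside the training loop

def plot_losses(train_history, valid_history):
    plt.plot(train_history, label='training loss')
    plt.plot(valid_history, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('average loss')
    plt.legend()
    plt.show()
```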
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
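Since no parameters are updated at test time, the evaluation pass can be wrapped in `torch.no_grad()` to skip gradient tracking and save memory (a sketch of the idea, assuming the `model`, `test_loader`, and `train_on_gpu` defined above; the cell below keeps the original structure):

```python
import torch

model.eval()
correct, total = 0, 0
with torch.no_grad():                       # no gradients are needed for inference
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        pred = model(data).argmax(dim=1)    # predicted class index per image
        correct += (pred == target).sum().item()
        total += target.size(0)
print('Overall accuracy: {:.1f}%'.format(100.0 * correct / total))
```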
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the batch for plotting
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html) Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html) This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer, we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating the output width, output_W, is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html) Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
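Because the learning rate matters so much here, another option (a sketch, not used in this run; the step size and decay factor are illustrative, and `model` is assumed from the cell above) is to decay it on a fixed schedule rather than keeping it constant:

```python
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR

optimizer = optim.SGD(model.parameters(), lr=0.01)
scheduler = StepLR(optimizer, step_size=15, gamma=0.1)   # multiply the lr by 0.1 every 15 epochs

# inside the training loop, call scheduler.step() once per epoch after optimizer.step()
```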
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation losses decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
    train_loss = train_loss/len(train_loader.sampler)
    valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.483400 Validation Loss: 0.357015
Validation loss decreased (inf --> 0.357015). Saving model ...
Epoch: 2 Training Loss: 1.462634 Validation Loss: 0.355032
Validation loss decreased (0.357015 --> 0.355032). Saving model ...
Epoch: 3 Training Loss: 1.454201 Validation Loss: 0.351629
Validation loss decreased (0.355032 --> 0.351629). Saving model ...
Epoch: 4 Training Loss: 1.446663 Validation Loss: 0.349965
Validation loss decreased (0.351629 --> 0.349965). Saving model ...
Epoch: 5 Training Loss: 1.444058 Validation Loss: 0.348746
Validation loss decreased (0.349965 --> 0.348746). Saving model ...
Epoch: 6 Training Loss: 1.443467 Validation Loss: 0.350312
Epoch: 7 Training Loss: 1.435970 Validation Loss: 0.347319
Validation loss decreased (0.348746 --> 0.347319). Saving model ...
Epoch: 8 Training Loss: 1.430920 Validation Loss: 0.347215
Validation loss decreased (0.347319 --> 0.347215). Saving model ...
Epoch: 9 Training Loss: 1.428707 Validation Loss: 0.347475
Epoch: 10 Training Loss: 1.426082 Validation Loss: 0.344052
Validation loss decreased (0.347215 --> 0.344052). Saving model ...
Epoch: 11 Training Loss: 1.421921 Validation Loss: 0.343847
Validation loss decreased (0.344052 --> 0.343847). Saving model ...
Epoch: 12 Training Loss: 1.419442 Validation Loss: 0.344311
Epoch: 13 Training Loss: 1.418585 Validation Loss: 0.345510
Epoch: 14 Training Loss: 1.415068 Validation Loss: 0.342422
Validation loss decreased (0.343847 --> 0.342422). Saving model ...
Epoch: 15 Training Loss: 1.410706 Validation Loss: 0.340904
Validation loss decreased (0.342422 --> 0.340904). Saving model ...
Epoch: 16 Training Loss: 1.415896 Validation Loss: 0.342239
Epoch: 17 Training Loss: 1.408537 Validation Loss: 0.339082
Validation loss decreased (0.340904 --> 0.339082). Saving model ...
Epoch: 18 Training Loss: 1.406269 Validation Loss: 0.343943
Epoch: 19 Training Loss: 1.404933 Validation Loss: 0.338068
Validation loss decreased (0.339082 --> 0.338068). Saving model ...
Epoch: 20 Training Loss: 1.404783 Validation Loss: 0.341946
Epoch: 21 Training Loss: 1.401647 Validation Loss: 0.340137
Epoch: 22 Training Loss: 1.402606 Validation Loss: 0.339872
Epoch: 23 Training Loss: 1.398441 Validation Loss: 0.337148
Validation loss decreased (0.338068 --> 0.337148). Saving model ...
Epoch: 24 Training Loss: 1.401196 Validation Loss: 0.335357
Validation loss decreased (0.337148 --> 0.335357). Saving model ...
Epoch: 25 Training Loss: 1.392405 Validation Loss: 0.339850
Epoch: 26 Training Loss: 1.392132 Validation Loss: 0.339279
Epoch: 27 Training Loss: 1.387730 Validation Loss: 0.331665
Validation loss decreased (0.335357 --> 0.331665). Saving model ...
Epoch: 28 Training Loss: 1.380706 Validation Loss: 0.340137
Epoch: 29 Training Loss: 1.374192 Validation Loss: 0.332150
Epoch: 30 Training Loss: 1.372476 Validation Loss: 0.332638
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 1.650751
Test Accuracy of airplane: 37% (374/1000)
Test Accuracy of automobile: 47% (471/1000)
Test Accuracy of bird: 8% (80/1000)
Test Accuracy of cat: 20% (203/1000)
Test Accuracy of deer: 39% (390/1000)
Test Accuracy of dog: 27% (274/1000)
Test Accuracy of frog: 37% (372/1000)
Test Accuracy of horse: 48% (483/1000)
Test Accuracy of ship: 47% (473/1000)
Test Accuracy of truck: 58% (581/1000)
Test Accuracy (Overall): 37% (3701/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the batch for plotting
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html) Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
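###Markdown
As a hedged sketch of the TODO above, a couple of extra transforms could be appended to the training pipeline; `RandomCrop` with padding and `ColorJitter` are common choices for CIFAR-10, but the exact values here are illustrative assumptions rather than the settings used in this notebook.
###Code
# sketch only: a training transform with two additional augmentations
extra_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4), # random 32x32 crops from a zero-padded image
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1), # mild photometric jitter
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____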
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
/Users/tengwu/opt/anaconda3/envs/deep-learning/lib/python3.7/site-packages/ipykernel_launcher.py:10: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
# Remove the CWD from sys.path while we load stuff.
###Markdown
View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---
Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit across the output width is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer
self.conv1 = nn.Conv2d(3, 16, 3, padding=1) # output in same size
self.conv2 = nn.Conv2d(16, 32, 3, padding=1) # output in same size
self.conv3 = nn.Conv2d(32, 32, 3, padding=1) # output in same size
# fully connected linear layer
self.fc1 = nn.Linear(4*4*32, 512)
self.fc2 = nn.Linear(512, 10)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
self.dropout = nn.Dropout2d(0.2)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# output size = 4 x 4 x 32
# Flatten CNN output
x = x.contiguous().view(x.size(0), -1)
x = self.dropout(x)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
# import torch.nn as nn
# import torch.nn.functional as F
# # define the CNN architecture
# class Net(nn.Module):
# def __init__(self):
# super(Net, self).__init__()
# # convolutional layer (sees 32x32x3 image tensor)
# self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# # convolutional layer (sees 16x16x16 tensor)
# self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# # convolutional layer (sees 8x8x32 tensor)
# self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# # max pooling layer
# self.pool = nn.MaxPool2d(2, 2)
# # linear layer (64 * 4 * 4 -> 500)
# self.fc1 = nn.Linear(64 * 4 * 4, 500)
# # linear layer (500 -> 10)
# self.fc2 = nn.Linear(500, 10)
# # dropout layer (p=0.25)
# self.dropout = nn.Dropout(0.25)
# def forward(self, x):
# # add sequence of convolutional and max pooling layers
# x = self.pool(F.relu(self.conv1(x)))
# x = self.pool(F.relu(self.conv2(x)))
# x = self.pool(F.relu(self.conv3(x)))
# # flatten image input
# x = x.view(-1, 64 * 4 * 4)
# # add dropout layer
# x = self.dropout(x)
# # add 1st hidden layer, with relu activation function
# x = F.relu(self.fc1(x))
# # add dropout layer
# x = self.dropout(x)
# # add 2nd hidden layer, with relu activation function
# x = self.fc2(x)
# return x
# # create a complete CNN
# model = Net()
# print(model)
# # move tensors to GPU if CUDA is available
# if train_on_gpu:
# model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(fc1): Linear(in_features=512, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=10, bias=True)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(dropout): Dropout2d(p=0.2, inplace=False)
)
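###Markdown
As a small check of the `(W−F+2P)/S+1` formula quoted above, the helper below (an illustrative sketch, not part of the original notebook) traces the spatial size through the three conv + pool stages of the model and recovers the 4x4 grid behind the flattened `4*4*32` input to `fc1`.
###Code
# sketch: spatial output size of a conv or pooling layer, (W - F + 2P) / S + 1
def conv_out_size(W, F, S=1, P=0):
    return (W - F + 2 * P) // S + 1
size = 32 # CIFAR-10 images are 32x32
for _ in range(3): # three conv(3x3, pad 1) + maxpool(2x2, stride 2) stages
    size = conv_out_size(size, F=3, S=1, P=1) # the padded conv keeps the size
    size = conv_out_size(size, F=2, S=2, P=0) # pooling halves it
print(size) # 4, matching the 4x4 spatial grid flattened above
###Output
_____no_output_____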
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value of the **learning rate**, as it determines how quickly your model converges to a small error.
TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
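###Markdown
As a hedged illustration of the TODO above, two common alternatives to plain SGD are shown below; which one converges best for this model is an empirical question, and the learning rates are assumptions chosen only for illustration.
###Code
# sketch only: alternative optimizers to experiment with (not the settings used above)
optimizer_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
optimizer_adam = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____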
###Markdown
---
Train the Network
Remember to look at how the training and validation loss decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
model.load_state_dict(torch.load('model_augmented.pt', map_location=torch.device('cpu')))
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
torch.Size([20, 10])
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---
Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.783118
Test Accuracy of airplane: 79% (793/1000)
Test Accuracy of automobile: 83% (839/1000)
Test Accuracy of bird: 59% (590/1000)
Test Accuracy of cat: 41% (418/1000)
Test Accuracy of deer: 73% (736/1000)
Test Accuracy of dog: 62% (626/1000)
Test Accuracy of frog: 76% (769/1000)
Test Accuracy of horse: 84% (840/1000)
Test Accuracy of ship: 83% (831/1000)
Test Accuracy of truck: 82% (829/1000)
Test Accuracy (Overall): 72% (7271/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
/Users/tengwu/opt/anaconda3/envs/deep-learning/lib/python3.7/site-packages/ipykernel_launcher.py:19: MatplotlibDeprecationWarning: Passing non-integers as three-element position specification is deprecated since 3.3 and will be removed two minor releases later.
###Markdown
Convolutional Neural Networks
---
In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
---
Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
Augmentation
In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html).
TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.
This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 2
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to data\cifar-10-python.tar.gz
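###Markdown
A hedged aside on the train/validation split above: the same 80/20 split can also be expressed with `torch.utils.data.random_split` instead of shuffled indices plus `SubsetRandomSampler`. The sketch below is illustrative and not the approach used in this notebook; the loader names are assumptions.
###Code
# sketch only: an alternative split using random_split
from torch.utils.data import random_split
n_valid = int(np.floor(valid_size * len(train_data)))
n_train = len(train_data) - n_valid
train_subset, valid_subset = random_split(train_data, [n_train, n_valid])
alt_train_loader = torch.utils.data.DataLoader(train_subset, batch_size=batch_size,
    shuffle=True, num_workers=num_workers)
alt_valid_loader = torch.utils.data.DataLoader(valid_subset, batch_size=batch_size,
    num_workers=num_workers)
###Output
_____no_output_____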
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---
Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit across the output width is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, stride =1, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, stride =1, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, stride =1, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25, inplace=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value of the **learning rate**, as it determines how quickly your model converges to a small error.
TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
---
Train the Network
Remember to look at how the training and validation loss decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 2.181793 Validation Loss: 1.906786
Validation loss decreased (inf --> 1.906786). Saving model ...
Epoch: 2 Training Loss: 1.757002 Validation Loss: 1.566095
Validation loss decreased (1.906786 --> 1.566095). Saving model ...
Epoch: 3 Training Loss: 1.528996 Validation Loss: 1.494989
Validation loss decreased (1.566095 --> 1.494989). Saving model ...
Epoch: 4 Training Loss: 1.416966 Validation Loss: 1.327248
Validation loss decreased (1.494989 --> 1.327248). Saving model ...
Epoch: 5 Training Loss: 1.330309 Validation Loss: 1.260442
Validation loss decreased (1.327248 --> 1.260442). Saving model ...
Epoch: 6 Training Loss: 1.260725 Validation Loss: 1.201306
Validation loss decreased (1.260442 --> 1.201306). Saving model ...
Epoch: 7 Training Loss: 1.191393 Validation Loss: 1.128321
Validation loss decreased (1.201306 --> 1.128321). Saving model ...
Epoch: 8 Training Loss: 1.140873 Validation Loss: 1.154466
Epoch: 9 Training Loss: 1.097859 Validation Loss: 1.056986
Validation loss decreased (1.128321 --> 1.056986). Saving model ...
Epoch: 10 Training Loss: 1.058494 Validation Loss: 1.043310
Validation loss decreased (1.056986 --> 1.043310). Saving model ...
Epoch: 11 Training Loss: 1.020470 Validation Loss: 0.984152
Validation loss decreased (1.043310 --> 0.984152). Saving model ...
Epoch: 12 Training Loss: 0.988581 Validation Loss: 0.966906
Validation loss decreased (0.984152 --> 0.966906). Saving model ...
Epoch: 13 Training Loss: 0.961191 Validation Loss: 0.943046
Validation loss decreased (0.966906 --> 0.943046). Saving model ...
Epoch: 14 Training Loss: 0.931902 Validation Loss: 0.904914
Validation loss decreased (0.943046 --> 0.904914). Saving model ...
Epoch: 15 Training Loss: 0.907263 Validation Loss: 0.925723
Epoch: 16 Training Loss: 0.885157 Validation Loss: 0.862987
Validation loss decreased (0.904914 --> 0.862987). Saving model ...
Epoch: 17 Training Loss: 0.865677 Validation Loss: 0.883511
Epoch: 18 Training Loss: 0.849353 Validation Loss: 0.856922
Validation loss decreased (0.862987 --> 0.856922). Saving model ...
Epoch: 19 Training Loss: 0.828315 Validation Loss: 0.830107
Validation loss decreased (0.856922 --> 0.830107). Saving model ...
Epoch: 20 Training Loss: 0.804776 Validation Loss: 0.802484
Validation loss decreased (0.830107 --> 0.802484). Saving model ...
Epoch: 21 Training Loss: 0.794827 Validation Loss: 0.807261
Epoch: 22 Training Loss: 0.777293 Validation Loss: 0.805065
Epoch: 23 Training Loss: 0.765782 Validation Loss: 0.799462
Validation loss decreased (0.802484 --> 0.799462). Saving model ...
Epoch: 24 Training Loss: 0.751345 Validation Loss: 0.773901
Validation loss decreased (0.799462 --> 0.773901). Saving model ...
Epoch: 25 Training Loss: 0.731395 Validation Loss: 0.758595
Validation loss decreased (0.773901 --> 0.758595). Saving model ...
Epoch: 26 Training Loss: 0.718107 Validation Loss: 0.754077
Validation loss decreased (0.758595 --> 0.754077). Saving model ...
Epoch: 27 Training Loss: 0.705634 Validation Loss: 0.750373
Validation loss decreased (0.754077 --> 0.750373). Saving model ...
Epoch: 28 Training Loss: 0.693494 Validation Loss: 0.748585
Validation loss decreased (0.750373 --> 0.748585). Saving model ...
Epoch: 29 Training Loss: 0.686959 Validation Loss: 0.745222
Validation loss decreased (0.748585 --> 0.745222). Saving model ...
Epoch: 30 Training Loss: 0.675830 Validation Loss: 0.730552
Validation loss decreased (0.745222 --> 0.730552). Saving model ...
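###Markdown
A hedged sketch related to the note above about rising validation loss: a simple patience-based early-stopping check could be layered on top of the validation loss that is already tracked each epoch. The `patience` value and the function name are assumptions for illustration only.
###Code
# sketch only: patience-based early stopping on top of the tracked validation loss
patience = 5 # assumed number of epochs without improvement to tolerate
best_loss = np.Inf
epochs_without_improvement = 0
def should_stop(valid_loss):
    """Return True once valid_loss has failed to improve for `patience` epochs."""
    global best_loss, epochs_without_improvement
    if valid_loss < best_loss:
        best_loss = valid_loss
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
    return epochs_without_improvement >= patience
###Output
_____no_output_____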
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---
Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.723856
Test Accuracy of airplane: 78% (787/1000)
Test Accuracy of automobile: 86% (867/1000)
Test Accuracy of bird: 64% (649/1000)
Test Accuracy of cat: 55% (556/1000)
Test Accuracy of deer: 68% (685/1000)
Test Accuracy of dog: 67% (675/1000)
Test Accuracy of frog: 80% (809/1000)
Test Accuracy of horse: 82% (827/1000)
Test Accuracy of ship: 89% (894/1000)
Test Accuracy of truck: 81% (814/1000)
Test Accuracy (Overall): 75% (7563/10000)
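###Markdown
A hedged aside on the evaluation loops above: wrapping the forward passes in `torch.no_grad()` skips gradient bookkeeping and reduces memory use during validation and testing. The snippet below shows the pattern on a single test batch as a sketch, not as a change to the original cells.
###Code
# sketch only: gradient-free evaluation of one test batch
model.eval()
with torch.no_grad():
    data, target = next(iter(test_loader))
    if train_on_gpu:
        data, target = data.cuda(), target.cuda()
    output = model(data)
    _, pred = torch.max(output, 1)
    print('batch accuracy:', (pred == target).float().mean().item())
###Output
_____no_output_____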
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images.cpu()[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks
---
In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
---
Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
Augmentation
In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html).
TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.
This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(30),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More Detail
Here, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
---
Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.
The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
Output volume for a convolutional layer
To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for how many neurons fit across the output width is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value of the **learning rate**, as it determines how quickly your model converges to a small error.
TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
---
Train the Network
Remember to look at how the training and validation loss decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
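    # note: this run divides by len(...dataset) (all 50,000 training images) rather than
    # len(...sampler), so these averages are scaled down relative to the runs above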
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
---
Test the Trained Network
Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
    imshow(images.cpu()[idx])  # move the batch back to the CPU before converting for display
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks
---
In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below.
Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)
Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform, and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
---
Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
Augmentation
In this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html).
TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.
This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
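###Markdown
One way to act on the TODO above -- a hedged sketch rather than the notebook's own solution: torchvision also provides `transforms.RandomCrop`, and padding-then-cropping is a common extra augmentation for 32x32 images. The cell below only defines an alternative pipeline; to use it you would pass it as the training transform in place of `transform`.
###Code
# sketch only: an extended training transform with random cropping added
import torchvision.transforms as transforms

augmented_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # pad by 4 pixels, then take a random 32x32 crop
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____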
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html) This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layer: To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
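A quick numeric check of that formula (a small helper of our own, not part of the starter code): with 3x3 kernels, stride 1 and padding 1 each convolution keeps the spatial size, and each 2x2 max pool halves it, which gives the 32 -> 16 -> 8 -> 4 progression that the `64 * 4 * 4` flatten in the model below relies on.
###Code
def conv_output_size(W, F, S=1, P=0):
    """Spatial output size of a conv layer: (W - F + 2P) / S + 1."""
    return (W - F + 2 * P) // S + 1

size = 32
for block in range(3):
    size = conv_output_size(size, F=3, S=1, P=1)  # 3x3 conv, padding 1: size unchanged
    size = size // 2                              # 2x2 max pool: size halved
    print('after conv/pool block', block + 1, '->', size)
# expected: 16, 8, 4 -- so the flattened feature vector has 64 * 4 * 4 entries
###Output
_____no_output_____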
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html) Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
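###Markdown
To "look at how the training and validation loss decreases over time" more easily, you can plot the two curves. A minimal sketch of our own: it assumes you also collect the per-epoch `train_loss` and `valid_loss` values into two lists (hypothetical names `train_losses` / `valid_losses`) inside the loop above, then call the helper once training is done.
###Code
import matplotlib.pyplot as plt

def plot_loss_curves(train_losses, valid_losses):
    """Plot per-epoch training and validation loss on a single axis."""
    epochs = range(1, len(train_losses) + 1)
    plt.plot(epochs, train_losses, label='training loss')
    plt.plot(epochs, valid_losses, label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('average loss')
    plt.legend()
    plt.show()

# usage (hypothetical): plot_loss_curves(train_losses, valid_losses)
###Output
_____no_output_____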
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
images = images.cpu().numpy()  # move the images back to the CPU and convert to numpy for plotting
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html) Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
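###Markdown
A quick sanity check of the split above (our addition): with `valid_size = 0.2` the sampler indices should cover 40,000 training and 10,000 validation images, with no overlap between the two sets.
###Code
# verify the sizes of the training/validation index sets and that they are disjoint
print(len(train_idx), len(valid_idx))
print(set(train_idx).isdisjoint(valid_idx))   # expect True
###Output
_____no_output_____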
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html) This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output. A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layer: To compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
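###Markdown
As a quick sanity check (our addition, not part of the exercise), you can count how many trainable parameters the model printed above actually has; most of them sit in the first fully-connected layer.
###Code
# count the trainable parameters of the CNN defined above
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('trainable parameters:', n_params)
###Output
_____no_output_____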
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html) Decide on a loss and optimization function that is best suited for this classification task. The code examples linked above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
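###Markdown
If plain SGD converges too slowly for your taste, two common alternatives are SGD with momentum and Adam with a smaller learning rate. The cell below is only a sketch of those options; the training results further down were produced with the plain `SGD(lr=0.01)` defined above.
###Code
# sketch of alternative optimizers -- disabled by default so the run below is unchanged
use_alternative = False
if use_alternative:
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    # or an adaptive method:
    # optimizer = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____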
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
    train_loss = train_loss/len(train_loader.sampler)   # average over the samples the training sampler actually covers
    valid_loss = valid_loss/len(valid_loader.sampler)   # average over the samples the validation sampler actually covers
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.684407 Validation Loss: 0.364063
Validation loss decreased (inf --> 0.364063). Saving model ...
Epoch: 2 Training Loss: 1.371414 Validation Loss: 0.311776
Validation loss decreased (0.364063 --> 0.311776). Saving model ...
Epoch: 3 Training Loss: 1.218783 Validation Loss: 0.280549
Validation loss decreased (0.311776 --> 0.280549). Saving model ...
Epoch: 4 Training Loss: 1.131205 Validation Loss: 0.264552
Validation loss decreased (0.280549 --> 0.264552). Saving model ...
Epoch: 5 Training Loss: 1.071300 Validation Loss: 0.249352
Validation loss decreased (0.264552 --> 0.249352). Saving model ...
Epoch: 6 Training Loss: 1.015548 Validation Loss: 0.237172
Validation loss decreased (0.249352 --> 0.237172). Saving model ...
Epoch: 7 Training Loss: 0.968873 Validation Loss: 0.223524
Validation loss decreased (0.237172 --> 0.223524). Saving model ...
Epoch: 8 Training Loss: 0.926542 Validation Loss: 0.214586
Validation loss decreased (0.223524 --> 0.214586). Saving model ...
Epoch: 9 Training Loss: 0.889676 Validation Loss: 0.206086
Validation loss decreased (0.214586 --> 0.206086). Saving model ...
Epoch: 10 Training Loss: 0.855688 Validation Loss: 0.204517
Validation loss decreased (0.206086 --> 0.204517). Saving model ...
Epoch: 11 Training Loss: 0.827312 Validation Loss: 0.193199
Validation loss decreased (0.204517 --> 0.193199). Saving model ...
Epoch: 12 Training Loss: 0.803023 Validation Loss: 0.187411
Validation loss decreased (0.193199 --> 0.187411). Saving model ...
Epoch: 13 Training Loss: 0.774376 Validation Loss: 0.182259
Validation loss decreased (0.187411 --> 0.182259). Saving model ...
Epoch: 14 Training Loss: 0.755903 Validation Loss: 0.175893
Validation loss decreased (0.182259 --> 0.175893). Saving model ...
Epoch: 15 Training Loss: 0.734230 Validation Loss: 0.170841
Validation loss decreased (0.175893 --> 0.170841). Saving model ...
Epoch: 16 Training Loss: 0.714479 Validation Loss: 0.167823
Validation loss decreased (0.170841 --> 0.167823). Saving model ...
Epoch: 17 Training Loss: 0.690665 Validation Loss: 0.166724
Validation loss decreased (0.167823 --> 0.166724). Saving model ...
Epoch: 18 Training Loss: 0.676282 Validation Loss: 0.161710
Validation loss decreased (0.166724 --> 0.161710). Saving model ...
Epoch: 19 Training Loss: 0.661176 Validation Loss: 0.161106
Validation loss decreased (0.161710 --> 0.161106). Saving model ...
Epoch: 20 Training Loss: 0.645175 Validation Loss: 0.155103
Validation loss decreased (0.161106 --> 0.155103). Saving model ...
Epoch: 21 Training Loss: 0.637313 Validation Loss: 0.157526
Epoch: 22 Training Loss: 0.624120 Validation Loss: 0.157021
Epoch: 23 Training Loss: 0.612902 Validation Loss: 0.150273
Validation loss decreased (0.155103 --> 0.150273). Saving model ...
Epoch: 24 Training Loss: 0.601285 Validation Loss: 0.150101
Validation loss decreased (0.150273 --> 0.150101). Saving model ...
Epoch: 25 Training Loss: 0.591319 Validation Loss: 0.146002
Validation loss decreased (0.150101 --> 0.146002). Saving model ...
Epoch: 26 Training Loss: 0.581394 Validation Loss: 0.145936
Validation loss decreased (0.146002 --> 0.145936). Saving model ...
Epoch: 27 Training Loss: 0.571619 Validation Loss: 0.146214
Epoch: 28 Training Loss: 0.558487 Validation Loss: 0.145059
Validation loss decreased (0.145936 --> 0.145059). Saving model ...
Epoch: 29 Training Loss: 0.554171 Validation Loss: 0.141117
Validation loss decreased (0.145059 --> 0.141117). Saving model ...
Epoch: 30 Training Loss: 0.547580 Validation Loss: 0.140780
Validation loss decreased (0.141117 --> 0.140780). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.711688
Test Accuracy of airplane: 80% (809/1000)
Test Accuracy of automobile: 88% (881/1000)
Test Accuracy of bird: 65% (653/1000)
Test Accuracy of cat: 64% (645/1000)
Test Accuracy of deer: 61% (615/1000)
Test Accuracy of dog: 66% (665/1000)
Test Accuracy of frog: 80% (801/1000)
Test Accuracy of horse: 82% (825/1000)
Test Accuracy of ship: 84% (846/1000)
Test Accuracy of truck: 81% (814/1000)
Test Accuracy (Overall): 75% (7554/10000)
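###Markdown
Per-class accuracy tells you which classes are hard, but not what they get confused with (cats and dogs, for instance). A minimal confusion-matrix sketch of our own, reusing the model and test loader from above; it runs one extra pass over the test set.
###Code
# 10x10 confusion matrix: rows are true classes, columns are predicted classes
confusion = torch.zeros(10, 10, dtype=torch.long)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.cpu(), pred.cpu()):
            confusion[t, p] += 1
print(confusion)
###Output
_____no_output_____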
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
images = images.cpu().numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database. The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html) Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPUs for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
train_transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
test_transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=train_transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=test_transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
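# (clarifying note) this inverts Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)):
# x_norm = (x - 0.5) / 0.5, so x = 0.5 * x_norm + 0.5 = x_norm / 2 + 0.5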
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
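# Sanity check of the output-volume formula (W - F + 2P)/S + 1 for this architecture
# (small sketch added for illustration; conv_out is not part of the original notebook)
def conv_out(W, F, S=1, P=0):
    return (W - F + 2 * P) // S + 1
size = 32
for _ in range(3):  # three conv + pool stages
    size = conv_out(size, F=3, S=1, P=1)  # 3x3 conv with padding 1 keeps the size
    size = size // 2  # 2x2 max pool halves it
assert size == 4  # matches the 64 * 4 * 4 input expected by fc1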
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
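# alternatives worth experimenting with (suggestions only, not the notebook's chosen setup):
# optim.SGD(model.parameters(), lr=0.01, momentum=0.9) or optim.Adam(model.parameters(), lr=0.001)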
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
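# note: both averages divide by the full dataset length even though the samplers only draw
# ~80% / ~20% of it; dividing by len(train_loader.sampler) and len(valid_loader.sampler)
# would give true per-sample means (the relative trend per epoch is the same either way)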
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
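# note: range(batch_size) assumes every test batch is full; with 10,000 test images and
# batch_size=20 that holds exactly, but range(target.size(0)) would be safer in general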
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
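# optional (illustrative sketch, not part of the original notebook): a step learning-rate
# schedule can help SGD converge further once the validation loss plateaus
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
# scheduler.step() would then be called once per epoch in the training loop below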
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
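# (optional extension, not in the original) saving optimizer.state_dict() alongside the
# model state would allow training to be resumed later, e.g.
# torch.save({'model': model.state_dict(), 'optim': optimizer.state_dict()}, 'checkpoint.pt')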
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
print(batch_size)
print(train_loader)
# obtain one batch of training images
dataiter = iter(train_loader)
print(dataiter)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
20
<torch.utils.data.dataloader.DataLoader object at 0x7f6cfe78a2b0>
<torch.utils.data.dataloader._DataLoaderIter object at 0x7f6c8d7c8ef0>
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
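# quick capacity check (sketch added for illustration, not in the original notebook):
# for this architecture count_parameters(Net()) comes to 541,094 trainable parameters,
# most of them in fc1 (1024 * 500 weights)
def count_parameters(m):
    return sum(p.numel() for p in m.parameters() if p.requires_grad)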
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.638317 Validation Loss: 0.352579
Validation loss decreased (inf --> 0.352579). Saving model ...
Epoch: 2 Training Loss: 1.354590 Validation Loss: 0.302874
Validation loss decreased (0.352579 --> 0.302874). Saving model ...
Epoch: 3 Training Loss: 1.219148 Validation Loss: 0.279294
Validation loss decreased (0.302874 --> 0.279294). Saving model ...
Epoch: 4 Training Loss: 1.137867 Validation Loss: 0.261197
Validation loss decreased (0.279294 --> 0.261197). Saving model ...
Epoch: 5 Training Loss: 1.070979 Validation Loss: 0.251508
Validation loss decreased (0.261197 --> 0.251508). Saving model ...
Epoch: 6 Training Loss: 1.016159 Validation Loss: 0.231878
Validation loss decreased (0.251508 --> 0.231878). Saving model ...
Epoch: 7 Training Loss: 0.963271 Validation Loss: 0.224220
Validation loss decreased (0.231878 --> 0.224220). Saving model ...
Epoch: 8 Training Loss: 0.916982 Validation Loss: 0.210285
Validation loss decreased (0.224220 --> 0.210285). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
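One detail worth making explicit (a hedged aside with toy tensors, not part of the original notebook): `nn.CrossEntropyLoss` applies `log_softmax` internally, which is why the `forward` method above returns raw scores rather than probabilities.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
logits = torch.randn(4, 10)           # a fake batch of raw class scores
targets = torch.tensor([3, 0, 9, 1])  # fake labels
ce = nn.CrossEntropyLoss()(logits, targets)
manual = F.nll_loss(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, manual))     # True: CrossEntropyLoss == log_softmax + NLLLoss
###Output
_____no_output_____
###Markdown
With that settled, the actual loss and optimizer for this notebook are defined below.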
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
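One optional refinement of the validation pass (a sketch only; the training cell below is unchanged): wrapping it in `torch.no_grad()` skips gradient bookkeeping, which saves memory and time without changing the computed losses.
###Code
# sketch: same validation step as below, but without building the autograd graph
valid_loss = 0.0
model.eval()
with torch.no_grad():
    for data, target in valid_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        output = model(data)
        valid_loss += criterion(output, target).item() * data.size(0)
###Output
_____no_output_____
###Markdown
The full training loop, with the validation pass written out as in the starter code, follows.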
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
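A defensive variant of the cell below (a sketch, not required for this run): passing `map_location` lets the same checkpoint file be restored on a machine without a GPU.
###Code
# sketch: load the saved weights onto the CPU explicitly, then restore them
state_dict = torch.load('model_augmented.pt', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)
###Output
_____no_output_____
###Markdown
Since this notebook was run with CUDA available, the plain form below is sufficient.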
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(target.size(0)):  # use the actual batch size (robust if the last batch is smaller)
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the batch for display
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
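One way to extend the TODO above (a sketch; the cell below sticks to flips and small rotations): random crops with a few pixels of padding are a common companion augmentation for CIFAR-10.
###Code
# sketch of an alternative training transform (name is illustrative, not used below)
import torchvision.transforms as transforms
augmented_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # pad to 40x40, crop a random 32x32 window
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
###Output
_____no_output_____
###Markdown
The data loading actually used in this run is defined next.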
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=500, bias=True)
(fc2): Linear(in_features=500, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
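Two common alternatives for the TODO above (a sketch; the run below keeps plain SGD with lr=0.01): adding momentum to SGD, or switching to Adam with a smaller learning rate.
###Code
import torch.optim as optim
# sketch: two drop-in alternatives (defined here only for illustration, not used in the run below)
sgd_with_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adam = optim.Adam(model.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
The loss and optimizer used for this run are below.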
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)  # average over the samples each sampler actually draws
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.656812 Validation Loss: 0.365488
Validation loss decreased (inf --> 0.365488). Saving model ...
Epoch: 2 Training Loss: 1.345370 Validation Loss: 0.314175
Validation loss decreased (0.365488 --> 0.314175). Saving model ...
Epoch: 3 Training Loss: 1.212607 Validation Loss: 0.295734
Validation loss decreased (0.314175 --> 0.295734). Saving model ...
Epoch: 4 Training Loss: 1.135926 Validation Loss: 0.276584
Validation loss decreased (0.295734 --> 0.276584). Saving model ...
Epoch: 5 Training Loss: 1.070200 Validation Loss: 0.264104
Validation loss decreased (0.276584 --> 0.264104). Saving model ...
Epoch: 6 Training Loss: 1.015063 Validation Loss: 0.249558
Validation loss decreased (0.264104 --> 0.249558). Saving model ...
Epoch: 7 Training Loss: 0.960978 Validation Loss: 0.238545
Validation loss decreased (0.249558 --> 0.238545). Saving model ...
Epoch: 8 Training Loss: 0.919527 Validation Loss: 0.226336
Validation loss decreased (0.238545 --> 0.226336). Saving model ...
Epoch: 9 Training Loss: 0.879474 Validation Loss: 0.226361
Epoch: 10 Training Loss: 0.851672 Validation Loss: 0.222212
Validation loss decreased (0.226336 --> 0.222212). Saving model ...
Epoch: 11 Training Loss: 0.817540 Validation Loss: 0.210881
Validation loss decreased (0.222212 --> 0.210881). Saving model ...
Epoch: 12 Training Loss: 0.791703 Validation Loss: 0.203747
Validation loss decreased (0.210881 --> 0.203747). Saving model ...
Epoch: 13 Training Loss: 0.770142 Validation Loss: 0.197745
Validation loss decreased (0.203747 --> 0.197745). Saving model ...
Epoch: 14 Training Loss: 0.746529 Validation Loss: 0.195186
Validation loss decreased (0.197745 --> 0.195186). Saving model ...
Epoch: 15 Training Loss: 0.721853 Validation Loss: 0.189458
Validation loss decreased (0.195186 --> 0.189458). Saving model ...
Epoch: 16 Training Loss: 0.704981 Validation Loss: 0.187049
Validation loss decreased (0.189458 --> 0.187049). Saving model ...
Epoch: 17 Training Loss: 0.687266 Validation Loss: 0.181188
Validation loss decreased (0.187049 --> 0.181188). Saving model ...
Epoch: 20 Training Loss: 0.643191 Validation Loss: 0.174395
Validation loss decreased (0.177552 --> 0.174395). Saving model ...
Epoch: 21 Training Loss: 0.629806 Validation Loss: 0.176519
Epoch: 22 Training Loss: 0.617813 Validation Loss: 0.170081
Validation loss decreased (0.174395 --> 0.170081). Saving model ...
Epoch: 23 Training Loss: 0.607368 Validation Loss: 0.173193
Epoch: 24 Training Loss: 0.595777 Validation Loss: 0.171224
Epoch: 25 Training Loss: 0.589411 Validation Loss: 0.166990
Validation loss decreased (0.170081 --> 0.166990). Saving model ...
Epoch: 26 Training Loss: 0.576741 Validation Loss: 0.166768
Validation loss decreased (0.166990 --> 0.166768). Saving model ...
Epoch: 27 Training Loss: 0.568565 Validation Loss: 0.169211
Epoch: 28 Training Loss: 0.559056 Validation Loss: 0.167645
Epoch: 29 Training Loss: 0.549626 Validation Loss: 0.165719
Validation loss decreased (0.166768 --> 0.165719). Saving model ...
Epoch: 30 Training Loss: 0.539699 Validation Loss: 0.163262
Validation loss decreased (0.165719 --> 0.163262). Saving model ...
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(target.size(0)):  # use the actual batch size (robust if the last batch is smaller)
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.822158
Test Accuracy of airplane: 73% (735/1000)
Test Accuracy of automobile: 78% (784/1000)
Test Accuracy of bird: 51% (517/1000)
Test Accuracy of cat: 54% (549/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 59% (597/1000)
Test Accuracy of frog: 77% (779/1000)
Test Accuracy of horse: 74% (749/1000)
Test Accuracy of ship: 85% (852/1000)
Test Accuracy of truck: 84% (841/1000)
Test Accuracy (Overall): 71% (7135/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the batch for display
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
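As an aside (a sketch that is not run here): the `(0.5, 0.5, 0.5)` normalization constants used below are generic; the actual per-channel mean and standard deviation of CIFAR-10 can be estimated directly from the training images.
###Code
# sketch: estimate per-channel statistics of CIFAR-10 (values come out near
# mean ~ (0.49, 0.48, 0.45) and std ~ (0.25, 0.24, 0.26))
import torch
from torchvision import datasets, transforms
raw_data = datasets.CIFAR10('data', train=True, download=True,
                            transform=transforms.ToTensor())
raw_loader = torch.utils.data.DataLoader(raw_data, batch_size=1000)
count = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for images, _ in raw_loader:
    count += images.size(0)
    channel_sum += images.sum(dim=[0, 2, 3]) / (32 * 32)
    channel_sq_sum += (images ** 2).sum(dim=[0, 2, 3]) / (32 * 32)
mean = channel_sum / count
std = (channel_sq_sum / count - mean ** 2).sqrt()
print(mean, std)
###Output
_____no_output_____
###Markdown
The loaders for this run, which keep the generic constants, are built below.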
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(10),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.htmlconv2d), which can be thought of as stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.htmlmaxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward metwork behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layers (input = 32x32x3 image tensor)
self.conv1 = nn.Conv2d( 3, 16, 3, padding=1)
self.conv2 = nn.Conv2d(16, 16, 3, padding=1)
self.conv3 = nn.Conv2d(16, 32, 3, padding=1)
self.conv4 = nn.Conv2d(32, 32, 3, padding=1)
self.conv5 = nn.Conv2d(32, 64, 3, padding=1)
self.conv6 = nn.Conv2d(64, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layers (4*4 == H*W from 32/2/2/2 by pooling)
self.fc1 = nn.Linear(64*4*4, 512)
self.fc2 = nn.Linear(512, 128)
self.fc3 = nn.Linear(128, 10)
# dropout
self.dropout = nn.Dropout(p=0.25)
def forward(self, x):
# convolutional and max pooling layers
x = F.relu(self.conv1(x))
x = self.pool(F.relu(self.conv2(x)))
x = F.relu(self.conv3(x))
x = self.pool(F.relu(self.conv4(x)))
x = F.relu(self.conv5(x))
x = self.pool(F.relu(self.conv6(x)))
# linear layers
x = x.view(-1, 64*4*4)
x = F.relu(self.fc1(self.dropout(x)))
x = F.relu(self.fc2(self.dropout(x)))
x = self.fc3(self.dropout(x))
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
Net(
(conv1): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv6): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1024, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=128, bias=True)
(fc3): Linear(in_features=128, out_features=10, bias=True)
(dropout): Dropout(p=0.25)
)
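###Markdown
A quick shape check (a sketch added here): pushing a dummy CIFAR-sized image through the convolutional part of the model above confirms the `64*4*4` flatten size used by `fc1`.
###Code
# sketch: mirror the conv/pool part of forward() on a zero image and inspect the shape
with torch.no_grad():
    x = torch.zeros(1, 3, 32, 32)
    if train_on_gpu:
        x = x.cuda()
    x = F.relu(model.conv1(x))
    x = model.pool(F.relu(model.conv2(x)))
    x = F.relu(model.conv3(x))
    x = model.pool(F.relu(model.conv4(x)))
    x = F.relu(model.conv5(x))
    x = model.pool(F.relu(model.conv6(x)))
    print(x.shape)  # expected: torch.Size([1, 64, 4, 4])
###Output
_____no_output_____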
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above, may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
_____no_output_____
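###Markdown
A possible refinement (a sketch; it is not used in the 50-epoch run below): a step learning-rate scheduler would shrink the SGD step size as training progresses.
###Code
# sketch: halve the learning rate every 15 epochs; scheduler.step() would be
# called once per epoch inside the training loop
from torch.optim import lr_scheduler
scheduler = lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.5)
###Output
_____no_output_____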
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 50
valid_loss_min = np.inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.sampler)  # average over the samples each sampler actually draws
valid_loss = valid_loss/len(valid_loader.sampler)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.842378 Validation Loss: 0.460530
Validation loss decreased (inf --> 0.460530). Saving model ...
Epoch: 2 Training Loss: 1.842197 Validation Loss: 0.460553
Epoch: 3 Training Loss: 1.842215 Validation Loss: 0.460505
Validation loss decreased (0.460530 --> 0.460505). Saving model ...
Epoch: 4 Training Loss: 1.842163 Validation Loss: 0.460515
Epoch: 5 Training Loss: 1.842072 Validation Loss: 0.460478
Validation loss decreased (0.460505 --> 0.460478). Saving model ...
Epoch: 6 Training Loss: 1.841842 Validation Loss: 0.460330
Validation loss decreased (0.460478 --> 0.460330). Saving model ...
Epoch: 7 Training Loss: 1.836805 Validation Loss: 0.449019
Validation loss decreased (0.460330 --> 0.449019). Saving model ...
Epoch: 8 Training Loss: 1.633820 Validation Loss: 0.378595
Validation loss decreased (0.449019 --> 0.378595). Saving model ...
Epoch: 9 Training Loss: 1.454447 Validation Loss: 0.336559
Validation loss decreased (0.378595 --> 0.336559). Saving model ...
Epoch: 10 Training Loss: 1.349750 Validation Loss: 0.312233
Validation loss decreased (0.336559 --> 0.312233). Saving model ...
Epoch: 11 Training Loss: 1.280820 Validation Loss: 0.304862
Validation loss decreased (0.312233 --> 0.304862). Saving model ...
Epoch: 12 Training Loss: 1.229178 Validation Loss: 0.289841
Validation loss decreased (0.304862 --> 0.289841). Saving model ...
Epoch: 13 Training Loss: 1.177532 Validation Loss: 0.277040
Validation loss decreased (0.289841 --> 0.277040). Saving model ...
Epoch: 14 Training Loss: 1.127174 Validation Loss: 0.259038
Validation loss decreased (0.277040 --> 0.259038). Saving model ...
Epoch: 15 Training Loss: 1.077449 Validation Loss: 0.245929
Validation loss decreased (0.259038 --> 0.245929). Saving model ...
Epoch: 16 Training Loss: 1.031487 Validation Loss: 0.237574
Validation loss decreased (0.245929 --> 0.237574). Saving model ...
Epoch: 17 Training Loss: 0.984346 Validation Loss: 0.219962
Validation loss decreased (0.237574 --> 0.219962). Saving model ...
Epoch: 18 Training Loss: 0.939384 Validation Loss: 0.221538
Epoch: 19 Training Loss: 0.903095 Validation Loss: 0.207217
Validation loss decreased (0.219962 --> 0.207217). Saving model ...
Epoch: 20 Training Loss: 0.869717 Validation Loss: 0.195827
Validation loss decreased (0.207217 --> 0.195827). Saving model ...
Epoch: 21 Training Loss: 0.837897 Validation Loss: 0.191226
Validation loss decreased (0.195827 --> 0.191226). Saving model ...
Epoch: 22 Training Loss: 0.804888 Validation Loss: 0.190055
Validation loss decreased (0.191226 --> 0.190055). Saving model ...
Epoch: 23 Training Loss: 0.782462 Validation Loss: 0.174228
Validation loss decreased (0.190055 --> 0.174228). Saving model ...
Epoch: 24 Training Loss: 0.755410 Validation Loss: 0.175404
Epoch: 25 Training Loss: 0.735316 Validation Loss: 0.163515
Validation loss decreased (0.174228 --> 0.163515). Saving model ...
Epoch: 26 Training Loss: 0.710704 Validation Loss: 0.162370
Validation loss decreased (0.163515 --> 0.162370). Saving model ...
Epoch: 27 Training Loss: 0.689230 Validation Loss: 0.159367
Validation loss decreased (0.162370 --> 0.159367). Saving model ...
Epoch: 28 Training Loss: 0.678445 Validation Loss: 0.157606
Validation loss decreased (0.159367 --> 0.157606). Saving model ...
Epoch: 29 Training Loss: 0.658007 Validation Loss: 0.148904
Validation loss decreased (0.157606 --> 0.148904). Saving model ...
Epoch: 30 Training Loss: 0.640387 Validation Loss: 0.152154
Epoch: 31 Training Loss: 0.621024 Validation Loss: 0.145929
Validation loss decreased (0.148904 --> 0.145929). Saving model ...
Epoch: 32 Training Loss: 0.610787 Validation Loss: 0.144632
Validation loss decreased (0.145929 --> 0.144632). Saving model ...
Epoch: 33 Training Loss: 0.599607 Validation Loss: 0.141144
Validation loss decreased (0.144632 --> 0.141144). Saving model ...
Epoch: 34 Training Loss: 0.585471 Validation Loss: 0.139650
Validation loss decreased (0.141144 --> 0.139650). Saving model ...
Epoch: 35 Training Loss: 0.576838 Validation Loss: 0.138787
Validation loss decreased (0.139650 --> 0.138787). Saving model ...
Epoch: 36 Training Loss: 0.561062 Validation Loss: 0.139849
Epoch: 37 Training Loss: 0.553572 Validation Loss: 0.133136
Validation loss decreased (0.138787 --> 0.133136). Saving model ...
Epoch: 38 Training Loss: 0.541815 Validation Loss: 0.130927
Validation loss decreased (0.133136 --> 0.130927). Saving model ...
Epoch: 39 Training Loss: 0.534079 Validation Loss: 0.128684
Validation loss decreased (0.130927 --> 0.128684). Saving model ...
Epoch: 40 Training Loss: 0.523722 Validation Loss: 0.134889
Epoch: 41 Training Loss: 0.515368 Validation Loss: 0.125356
Validation loss decreased (0.128684 --> 0.125356). Saving model ...
Epoch: 42 Training Loss: 0.509255 Validation Loss: 0.129859
Epoch: 43 Training Loss: 0.501581 Validation Loss: 0.126276
Epoch: 44 Training Loss: 0.492593 Validation Loss: 0.125028
Validation loss decreased (0.125356 --> 0.125028). Saving model ...
Epoch: 45 Training Loss: 0.485370 Validation Loss: 0.125097
Epoch: 46 Training Loss: 0.477438 Validation Loss: 0.122839
Validation loss decreased (0.125028 --> 0.122839). Saving model ...
Epoch: 47 Training Loss: 0.472412 Validation Loss: 0.120813
Validation loss decreased (0.122839 --> 0.120813). Saving model ...
Epoch: 48 Training Loss: 0.466368 Validation Loss: 0.119521
Validation loss decreased (0.120813 --> 0.119521). Saving model ...
Epoch: 49 Training Loss: 0.456266 Validation Loss: 0.123663
Epoch: 50 Training Loss: 0.452240 Validation Loss: 0.119887
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
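Beyond per-class accuracy, a confusion matrix shows which classes get mixed up with which (a sketch using the objects already defined above; it is not part of the original evaluation).
###Code
# sketch: rows are true classes, columns are predicted classes
import numpy as np
import torch
confusion = np.zeros((10, 10), dtype=int)
model.eval()
with torch.no_grad():
    for data, target in test_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        _, pred = torch.max(model(data), 1)
        for t, p in zip(target.cpu().numpy(), pred.cpu().numpy()):
            confusion[t, p] += 1
print(confusion)
###Output
_____no_output_____
###Markdown
The standard per-class evaluation used for this notebook follows.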
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(target.size(0)):  # use the actual batch size (robust if the last batch is smaller)
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.622506
Test Accuracy of airplane: 83% (834/1000)
Test Accuracy of automobile: 88% (885/1000)
Test Accuracy of bird: 62% (626/1000)
Test Accuracy of cat: 59% (590/1000)
Test Accuracy of deer: 73% (732/1000)
Test Accuracy of dog: 70% (709/1000)
Test Accuracy of frog: 89% (898/1000)
Test Accuracy of horse: 85% (855/1000)
Test Accuracy of ship: 83% (838/1000)
Test Accuracy of truck: 87% (876/1000)
Test Accuracy (Overall): 78% (7843/10000)
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the batch for display
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images_np[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____
###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load and Augment the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data. AugmentationIn this cell, we perform some simple [data augmentation](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) by randomly flipping and rotating the given image data. We do this by defining a torchvision `transform`, and you can learn about all the transforms that are used to pre-process and augment data, [here](https://pytorch.org/docs/stable/torchvision/transforms.html). TODO: Look at the [transformation documentation](https://pytorch.org/docs/stable/torchvision/transforms.html); add more augmentation transforms, and see how your model performs.This type of data augmentation should add some positional variety to these images, so that when we train a model on this data, it will be robust in the face of geometric changes (i.e. it will recognize a ship, no matter which direction it is facing). It's recommended that you choose one or two transforms.
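One caveat about the cell below (and a hedged sketch of an alternative): it applies the same random augmentations to the test set as to the training set, so evaluation becomes slightly noisy. A common alternative keeps augmentation for training only and gives the test set a deterministic transform, as sketched here.
###Code
# sketch: separate train/test transforms (names here are illustrative, not used below)
import torchvision.transforms as transforms
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
test_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
###Output
_____no_output_____
###Markdown
The shared-transform version used in this notebook is below.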
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
transform_list = [
transforms.RandomHorizontalFlip(), # randomly flip and rotate
transforms.RandomRotation(15),
transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.2),
transforms.RandomVerticalFlip()]
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.RandomChoice(transform_list),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:
* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.
* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.
* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.
A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer.
TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior. The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure.
Output volume for a convolutional layer: to compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):
> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The formula for the output width is `(W−F+2P)/S+1`. For example, for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
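Before writing the model, it can help to check these sizes in code. The helper below is not part of the original notebook; it is just a small sketch of the formula above, used to sanity-check the layer sizes assumed in the architectures that follow.
###Code
# Not part of the original notebook: a tiny helper implementing (W - F + 2P)/S + 1
# to sanity-check the spatial sizes used in the models below.
def conv_output_size(w, f, s=1, p=0):
    """Output width/height of a conv or pooling layer (floor division)."""
    return (w - f + 2 * p) // s + 1

print(conv_output_size(7, 3, s=1, p=0))   # 5, as in the quoted example
print(conv_output_size(7, 3, s=2, p=0))   # 3, the stride-2 example
print(conv_output_size(32, 3, s=1, p=1))  # 32: a 3x3 conv with padding 1 keeps CIFAR images at 32x32
print(conv_output_size(32, 2, s=2, p=0))  # 16: a 2x2 maxpool with stride 2 halves the size
###Output
_____no_output_____
###Markdown
With the sizes checked, here is the starter architecture and a custom variant (`MyNet`):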
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the CNN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# convolutional layer (sees 32x32x3 image tensor)
self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
# convolutional layer (sees 16x16x16 tensor)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
# convolutional layer (sees 8x8x32 tensor)
self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
# max pooling layer
self.pool = nn.MaxPool2d(2, 2)
# linear layer (64 * 4 * 4 -> 500)
self.fc1 = nn.Linear(64 * 4 * 4, 500)
# linear layer (500 -> 10)
self.fc2 = nn.Linear(500, 10)
# dropout layer (p=0.25)
self.dropout = nn.Dropout(0.25)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = self.pool(F.relu(self.conv3(x)))
# flatten image input
x = x.view(-1, 64 * 4 * 4)
# add dropout layer
x = self.dropout(x)
# add 1st hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add 2nd hidden layer, with relu activation function
x = self.fc2(x)
return x
# define the CNN architecture
class MyNet(nn.Module):
def __init__(self):
super(MyNet, self).__init__()
# convolutional layers
self.conv1 = nn.Conv2d(3, 16, 5, padding=3)
self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
self.conv3 = nn.Conv2d(32, 64, 2, padding=1)
self.conv4 = nn.Conv2d(64, 128, 2, padding=1)
# dense layers
self.fc1 = nn.Linear(512, 128)
self.fc2 = nn.Linear(128, 32)
self.fc3 = nn.Linear(32, 10)
# dropout layer
self.drop = nn.Dropout(p=0.2)
# max pooling layers
self.pool4 = nn.MaxPool2d(4, 2)
self.pool2 = nn.MaxPool2d(2, 2)
def forward(self, x):
# add sequence of convolutional and max pooling layers
        x = self.pool4(F.relu(self.conv1(x))) # size = (3, 32, 32) -> (16, 34, 34) -> (16, 16, 16)
        x = self.pool2(F.relu(self.conv2(x))) # size = (16, 16, 16) -> (32, 16, 16) -> (32, 8, 8)
        x = self.pool2(F.relu(self.conv3(x))) # size = (32, 8, 8) -> (64, 9, 9) -> (64, 4, 4)
        x = self.pool2(F.relu(self.conv4(x))) # size = (64, 4, 4) -> (128, 5, 5) -> (128, 2, 2)
x = x.view(-1, 512)
x = self.drop(F.relu(self.fc1(x))) # size = (1, 512) -> (1, 128)
x = self.drop(F.relu(self.fc2(x))) # size = (1, 128) -> (1, 32)
x = self.fc3(x) # size = (1, 32) -> (1, 10)
return x
# create a complete CNN
model = MyNet()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
###Output
MyNet(
(conv1): Conv2d(3, 16, kernel_size=(5, 5), stride=(1, 1), padding=(3, 3))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv3): Conv2d(32, 64, kernel_size=(2, 2), stride=(1, 1), padding=(1, 1))
(conv4): Conv2d(64, 128, kernel_size=(2, 2), stride=(1, 1), padding=(1, 1))
(fc1): Linear(in_features=512, out_features=128, bias=True)
(fc2): Linear(in_features=128, out_features=32, bias=True)
(fc3): Linear(in_features=32, out_features=10, bias=True)
(drop): Dropout(p=0.2)
(pool4): MaxPool2d(kernel_size=4, stride=2, padding=0, dilation=1, ceil_mode=False)
(pool2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point: [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate**, as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
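As a starting point for this TODO, the cell below (not part of the original notebook) sketches two common alternatives you might compare against plain SGD; the variable names are only illustrative and are not used by the training loop later on.
###Code
# Not part of the original notebook: alternative optimizer configurations to experiment with.
# These are only constructed for illustration; the training below keeps plain SGD.
import torch.optim as optim

sgd_momentum = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # SGD with momentum
adam = optim.Adam(model.parameters(), lr=0.001)                      # adaptive per-parameter learning rates
print(sgd_momentum)
print(adam)
###Output
_____no_output_____
###Markdown
For the runs below we keep the simple setup of cross-entropy loss with plain SGD: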
###Code
import torch.optim as optim
learning_rate = 0.01
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
--- Train the Network. Remember to look at how the training and validation loss decrease over time; if the validation loss ever increases, it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(valid_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_augmented.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.842711 Validation Loss: 0.460229
Validation loss decreased (inf --> 0.460229). Saving model ...
Epoch: 2 Training Loss: 1.833979 Validation Loss: 0.449507
Validation loss decreased (0.460229 --> 0.449507). Saving model ...
Epoch: 3 Training Loss: 1.683570 Validation Loss: 0.392847
Validation loss decreased (0.449507 --> 0.392847). Saving model ...
Epoch: 4 Training Loss: 1.549801 Validation Loss: 0.373239
Validation loss decreased (0.392847 --> 0.373239). Saving model ...
Epoch: 5 Training Loss: 1.483168 Validation Loss: 0.353645
Validation loss decreased (0.373239 --> 0.353645). Saving model ...
Epoch: 6 Training Loss: 1.409747 Validation Loss: 0.333760
Validation loss decreased (0.353645 --> 0.333760). Saving model ...
Epoch: 7 Training Loss: 1.341141 Validation Loss: 0.315922
Validation loss decreased (0.333760 --> 0.315922). Saving model ...
Epoch: 8 Training Loss: 1.284577 Validation Loss: 0.305044
Validation loss decreased (0.315922 --> 0.305044). Saving model ...
Epoch: 9 Training Loss: 1.231253 Validation Loss: 0.294547
Validation loss decreased (0.305044 --> 0.294547). Saving model ...
Epoch: 10 Training Loss: 1.184640 Validation Loss: 0.281313
Validation loss decreased (0.294547 --> 0.281313). Saving model ...
Epoch: 11 Training Loss: 1.144069 Validation Loss: 0.270297
Validation loss decreased (0.281313 --> 0.270297). Saving model ...
Epoch: 12 Training Loss: 1.112879 Validation Loss: 0.267905
Validation loss decreased (0.270297 --> 0.267905). Saving model ...
Epoch: 13 Training Loss: 1.075963 Validation Loss: 0.254982
Validation loss decreased (0.267905 --> 0.254982). Saving model ...
Epoch: 14 Training Loss: 1.037124 Validation Loss: 0.249552
Validation loss decreased (0.254982 --> 0.249552). Saving model ...
Epoch: 15 Training Loss: 1.004062 Validation Loss: 0.241872
Validation loss decreased (0.249552 --> 0.241872). Saving model ...
Epoch: 16 Training Loss: 0.971737 Validation Loss: 0.229611
Validation loss decreased (0.241872 --> 0.229611). Saving model ...
Epoch: 17 Training Loss: 0.943584 Validation Loss: 0.227751
Validation loss decreased (0.229611 --> 0.227751). Saving model ...
Epoch: 18 Training Loss: 0.906143 Validation Loss: 0.216225
Validation loss decreased (0.227751 --> 0.216225). Saving model ...
Epoch: 19 Training Loss: 0.881923 Validation Loss: 0.218879
Epoch: 20 Training Loss: 0.857662 Validation Loss: 0.211918
Validation loss decreased (0.216225 --> 0.211918). Saving model ...
Epoch: 21 Training Loss: 0.839312 Validation Loss: 0.210851
Validation loss decreased (0.211918 --> 0.210851). Saving model ...
Epoch: 22 Training Loss: 0.819790 Validation Loss: 0.198523
Validation loss decreased (0.210851 --> 0.198523). Saving model ...
Epoch: 23 Training Loss: 0.803444 Validation Loss: 0.191653
Validation loss decreased (0.198523 --> 0.191653). Saving model ...
Epoch: 24 Training Loss: 0.782581 Validation Loss: 0.197010
Epoch: 25 Training Loss: 0.771426 Validation Loss: 0.187076
Validation loss decreased (0.191653 --> 0.187076). Saving model ...
Epoch: 26 Training Loss: 0.756101 Validation Loss: 0.194271
Epoch: 27 Training Loss: 0.745239 Validation Loss: 0.183930
Validation loss decreased (0.187076 --> 0.183930). Saving model ...
Epoch: 28 Training Loss: 0.730283 Validation Loss: 0.181949
Validation loss decreased (0.183930 --> 0.181949). Saving model ...
Epoch: 29 Training Loss: 0.716733 Validation Loss: 0.182407
Epoch: 30 Training Loss: 0.709625 Validation Loss: 0.185551
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_augmented.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained Network. Test your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for batch_idx, (data, target) in enumerate(test_loader):
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
    for i in range(len(target)):  # use the actual batch length, in case the last batch is smaller
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.919487
Test Accuracy of airplane: 74% (747/1000)
Test Accuracy of automobile: 76% (765/1000)
Test Accuracy of bird: 57% (574/1000)
Test Accuracy of cat: 64% (649/1000)
Test Accuracy of deer: 57% (577/1000)
Test Accuracy of dog: 48% (482/1000)
Test Accuracy of frog: 73% (733/1000)
Test Accuracy of horse: 71% (710/1000)
Test Accuracy of ship: 76% (760/1000)
Test Accuracy of truck: 79% (794/1000)
Test Accuracy (Overall): 67% (6791/10000)
###Markdown
The table below compares the instructor's results over 30 epochs, my results using her model with my transformations, and my results using my model with my transformations:
| Class | Instructor's model & transforms | Her model, my transforms | My model & transforms |
|---|---|---|---|
| Test Loss | 0.822158 | 0.773984 | 0.919487 |
| airplane | 73% (735/1000) | 78% (787/1000) | 74% (747/1000) |
| automobile | 78% (784/1000) | 83% (838/1000) | 76% (765/1000) |
| bird | 51% (517/1000) | 63% (631/1000) | 57% (574/1000) |
| cat | 54% (549/1000) | 50% (507/1000) | 64% (649/1000) |
| deer | 73% (732/1000) | 67% (675/1000) | 57% (577/1000) |
| dog | 59% (597/1000) | 60% (605/1000) | 48% (482/1000) |
| frog | 77% (779/1000) | 83% (832/1000) | 73% (733/1000) |
| horse | 74% (749/1000) | 76% (761/1000) | 71% (710/1000) |
| ship | 85% (852/1000) | 84% (840/1000) | 76% (760/1000) |
| truck | 84% (841/1000) | 83% (831/1000) | 79% (794/1000) |
| Overall | 71% (7135/10000) | 73% (7307/10000) | 67% (6791/10000) |
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
images_np = images.numpy()  # keep a CPU copy of the images for plotting below
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])  # integer division for the subplot spec
    imshow(images_np[idx])  # plot from the CPU copy (images may be on the GPU)
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____ |
Section 5/Sec-5.3-Average-consenus-over-regular-dyanmic-topology.ipynb | ###Markdown
Decentralized Gradient under regular dynamic topology
After introducing average consensus with the dynamic one-peer strategy on the exponential graph, it is natural to apply the same idea to the Adapt-With-Combine algorithm as well. The update is simply
$$ x_{k+1,i} = \sum_{j \in \mathcal{N}_i} w^{(k)}_{ij}x_{k,j} - \alpha \nabla f_i(x_{k,i}) $$
noting that we change from the static weights $w_{ij}$ to the dynamic $w^{(k)}_{ij}$, and that if the step size $\alpha$ is zero or $f$ is constant, the recursion reduces to the average consensus problem. To achieve this in BlueFog, you need to specify the weight arguments of the `neighbor_allreduce` function. Recall the full function signature:
```
bf.neighbor_allreduce(
    tensor: torch.Tensor,
    self_weight: Union[float, NoneType] = None,
    src_weights: Union[Dict[int, float], NoneType] = None,
    dst_weights: Union[List[int], Dict[int, float], NoneType] = None,
    enable_topo_check: bool = True,
    name: Union[str, NoneType] = None,
) -> torch.Tensor
```
Different from the static case, you need to give different `self_weight`, `src_weights`, and `dst_weights` in each iteration. **Note:** passing these arguments explicitly is necessary for dynamic topology; `dst_weights` accepts either a list of the ranks to send to or a map from rank to weight, while `src_weights` expects a map from each source rank to its weight.
###Code
import ipyparallel as ipp
rc = ipp.Client(profile="bluefog")
%%px
import torch
import bluefog.torch as bf
from bluefog.common import topology_util
bf.init()
print(f"Rank: {bf.rank()}, Size: {bf.size()}")
%%px
data_size = 100
seed = 1234
max_iters = 10
torch.random.manual_seed(seed * bf.rank())
x = torch.randn(data_size, dtype=torch.double)
x_bar = bf.allreduce(x, average=True)
mse = [torch.norm(x - x_bar, p=2) / torch.norm(x_bar, p=2)]
###Output
_____no_output_____
###Markdown
Since the design of BlueFog separates the communication functionality from the topology usage, you need to create these arguments explicitly for the `neighbor_allreduce` function. It is time to combine the communication function `neighbor_allreduce` with the topology utility `GetDynamicOnePeerSendRecvRanks` for dynamic topology usage.
In this section, we only consider [regular graphs](https://en.wikipedia.org/wiki/Regular_graph) -- a regular graph is a graph where each vertex has the same number of neighbors, i.e. every vertex has the same degree or valency. It is easy to see that on a $\tau$-regular graph, the `GetDynamicOnePeerSendRecvRanks` generator is $\tau$-periodic. Further, since `GetDynamicOnePeerSendRecvRanks` determines the send-to neighbor based on the relative difference of node indices, in every iteration each node has exactly one destination (send-to) node and one source (receive-from) node, which means $W^{(k)}$ is doubly stochastic (though not necessarily symmetric) at every iteration. This special property guarantees that our consensus algorithm still converges without bias. In the general (non-regular) case this does not hold; we will discuss that in the next subsection.
For now, let's look at the Exponential Two graph, which is a regular graph:
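Before running the averaging itself, the short cell below (not part of the original notebook) simply prints the (send-to, receive-from) pairs produced by the generator for a few steps on each rank, so you can see the $\tau$-periodic pattern directly.
###Code
%%px
# Not part of the original notebook: inspect the dynamic one-peer pattern for a few steps.
bf.set_topology(bf.ExponentialTwoGraph(bf.size()))
peek_gen = topology_util.GetDynamicOnePeerSendRecvRanks(bf.load_topology(), bf.rank())
for step in range(6):
    dst_ranks, src_ranks = next(peek_gen)
    print(f"rank {bf.rank()}, step {step}: send to {dst_ranks}, receive from {src_ranks}")
###Output
_____no_output_____
###Markdown
Now run the dynamic average consensus iterations themselves: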
###Code
%%px
bf.set_topology(bf.ExponentialTwoGraph(bf.size()))
dynamic_neighbor_allreduce_gen = bf.GetDynamicOnePeerSendRecvRanks(
bf.load_topology(), bf.rank()
)
for ite in range(max_iters):
dst_neighbors, src_neighbors = next(dynamic_neighbor_allreduce_gen)
uniform_weight = 1 / (len(src_neighbors) + 1)
src_weights = {r: uniform_weight for r in src_neighbors}
self_weight = uniform_weight
x = bf.neighbor_allreduce(
x,
name="x",
self_weight=self_weight,
src_weights=src_weights,
dst_weights=dst_neighbors,
enable_topo_check=True,
)
mse.append(torch.norm(x - x_bar, p=2) / torch.norm(x_bar, p=2))
import matplotlib.pyplot as plt  # needed on the client for plotting

mse = rc[0].pull("mse", block=True)
plt.semilogy(mse)
###Output
_____no_output_____
###Markdown
The figure above clearly illustrates the lemma from the previous subsection -- the algorithm reaches consensus within $\tau=$`log(bf.size())` steps. Next, we compare the performance of static and dynamic topology on a linear least-squares cost function.
###Code
import numpy as np  # needed on the client to generate the synthetic data

num_data = 600
dimension = 20
noise_level = 0.1
X = np.random.randn(num_data, dimension)
x_o = np.random.randn(dimension, 1)
ns = noise_level * np.random.randn(num_data, 1)
y = X.dot(x_o) + ns
# We know the optimal solution in close form solution.
x_opt = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
###Output
_____no_output_____
###Markdown
For simplicity, we generate the data (and its centralized closed-form solution) in one place and then distribute a partition of the data to each worker. Each worker holds only `1/num_workers` of the data.
###Code
import torch  # needed on the client to build the tensors we push to the engines

num_workers = len(rc.ids)
assert (
num_data % num_workers == 0
), "Please adjust number of data so that it is the multiples of number of workers"
x_opt_torch = torch.from_numpy(x_opt)
for i in range(num_workers):
X_partial = torch.from_numpy(X[i::num_workers])
y_partial = torch.from_numpy(y[i::num_workers])
rc[i].push({"X": X_partial, "y": y_partial, "x_opt": x_opt_torch}, block=True)
%px print(X.shape, y.shape)
%%px
max_iters = 100
mse_dynamic = []
mse_static = []
bf.set_topology(bf.ExponentialTwoGraph(bf.size()))
dynamic_neighbor_allreduce_gen = topology_util.GetDynamicOnePeerSendRecvRanks(
bf.load_topology(), bf.rank()
)
x_static = torch.randn(x_opt.shape, dtype=torch.double)
x_dynamic = torch.randn(x_opt.shape, dtype=torch.double)
step_size = 0.005
for ite in range(max_iters):
send_neighbors, recv_neighbors = next(dynamic_neighbor_allreduce_gen)
uniform_weight = 1 / (len(recv_neighbors) + 1)
neighbor_weights = {r: uniform_weight for r in recv_neighbors}
self_weight = uniform_weight
x_dynamic = x_dynamic - step_size * X.T.mm(X.mm(x_dynamic) - y)
x_dynamic = bf.neighbor_allreduce(
x_dynamic,
name="x_dynamic",
self_weight=self_weight,
src_weights=neighbor_weights,
dst_weights=send_neighbors,
enable_topo_check=True,
)
x_static = x_static - step_size * X.T.mm(X.mm(x_static) - y)
x_static = bf.neighbor_allreduce(x_static, name="x_static")
mse_dynamic.append(torch.norm(x_dynamic - x_opt, p=2) / torch.norm(x_opt, p=2))
mse_static.append(torch.norm(x_static - x_opt, p=2) / torch.norm(x_opt, p=2))
mse_dynamic, mse_static = rc[0].pull(["mse_dynamic", "mse_static"], block=True)
plt.semilogy(mse_dynamic, label="dynamic topology")
plt.semilogy(mse_static, label="static topology")
plt.legend()
###Output
_____no_output_____ |
content/sections/section4/notebook/cs109a_section_4.ipynb | ###Markdown
CS109A Introduction to Data Science Standard Section 4: Regularization and Model Selection**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven
###Code
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("http://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
For this section, our goal is to get you familiarized with Regularization in Multiple Linear Regression and to start thinking about Model and Hyper-Parameter Selection. Specifically, we will:- Load in the King County House Price Dataset- Perform some basic EDA- Split the data up into a training, **validation**, and test set (we'll see why we need a validation set)- Scale the variables (by standardizing them) and seeing why we need to do this- Make our multiple & polynomial regression models (like we did in the previous section)- Learn what **regularization** is and how it can help- Understand **ridge** and **lasso** regression- Get an introduction to **cross-validation** using RidgeCV and LassoCV
###Code
# Data and Stats packages
import numpy as np
import pandas as pd
pd.set_option('max_columns', 200)
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
EDA: House Prices Data From KaggleFor our dataset, we'll be using the house price dataset from [King County, WA](https://en.wikipedia.org/wiki/King_County,_Washington). The dataset is from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). The task is to build a regression model to **predict the price**, based on different attributes. First, let's do some EDA.
###Code
# Load the dataset
house_df = pd.read_csv('../data/kc_house_data.csv')
house_df = house_df.sample(frac=1, random_state=42)[0:4000]
print(house_df.shape)
print(house_df.dtypes)
house_df.head()
###Output
(4000, 21)
id int64
date object
price float64
bedrooms int64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
Now let's check for null values and look at the datatypes within the dataset.
###Code
house_df.info()
house_df.describe()
###Output
_____no_output_____
###Markdown
Let's choose a subset of columns here. **NOTE**: The way I'm selecting columns here is not principled and is just for convenience. In your homework assignments (and in real life), we expect you to choose columns more rigorously.
1. `bedrooms`
2. `bathrooms`
3. `sqft_living`
4. `sqft_lot`
5. `floors`
6. `sqft_above`
7. `sqft_basement`
8. `lat`
9. `long`
10. **`price`**: Our response variable
###Code
cols_of_interest = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long', 'price']
house_df = house_df[cols_of_interest]
# Convert house price to 1000s of dollars
house_df['price'] = house_df['price']/1000
###Output
_____no_output_____
###Markdown
Let's see how the response variable (`price`) is distributed
###Code
fig, ax = plt.subplots(figsize=(12,5))
ax.hist(house_df['price'], bins=100)
ax.set_title('Histogram of house price (in 1000s of dollars)');
# This takes a bit of time but is worth it!!
# sns.pairplot(house_df);
###Output
_____no_output_____
###Markdown
Train-Validation-Test Split
Up until this point, we have only had a train-test split. Why are we introducing a validation set? What's the point? This is the general idea:
1. **Training Set**: Data you have seen. You train different types of models with various different hyper-parameters and regularization parameters on this data.
2. **Validation Set**: Used to compare different models. We use this step to tune our hyper-parameters i.e. find the optimal set of hyper-parameters (such as $k$ for k-NN or our $\beta_i$ values or number of degrees of our polynomial for linear regression). Pick your best model here.
3. **Test Set**: Using the best model from the previous step, simply report the score e.g. R^2 score, MSE or any metric that you care about, of that model on your test set. **DON'T TUNE YOUR PARAMETERS HERE!** Why, I hear you ask? Because we want to know how our model might do on data it hasn't seen before. We don't have access to this data (because it may not exist yet) but the test set, which we haven't seen or touched so far, is a good way to mimic this new data.
Let's do 60% train, 20% validation, 20% test for this dataset.
###Code
from sklearn.model_selection import train_test_split
# first split the data into a train-test split and don't touch the test set yet
train_df, test_df = train_test_split(house_df, test_size=0.2, random_state=42)
# next, split the training set into a train-validation split
# the test-size is 0.25 since we are splitting 80% of the data into 20% and 60% overall
train_df, val_df = train_test_split(train_df, test_size=0.25, random_state=42)
print('Train Set: {0:0.2f}%'.format(100*train_df.size/house_df.size))
print('Validation Set: {0:0.2f}%'.format(100*val_df.size/house_df.size))
print('Test Set: {0:0.2f}%'.format(100*test_df.size/house_df.size))
###Output
Train Set: 60.00%
Validation Set: 20.00%
Test Set: 20.00%
###Markdown
ModelingIn the [last section](https://github.com/Harvard-IACS/2019-CS109A/tree/master/content/sections/section3), we went over the mechanics of Multiple Linear Regression and created models that had interaction terms and polynomial terms. Specifically, we dealt with the following sorts of models. $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_M x_M$$Let's adopt a similar process here and get a few different models. Creating a Design Matrix From our model setup in the equation in the previous section, we obtain the following: $$Y = \begin{bmatrix}y_1 \\y_2 \\\vdots \\y_n\end{bmatrix}, \quad X = \begin{bmatrix}x_{1,1} & x_{1,2} & \dots & x_{1,M} \\x_{2,1} & x_{2,2} & \dots & x_{2,M} \\\vdots & \vdots & \ddots & \vdots \\x_{n,1} & x_{n,2} & \dots & x_{n,M} \\\end{bmatrix}, \quad \beta = \begin{bmatrix}\beta_1 \\\beta_2 \\\vdots \\\beta_M\end{bmatrix}, \quad \epsilon = \begin{bmatrix}\epsilon_1 \\\epsilon_2 \\\vdots \\\epsilon_n\end{bmatrix},$$$X$ is an n$\times$M matrix: this is our **design matrix**, $\beta$ is an M-dimensional vector (an M$\times$1 matrix), and $Y$ is an n-dimensional vector (an n$\times$1 matrix). In addition, we know that $\epsilon$ is an n-dimensional vector (an n$\times$1 matrix).
###Code
X = train_df[cols_of_interest]  # note: this still includes 'price'; the models below use a feature list without the response
y = train_df['price']
print(X.shape)
print(y.shape)
###Output
(2400, 10)
(2400,)
###Markdown
Scaling our Design Matrix Warm-Up ExerciseWarm-Up Exercise: for which of the following do the units of the predictors matter (e.g., trip length in minutes vs seconds; temperature in F or C)? A similar question would be: for which of these models do the magnitudes of values taken by different predictors matter? (We will go over Ridge and Lasso Regression in greater detail later)- k-NN (Nearest Neighbors regression)- Linear regression- Lasso regression- Ridge regression**Solutions**- kNN: **yes**. Scaling affects distance metric, which determines what "neighbor" means- Linear regression: **no**. Multiply predictor by $c$ -> divide coef by $c$.- Lasso: **yes**: If we divided coef by $c$, then corresponding penalty term is also divided by $c$.- Ridge: **yes**: Same as Lasso, except penalty divided by $c^2$. Standard Scaler (Standardization) [Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? Hint: you may have seen this in STAT 110 or another statistics course multiple times.$$z = \frac{x-\mu}{\sigma}$$In the above setup: - $z$ is the standardized variable- $x$ is the variable before standardization- $\mu$ is the mean of the variable before standardization- $\sigma$ is the standard deviation of the variable before standardizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import StandardScaler
x = house_df['sqft_living']
mu = x.mean()
sigma = x.std()
z = (x-mu)/sigma
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for standardization
x_reshaped = np.array(x).reshape(-1,1)
z_sklearn = StandardScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before standardization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before standardization')
ax[1].hist(z, bins=100)
ax[1].set_title('Manually standardizing sqft_living')
ax[2].hist(z_sklearn, bins=100)
ax[2].set_title('Standardizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'z_manual': z, 'z_sklearn': z_sklearn.flatten()}).describe()
###Output
_____no_output_____
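###Markdown
Returning to the warm-up exercise above, here is a quick check (not part of the original section) of the claim that plain linear regression is insensitive to the units of a predictor: rescaling the predictor only rescales the fitted coefficient, while the predictions stay the same.
###Code
# Not part of the original section: verify the scale-invariance claim from the warm-up.
from sklearn.linear_model import LinearRegression

x_sqft = np.array(house_df['sqft_living']).reshape(-1, 1)
x_thousands = x_sqft / 1000            # same predictor, different units
y_price = house_df['price']

fit_sqft = LinearRegression().fit(x_sqft, y_price)
fit_thousands = LinearRegression().fit(x_thousands, y_price)
print(fit_sqft.coef_[0], fit_thousands.coef_[0])   # second coefficient is 1000x the first
print(fit_sqft.predict(x_sqft[:3]), fit_thousands.predict(x_thousands[:3]))  # identical predictions
###Output
_____no_output_____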
###Markdown
Min-Max Scaler (Normalization)
[Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) the scikit-learn implementation of the min-max scaler. What is it doing though?
$$x_{new} = \frac{x-x_{min}}{x_{max}-x_{min}}$$
In the above setup:
- $x_{new}$ is the normalized variable
- $x$ is the variable before normalization
- $x_{max}$ is the max value of the variable before normalization
- $x_{min}$ is the min value of the variable before normalization
Let's see an example of how this works:
###Code
from sklearn.preprocessing import MinMaxScaler
x = house_df['sqft_living']
x_new = (x-x.min())/(x.max()-x.min())
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for normalization
x_reshaped = np.array(x).reshape(-1,1)
x_new_sklearn = MinMaxScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before normalization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before normalization')
ax[1].hist(x_new, bins=100)
ax[1].set_title('Manually normalizing sqft_living')
ax[2].hist(x_new_sklearn, bins=100)
ax[2].set_title('Normalizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'x_new_manual': x_new, 'x_new_sklearn': x_new_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
**The million dollar question**Should I standardize or normalize my data? [This](https://medium.com/@rrfd/standardize-or-normalize-examples-in-python-e3f174b65dfc), [this](https://medium.com/@swethalakshmanan14/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff) and [this](https://stackoverflow.com/questions/32108179/linear-regression-normalization-vs-standardization) are useful resources that I highly recommend. But in a nutshell, what they say is the following: **Pros of Normalization**1. Normalization (which makes your data go from 0-1) is widely used in image processing and computer vision, where pixel intensities are non-negative and are typically scaled from a 0-255 scale to a 0-1 range for a lot of different algorithms. 2. Normalization is also very useful in neural networks (which we will see later in the course) as it leads to the algorithms converging faster.3. Normalization is useful when your data does not have a discernible distribution and you are not making assumptions about your data's distribution.**Pros of Standardization**1. Standardization maintains outliers (do you see why?) whereas normalization makes outliers less obvious. In applications where outliers are useful, standardization should be done.2. Standardization is useful when you assume your data comes from a Gaussian distribution (or something that is approximately Gaussian). **Some General Advice**1. We learn parameters for standardization ($\mu$ and $\sigma$) and for normalization ($x_{min}$ and $x_{max}$). Make sure these parameters are learned on the training set i.e use the training set parameters even when normalizing/standardizing the test set. In sklearn terms, fit your scaler on the training set and use the scaler to transform your test set and validation set (**don't re-fit your scaler on test set data!**).2. The point of standardization and normalization is to make your variables take on a more manageable scale. You should ideally standardize or normalize all your variables at the same time. 3. Standardization and normalization is not always needed and is not an automatic thing you have to do on any data science homework!! Do so sparingly and try to justify why this is needed.**Interpreting Coefficients**A great quote from [here](https://stats.stackexchange.com/questions/29781/when-conducting-multiple-regression-when-should-you-center-your-predictor-varia)> [Standardization] makes it so the intercept term is interpreted as the expected value of 𝑌𝑖 when the predictor values are set to their means. Otherwise, the intercept is interpreted as the expected value of 𝑌𝑖 when the predictors are set to 0, which may not be a realistic or interpretable situation (e.g. what if the predictors were height and weight?) Standardizing our Design Matrix
###Code
features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long']
X_train = train_df[features]
y_train = np.array(train_df['price']).reshape(-1,1)
X_val = val_df[features]
y_val = np.array(val_df['price']).reshape(-1,1)
X_test = test_df[features]
y_test = np.array(test_df['price']).reshape(-1,1)
scaler = StandardScaler().fit(X_train)
# This converts our matrices into numpy matrices
X_train_t = scaler.transform(X_train)
X_val_t = scaler.transform(X_val)
X_test_t = scaler.transform(X_test)
# Making the numpy matrices pandas dataframes
X_train_df = pd.DataFrame(X_train_t, columns=features)
X_val_df = pd.DataFrame(X_val_t, columns=features)
X_test_df = pd.DataFrame(X_test_t, columns=features)
display(X_train_df.describe())
display(X_val_df.describe())
display(X_test_df.describe())
scaler = StandardScaler().fit(y_train)
y_train = scaler.transform(y_train)
y_val = scaler.transform(y_val)
y_test = scaler.transform(y_test)
###Output
_____no_output_____
###Markdown
One-Degree Polynomial Model
###Code
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS
model_1 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df)).fit()
model_1.summary()
###Output
_____no_output_____
###Markdown
Two-Degree Polynomial Model
###Code
def add_square_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
return df
X_train_df_2 = add_square_terms(X_train)
X_val_df_2 = add_square_terms(X_val)
# Standardizing our added coefficients
cols = X_train_df_2.columns
scaler = StandardScaler().fit(X_train_df_2)
X_train_df_2 = pd.DataFrame(scaler.transform(X_train_df_2), columns=cols)
X_val_df_2 = pd.DataFrame(scaler.transform(X_val_df_2), columns=cols)
print(X_train_df.shape, X_train_df_2.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_2.head()
model_2 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_2)).fit()
model_2.summary()
###Output
_____no_output_____
###Markdown
Three-Degree Polynomial Model
###Code
# generalizing our function from above
def add_square_and_cube_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
df['{}^3'.format(col)] = df[col]**3
return df
X_train_df_3 = add_square_and_cube_terms(X_train)
X_val_df_3 = add_square_and_cube_terms(X_val)
# Standardizing our added coefficients
cols = X_train_df_3.columns
scaler = StandardScaler().fit(X_train_df_3)
X_train_df_3 = pd.DataFrame(scaler.transform(X_train_df_3), columns=cols)
X_val_df_3 = pd.DataFrame(scaler.transform(X_val_df_3), columns=cols)
print(X_train_df.shape, X_train_df_3.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_3.head()
model_3 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_3)).fit()
model_3.summary()
###Output
_____no_output_____
###Markdown
N-Degree Polynomial Model
###Code
# generalizing our function from above
def add_higher_order_polynomial_terms(df, N=7):
df = df.copy()
cols = df.columns.copy()
for col in cols:
for i in range(2, N+1):
df['{}^{}'.format(col, i)] = df[col]**i
return df
N = 8
X_train_df_N = add_higher_order_polynomial_terms(X_train,N)
X_val_df_N = add_higher_order_polynomial_terms(X_val,N)
# Standardizing our added coefficients
cols = X_train_df_N.columns
scaler = StandardScaler().fit(X_train_df_N)
X_train_df_N = pd.DataFrame(scaler.transform(X_train_df_N), columns=cols)
X_val_df_N = pd.DataFrame(scaler.transform(X_val_df_N), columns=cols)
print(X_train_df.shape, X_train_df_N.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_N.head()
model_N = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_N)).fit()
model_N.summary()
###Output
_____no_output_____
###Markdown
You can also create a model with interaction terms or any other higher order polynomial term of your choice. **Note:** Can you see how creating a function that takes in a dataframe and a degree and creates polynomial terms up until that degree can be useful? This is what we have you do in your homework!
Regularization: What is Regularization and why should I care?
When we have a lot of predictors, we need to worry about overfitting. Let's check this out:
###Code
from sklearn.metrics import r2_score
x = [1,2,3,N]
models = [model_1, model_2, model_3, model_N]
X_trains = [X_train_df, X_train_df_2, X_train_df_3, X_train_df_N]
X_vals = [X_val_df, X_val_df_2, X_val_df_3, X_val_df_N]
r2_train = []
r2_val = []
for i,model in enumerate(models):
y_pred_tra = model.predict(sm.add_constant(X_trains[i]))
y_pred_val = model.predict(sm.add_constant(X_vals[i]))
r2_train.append(r2_score(y_train, y_pred_tra))
r2_val.append(r2_score(y_val, y_pred_val))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, r2_train, 'o-', label=r'Training $R^2$')
ax.plot(x, r2_val, 'o-', label=r'Validation $R^2$')
ax.set_xlabel('Number of degree of polynomial')
ax.set_ylabel(r'$R^2$ score')
ax.set_title(r'$R^2$ score vs polynomial degree')
ax.legend();
###Output
_____no_output_____
###Markdown
We notice a big difference between training and validation R^2 scores: it seems like we are overfitting. **Introducing: regularization.**
What about Multicollinearity? There's seemingly a lot of multicollinearity in the data. Take a look at the warning that we got when showing the summary for our polynomial models. What is [multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)? Why do we have it in our dataset? Why is this a problem? Does regularization help solve the issue of multicollinearity?
What does Regularization help with? We have some pretty large and extreme coefficient values in our most recent models. These coefficient values also have very high variance. We can also clearly see some overfitting to the training set. In order to reduce the coefficients of our parameters, we can introduce a penalty term that penalizes some of these extreme coefficient values. Specifically, regularization helps us:
1. Avoid overfitting. Reduce features that have weak predictive power.
2. Discourage the use of a model that is too complex.
Big Idea: Reduce Variance by Increasing Bias. Image Source: [here](https://www.cse.wustl.edu/~m.neumann/sp2016/cse517/lecturenotes/lecturenote12.html)
Ridge Regression: Ridge Regression is one such form of regularization. In practice, the ridge estimator reduces the complexity of the model by shrinking the coefficients, but it doesn't nullify them. We control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [ridge regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) represents this $\lambda$ using a parameter called alpha. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients.
Lasso Regression: Lasso Regression is another form of regularization. Again, we control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [lasso regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) represents this $\lambda$ using a parameter called alpha. In Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients.
Some Differences between Ridge and Lasso Regression:
1. Lasso regression tends to produce zero estimates for a number of model parameters - we say that Lasso solutions are **sparse** - so we consider it a method for variable selection.
2. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients, whereas in Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients.
3. Ridge Regression has a closed form solution! Lasso Regression does not, so we often have to solve it iteratively. In the sklearn package for Lasso regression, there is a parameter called `max_iter` that determines how many iterations we perform.
Why Standardizing Variables was not a waste of time: Lasso regression puts constraints on the size of the coefficients associated with each variable. However, this value will depend on the magnitude of each variable. It is therefore necessary to standardize the variables.
Let's use Ridge and Lasso to regularize our degree N polynomial. **Exercise**: Play around with different values of alpha. Notice the new $R^2$ value and also the range of values that the predictors take in the plot.
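For reference (added here in standard textbook form, ignoring the intercept for simplicity; it is not taken from the original section), the two penalized least-squares objectives and the ridge closed-form solution are:
$$\hat{\beta}_{ridge} = \underset{\beta}{\arg\min} \; \|Y - X\beta\|_2^2 + \lambda \sum_{j=1}^{M} \beta_j^2, \qquad \hat{\beta}_{lasso} = \underset{\beta}{\arg\min} \; \|Y - X\beta\|_2^2 + \lambda \sum_{j=1}^{M} |\beta_j|$$
$$\hat{\beta}_{ridge} = (X^T X + \lambda I)^{-1} X^T Y$$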
###Code
from sklearn.linear_model import Ridge
# some values you can try out: 0.01, 0.1, 0.5, 1, 5, 10, 20, 40, 100, 200, 500, 1000, 10000
alpha = 100
ridge_model = Ridge(alpha=alpha).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Ridge with alpha={}: {}'.format(alpha, ridge_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of predictor values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Predictor values')
ax[0].set_ylabel('Frequency')
ax[1].hist(ridge_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of predictor values for Ridge Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Predictor values')
ax[1].set_ylabel('Frequency');
from sklearn.linear_model import Lasso
# some values you can try out: 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20
alpha = 0.01
lasso_model = Lasso(alpha=alpha, max_iter = 1000).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Lasso with alpha={}: {}'.format(alpha, lasso_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of predictor values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Predictor values')
ax[0].set_ylabel('Frequency')
ax[1].hist(lasso_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of predictor values for Lasso Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Predictor values')
ax[1].set_ylabel('Frequency');
###Output
R squared score for our original OLS model: -1.8608470610311345
R squared score for Lasso with alpha=0.01: 0.5975930359800542
###Markdown
Model Selection and Cross-Validation
Here's our current setup so far: so we try out 10,000 different models on our validation set and pick the one that's the best? No! **Since we could also be overfitting the validation set!** One solution to the problems raised by using a single validation set is to evaluate each model on multiple validation sets and average the validation performance. This is the essence of cross-validation! Image source: [here](https://medium.com/@sebastiannorena/some-model-tuning-methods-bfef3e6544f0)
Let's give this a try using [RidgeCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) and [LassoCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html):
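Before that, as a more general illustration (a small sketch added here, not part of the original section), the same idea can be written with scikit-learn's generic `cross_val_score`, scoring a Ridge model at a few candidate alphas on the training set only:
###Code
# Not part of the original section: generic K-fold cross-validation with cross_val_score.
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

for alpha in (0.01, 1, 100):
    scores = cross_val_score(Ridge(alpha=alpha), X_train_df_N, y_train.ravel(), cv=4, scoring='r2')
    print('alpha={:g}: mean CV R^2 = {:.3f}'.format(alpha, scores.mean()))
###Output
_____no_output_____
###Markdown
Now the built-in cross-validated estimators: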
###Code
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoCV
alphas = (0.001, 0.01, 0.1, 10, 100, 1000, 10000)
# Let us do k-fold cross validation
k = 4
fitted_ridge = RidgeCV(alphas=alphas, cv=k).fit(X_train_df_N, y_train)   # pass cv=k so k is actually used
fitted_lasso = LassoCV(alphas=alphas, cv=k).fit(X_train_df_N, y_train)
print('R^2 score for our original OLS model: {}\n'.format(r2_val[-1]))
ridge_a = fitted_ridge.alpha_
print('Best alpha for ridge: {}'.format(ridge_a))
print('R^2 score for Ridge with alpha={}: {}\n'.format(ridge_a, fitted_ridge.score(X_val_df_N,y_val)))
lasso_a = fitted_lasso.alpha_
print('Best alpha for lasso: {}'.format(lasso_a))
print('R squared score for Lasso with alpha={}: {}'.format(lasso_a, fitted_lasso.score(X_val_df_N,y_val)))
###Output
R^2 score for our original OLS model: -1.8608470610311345
Best alpha for ridge: 1000.0
R^2 score for Ridge with alpha=1000.0: 0.5779474940635888
Best alpha for lasso: 0.01
R squared score for Lasso with alpha=0.01: 0.5975930359800542
###Markdown
CS109A Introduction to Data Science Standard Section 4: Regularization and Model Selection**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven
###Code
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("http://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
For this section, our goal is to get you familiarized with Regularization in Multiple Linear Regression and to start thinking about Model and Hyper-Parameter Selection. Specifically, we will:- Load in the King County House Price Dataset- Perform some basic EDA- Split the data up into a training, **validation**, and test set (we'll see why we need a validation set)- Scale the variables (by standardizing them) and seeing why we need to do this- Make our multiple & polynomial regression models (like we did in the previous section)- Learn what **regularization** is and how it can help- Understand **ridge** and **lasso** regression- Get an introduction to **cross-validation** using RidgeCV and LassoCV
###Code
# Data and Stats packages
import numpy as np
import pandas as pd
pd.set_option('max_columns', 200)
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
EDA: House Prices Data From KaggleFor our dataset, we'll be using the house price dataset from [King County, WA](https://en.wikipedia.org/wiki/King_County,_Washington). The dataset is from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). The task is to build a regression model to **predict the price**, based on different attributes. First, let's do some EDA.
###Code
# Load the dataset
house_df = pd.read_csv('../data/kc_house_data.csv')
house_df = house_df.sample(frac=1, random_state=42)[0:4000]
print(house_df.shape)
print(house_df.dtypes)
house_df.head()
###Output
(4000, 21)
id int64
date object
price float64
bedrooms int64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
Now let's check for null values and look at the datatypes within the dataset.
###Code
house_df.info()
house_df.describe()
###Output
_____no_output_____
###Markdown
Let's choose a subset of columns here. **NOTE**: The way I'm selecting columns here is not principled and is just for convenience. In your homework assignments (and in real life), we expect you to choose columns more rigorously.1. `bedrooms`2. `bathrooms`3. `sqft_living`4. `sqft_lot`5. `floors`6. `sqft_above`7. `sqft_basement`8. `lat`9. `long`10. **`price`**: Our response variable
###Code
cols_of_interest = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long', 'price']
house_df = house_df[cols_of_interest]
# Convert house price to 1000s of dollars
house_df['price'] = house_df['price']/1000
###Output
_____no_output_____
###Markdown
Let's see how the response variable (`price`) is distributed
###Code
fig, ax = plt.subplots(figsize=(12,5))
ax.hist(house_df['price'], bins=100)
ax.set_title('Histogram of house price (in 1000s of dollars)');
# This takes a bit of time but is worth it!!
# sns.pairplot(house_df);
###Output
_____no_output_____
###Markdown
Train-Validation-Test SplitUp until this point, we have only had a train-test split. Why are we introducing a validation set? What's the point?This is the general idea: 1. **Training Set**: Data you have seen. You train different types of models with various different hyper-parameters and regularization parameters on this data. 2. **Validation Set**: Used to compare different models. We use this step to tune our hyper-parameters i.e. find the optimal set of hyper-parameters (such as $k$ for k-NN or our $\beta_i$ values or number of degrees of our polynomial for linear regression). Pick your best model here. 3. **Test Set**: Using the best model from the previous step, simply report the score e.g. R^2 score, MSE or any metric that you care about, of that model on your test set. **DON'T TUNE YOUR PARAMETERS HERE!**. Why, I hear you ask? Because we want to know how our model might do on data it hasn't seen before. We don't have access to this data (because it may not exist yet) but the test set, which we haven't seen or touched so far, is a good way to mimic this new data. Let's do 60% train, 20% validation, 20% test for this dataset.
###Code
from sklearn.model_selection import train_test_split
# first split the data into a train-test split and don't touch the test set yet
train_df, test_df = train_test_split(house_df, test_size=0.2, random_state=42)
# next, split the training set into a train-validation split
# the test-size is 0.25 since we are splitting 80% of the data into 20% and 60% overall
train_df, val_df = train_test_split(train_df, test_size=0.25, random_state=42)
print('Train Set: {0:0.2f}%'.format(100*train_df.size/house_df.size))
print('Validation Set: {0:0.2f}%'.format(100*val_df.size/house_df.size))
print('Test Set: {0:0.2f}%'.format(100*test_df.size/house_df.size))
###Output
Train Set: 60.00%
Validation Set: 20.00%
Test Set: 20.00%
###Markdown
ModelingIn the [last section](https://github.com/Harvard-IACS/2019-CS109A/tree/master/content/sections/section3), we went over the mechanics of Multiple Linear Regression and created models that had interaction terms and polynomial terms. Specifically, we dealt with the following sorts of models. $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_M x_M$$Let's adopt a similar process here and get a few different models. Creating a Design Matrix From our model setup in the equation in the previous section, we obtain the following: $$Y = \begin{bmatrix}y_1 \\y_2 \\\vdots \\y_n\end{bmatrix}, \quad X = \begin{bmatrix}x_{1,1} & x_{1,2} & \dots & x_{1,M} \\x_{2,1} & x_{2,2} & \dots & x_{2,M} \\\vdots & \vdots & \ddots & \vdots \\x_{n,1} & x_{n,2} & \dots & x_{n,M} \\\end{bmatrix}, \quad \beta = \begin{bmatrix}\beta_1 \\\beta_2 \\\vdots \\\beta_M\end{bmatrix}, \quad \epsilon = \begin{bmatrix}\epsilon_1 \\\epsilon_2 \\\vdots \\\epsilon_n\end{bmatrix},$$$X$ is an n$\times$M matrix: this is our **design matrix**, $\beta$ is an M-dimensional vector (an M$\times$1 matrix), and $Y$ is an n-dimensional vector (an n$\times$1 matrix). In addition, we know that $\epsilon$ is an n-dimensional vector (an n$\times$1 matrix).
###Code
X = train_df[cols_of_interest]
y = train_df['price']
print(X.shape)
print(y.shape)
###Output
(2400, 10)
(2400,)
###Markdown
Scaling our Design Matrix Warm-Up ExerciseWarm-Up Exercise: for which of the following do the units of the predictors matter (e.g., trip length in minutes vs seconds; temperature in F or C)? A similar question would be: for which of these models do the magnitudes of values taken by different predictors matter? (We will go over Ridge and Lasso Regression in greater detail later)- k-NN (Nearest Neighbors regression)- Linear regression- Lasso regression- Ridge regression**Solutions**- kNN: **yes**. Scaling affects distance metric, which determines what "neighbor" means- Linear regression: **no**. Multiply predictor by $c$ -> divide coef by $c$.- Lasso: **yes**: If we divided coef by $c$, then corresponding penalty term is also divided by $c$.- Ridge: **yes**: Same as Lasso, except penalty divided by $c^2$. Standard Scaler (Standardization) [Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? Hint: you may have seen this in STAT 110 or another statistics course multiple times.$$z = \frac{x-\mu}{\sigma}$$In the above setup: - $z$ is the standardized variable- $x$ is the variable before standardization- $\mu$ is the mean of the variable before standardization- $\sigma$ is the standard deviation of the variable before standardizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import StandardScaler
x = house_df['sqft_living']
mu = x.mean()
sigma = x.std()
z = (x-mu)/sigma
# reshaping x to be an n by 1 matrix since that's how scikit-learn likes data for standardization
x_reshaped = np.array(x).reshape(-1,1)
z_sklearn = StandardScaler().fit_transform(x_reshaped)
# Plotting histograms of the variable before and after standardization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before standardization')
ax[1].hist(z, bins=100)
ax[1].set_title('Manually standardizing sqft_living')
ax[2].hist(z_sklearn, bins=100)
ax[2].set_title('Standardizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'z_manual': z, 'z_sklearn': z_sklearn.flatten()}).describe()
###Output
_____no_output_____
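###Markdown
To make the warm-up answers above concrete, here is a small numeric sketch (not part of the original section; the synthetic data, the factor $c$, and the alpha value are all assumptions for illustration). Rescaling one predictor leaves plain linear regression essentially unchanged, because its coefficient simply absorbs the factor, while Ridge reacts because the penalty sees the rescaled coefficient.
###Code
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.RandomState(0)
X_toy = rng.normal(size=(200, 2))
y_toy = 3 * X_toy[:, 0] - 2 * X_toy[:, 1] + rng.normal(scale=0.1, size=200)

c = 1000.0
X_toy_scaled = X_toy.copy()
X_toy_scaled[:, 0] *= c  # multiply the first predictor by c

# OLS: the first coefficient is simply divided by c; the fitted values are unchanged
print(LinearRegression().fit(X_toy, y_toy).coef_)
print(LinearRegression().fit(X_toy_scaled, y_toy).coef_)

# Ridge: the penalty now acts on a much smaller coefficient, so the solution genuinely changes
print(Ridge(alpha=10).fit(X_toy, y_toy).coef_)
print(Ridge(alpha=10).fit(X_toy_scaled, y_toy).coef_)
###Output
_____no_output_____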
###Markdown
Min-Max Scaler (Normalization)
[Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) the scikit-learn implementation of the min-max scaler. What is it doing though?
$$x_{new} = \frac{x-x_{min}}{x_{max}-x_{min}}$$
In the above setup:
- $x_{new}$ is the normalized variable
- $x$ is the variable before normalization
- $x_{max}$ is the max value of the variable before normalization
- $x_{min}$ is the min value of the variable before normalization
Let's see an example of how this works:
###Code
from sklearn.preprocessing import MinMaxScaler
x = house_df['sqft_living']
x_new = (x-x.min())/(x.max()-x.min())
# reshaping x to be an n by 1 matrix since that's how scikit-learn likes data for normalization
x_reshaped = np.array(x).reshape(-1,1)
x_new_sklearn = MinMaxScaler().fit_transform(x_reshaped)
# Plotting histograms of the variable before and after normalization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before normalization')
ax[1].hist(x_new, bins=100)
ax[1].set_title('Manually normalizing sqft_living')
ax[2].hist(x_new_sklearn, bins=100)
ax[2].set_title('Normalizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'x_new_manual': x_new, 'x_new_sklearn': x_new_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
**The million dollar question**
Should I standardize or normalize my data? [This](https://medium.com/@rrfd/standardize-or-normalize-examples-in-python-e3f174b65dfc), [this](https://medium.com/@swethalakshmanan14/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff) and [this](https://stackoverflow.com/questions/32108179/linear-regression-normalization-vs-standardization) are useful resources that I highly recommend. But in a nutshell, what they say is the following:
**Pros of Normalization**
1. Normalization (which makes your data go from 0-1) is widely used in image processing and computer vision, where pixel intensities are non-negative and are typically scaled from a 0-255 range to a 0-1 range for a lot of different algorithms.
2. Normalization is also very useful in neural networks (which we will see later in the course) as it leads to the algorithms converging faster.
3. Normalization is useful when your data does not have a discernible distribution and you are not making assumptions about your data's distribution.
**Pros of Standardization**
1. Standardization maintains outliers (do you see why?) whereas normalization makes outliers less obvious. In applications where outliers are useful, standardization should be done.
2. Standardization is useful when you assume your data comes from a Gaussian distribution (or something that is approximately Gaussian).
**Some General Advice**
1. We learn parameters for standardization ($\mu$ and $\sigma$) and for normalization ($x_{min}$ and $x_{max}$). Make sure these parameters are learned on the training set, i.e., use the training-set parameters even when normalizing/standardizing the test set. In sklearn terms, fit your scaler on the training set and use that scaler to transform your validation and test sets (**don't re-fit your scaler on test set data!**).
2. The point of standardization and normalization is to make your variables take on a more manageable scale. You should ideally standardize or normalize all your variables at the same time.
3. Standardization and normalization are not always needed and are not automatic steps you have to take in every data science homework!! Do so sparingly and try to justify why they are needed.
**Interpreting Coefficients**
A great quote from [here](https://stats.stackexchange.com/questions/29781/when-conducting-multiple-regression-when-should-you-center-your-predictor-varia):
> [Standardization] makes it so the intercept term is interpreted as the expected value of 𝑌𝑖 when the predictor values are set to their means. Otherwise, the intercept is interpreted as the expected value of 𝑌𝑖 when the predictors are set to 0, which may not be a realistic or interpretable situation (e.g. what if the predictors were height and weight?)
Standardizing our Design Matrix
###Code
features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long']
X_train = train_df[features]
y_train = np.array(train_df['price']).reshape(-1,1)
X_val = val_df[features]
y_val = np.array(val_df['price']).reshape(-1,1)
X_test = test_df[features]
y_test = np.array(test_df['price']).reshape(-1,1)
scaler = StandardScaler().fit(X_train)
# transform() returns numpy arrays; note the scaler is fit on the training set only and reused for val/test
X_train_t = scaler.transform(X_train)
X_val_t = scaler.transform(X_val)
X_test_t = scaler.transform(X_test)
# Making the numpy matrices pandas dataframes
X_train_df = pd.DataFrame(X_train_t, columns=features)
X_val_df = pd.DataFrame(X_val_t, columns=features)
X_test_df = pd.DataFrame(X_test_t, columns=features)
display(X_train_df.describe())
display(X_val_df.describe())
display(X_test_df.describe())
# standardize the response as well, again fitting the scaler only on the training targets
scaler = StandardScaler().fit(y_train)
y_train = scaler.transform(y_train)
y_val = scaler.transform(y_val)
y_test = scaler.transform(y_test)
###Output
_____no_output_____
###Markdown
One-Degree Polynomial Model
###Code
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS
model_1 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df)).fit()
model_1.summary()
###Output
_____no_output_____
###Markdown
Two-Degree Polynomial Model
###Code
def add_square_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
return df
X_train_df_2 = add_square_terms(X_train)
X_val_df_2 = add_square_terms(X_val)
# Standardizing the newly added polynomial features (fit the scaler on train, apply it to validation)
cols = X_train_df_2.columns
scaler = StandardScaler().fit(X_train_df_2)
X_train_df_2 = pd.DataFrame(scaler.transform(X_train_df_2), columns=cols)
X_val_df_2 = pd.DataFrame(scaler.transform(X_val_df_2), columns=cols)
print(X_train_df.shape, X_train_df_2.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_2.head()
model_2 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_2)).fit()
model_2.summary()
###Output
_____no_output_____
###Markdown
Three-Degree Polynomial Model
###Code
# generalizing our function from above
def add_square_and_cube_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
df['{}^3'.format(col)] = df[col]**3
return df
X_train_df_3 = add_square_and_cube_terms(X_train)
X_val_df_3 = add_square_and_cube_terms(X_val)
# Standardizing the newly added polynomial features (fit the scaler on train, apply it to validation)
cols = X_train_df_3.columns
scaler = StandardScaler().fit(X_train_df_3)
X_train_df_3 = pd.DataFrame(scaler.transform(X_train_df_3), columns=cols)
X_val_df_3 = pd.DataFrame(scaler.transform(X_val_df_3), columns=cols)
print(X_train_df.shape, X_train_df_3.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_3.head()
model_3 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_3)).fit()
model_3.summary()
###Output
_____no_output_____
###Markdown
N-Degree Polynomial Model
###Code
# generalizing our function from above
def add_higher_order_polynomial_terms(df, N=7):
df = df.copy()
cols = df.columns.copy()
for col in cols:
for i in range(2, N+1):
df['{}^{}'.format(col, i)] = df[col]**i
return df
N = 8
X_train_df_N = add_higher_order_polynomial_terms(X_train,N)
X_val_df_N = add_higher_order_polynomial_terms(X_val,N)
# Standardizing the newly added polynomial features (fit the scaler on train, apply it to validation)
cols = X_train_df_N.columns
scaler = StandardScaler().fit(X_train_df_N)
X_train_df_N = pd.DataFrame(scaler.transform(X_train_df_N), columns=cols)
X_val_df_N = pd.DataFrame(scaler.transform(X_val_df_N), columns=cols)
print(X_train_df.shape, X_train_df_N.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_N.head()
model_N = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_N)).fit()
model_N.summary()
###Output
_____no_output_____
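###Markdown
As an aside (not in the original section): instead of hand-rolling these helper functions, `sklearn.preprocessing.PolynomialFeatures` can generate polynomial and interaction terms in one call. A minimal sketch on two example columns (the column choice and degree are assumptions for illustration only):
###Code
from sklearn.preprocessing import PolynomialFeatures

X_small = X_train[['sqft_living', 'bathrooms']]

# degree-2 expansion: x1, x2, x1^2, x1*x2, x2^2
poly = PolynomialFeatures(degree=2, include_bias=False)
print(X_small.shape, '->', poly.fit_transform(X_small).shape)

# interaction_only=True keeps x1, x2 and the cross term x1*x2, dropping the pure powers
poly_int = PolynomialFeatures(degree=2, include_bias=False, interaction_only=True)
print(X_small.shape, '->', poly_int.fit_transform(X_small).shape)
###Output
_____no_output_____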
###Markdown
You can also create a model with interaction terms or any other higher-order polynomial terms of your choice (for example via `PolynomialFeatures`, as sketched above). **Note:** Can you see how creating a function that takes in a dataframe and a degree and creates polynomial terms up until that degree can be useful? This is what we have you do in your homework!
Regularization
What is Regularization and why should I care?
When we have a lot of predictors, we need to worry about overfitting. Let's check this out:
###Code
from sklearn.metrics import r2_score
x = [1,2,3,N]
models = [model_1, model_2, model_3, model_N]
X_trains = [X_train_df, X_train_df_2, X_train_df_3, X_train_df_N]
X_vals = [X_val_df, X_val_df_2, X_val_df_3, X_val_df_N]
r2_train = []
r2_val = []
for i,model in enumerate(models):
y_pred_tra = model.predict(sm.add_constant(X_trains[i]))
y_pred_val = model.predict(sm.add_constant(X_vals[i]))
r2_train.append(r2_score(y_train, y_pred_tra))
r2_val.append(r2_score(y_val, y_pred_val))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, r2_train, 'o-', label=r'Training $R^2$')
ax.plot(x, r2_val, 'o-', label=r'Validation $R^2$')
ax.set_xlabel('Degree of polynomial')
ax.set_ylabel(r'$R^2$ score')
ax.set_title(r'$R^2$ score vs polynomial degree')
ax.legend();
###Output
_____no_output_____
###Markdown
We notice a big difference between training and validation R^2 scores: it seems like we are overfitting. **Introducing: regularization.**
What about Multicollinearity?
There's seemingly a lot of multicollinearity in the data. Take a look at the warning we got in the OLS summaries for our polynomial models above (the note about a large condition number, which can indicate strong multicollinearity). What is [multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)? Why do we have it in our dataset? Why is this a problem? Does regularization help solve the issue of multicollinearity?
What does Regularization help with?
We have some pretty large and extreme coefficient values in our most recent models. These coefficient values also have very high variance. We can also clearly see some overfitting to the training set. In order to reduce the coefficients of our parameters, we can introduce a penalty term that penalizes some of these extreme coefficient values. Specifically, regularization helps us:
1. Avoid overfitting. Reduce the influence of features that have weak predictive power.
2. Discourage the use of a model that is too complex.
Big Idea: Reduce Variance by Increasing Bias. Image Source: [here](https://www.cse.wustl.edu/~m.neumann/sp2016/cse517/lecturenotes/lecturenote12.html)
Ridge Regression
Ridge Regression is one such form of regularization. In practice, the ridge estimator reduces the complexity of the model by shrinking the coefficients, but it doesn't nullify them. We control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [ridge regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) represents this $\lambda$ using a parameter called alpha. In Ridge Regression, the penalty term is proportional to the squared L2-norm of the coefficients.
Lasso Regression
Lasso Regression is another form of regularization. Again, we control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [lasso regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) represents this $\lambda$ using a parameter called alpha. In Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients.
Some Differences between Ridge and Lasso Regression
1. Since Lasso regression tends to produce zero estimates for a number of model parameters (we say that Lasso solutions are **sparse**), we consider it a method for variable selection.
2. In Ridge Regression the penalty term is proportional to the (squared) L2-norm of the coefficients, whereas in Lasso Regression the penalty term is proportional to the L1-norm of the coefficients.
3. Ridge Regression has a closed-form solution! Lasso Regression does not, so we often have to solve it iteratively. In the sklearn package for Lasso regression, there is a parameter called `max_iter` that determines how many iterations we perform.
Why Standardizing Variables was not a waste of time
Lasso regression puts constraints on the size of the coefficients associated with each variable, and the effect of that constraint depends on the magnitude of each variable. It is therefore necessary to standardize the variables.
Let's use Ridge and Lasso to regularize our degree-N polynomial
**Exercise**: Play around with different values of alpha. Notice the new $R^2$ value and also the range of values that the coefficients take in the plots.
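For reference, the two penalized objectives have the standard textbook form below, with $\lambda$ the regularization strength that sklearn exposes as `alpha` (sklearn's exact scaling of the data-fit term differs slightly between its Ridge and Lasso implementations, so treat this as the idea rather than the precise library objective):
$$\hat{\beta}_{ridge} = \underset{\beta}{\operatorname{argmin}} \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{M}\beta_j x_{i,j}\Big)^2 + \lambda \sum_{j=1}^{M}\beta_j^2, \qquad \hat{\beta}_{lasso} = \underset{\beta}{\operatorname{argmin}} \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{M}\beta_j x_{i,j}\Big)^2 + \lambda \sum_{j=1}^{M}|\beta_j|$$
Setting $\lambda = 0$ recovers ordinary least squares; increasing $\lambda$ shrinks the coefficients more aggressively.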
###Code
from sklearn.linear_model import Ridge
# some values you can try out: 0.01, 0.1, 0.5, 1, 5, 10, 20, 40, 100, 200, 500, 1000, 10000
alpha = 100
ridge_model = Ridge(alpha=alpha).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Ridge with alpha={}: {}'.format(alpha, ridge_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(ridge_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Ridge Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
from sklearn.linear_model import Lasso
# some values you can try out: 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20
alpha = 0.01
lasso_model = Lasso(alpha=alpha, max_iter = 1000).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Lasso with alpha={}: {}'.format(alpha, lasso_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(lasso_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Lasso Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
###Output
R squared score for our original OLS model: 0.11935480685127198
R squared score for Lasso with alpha=0.01: 0.6652038673158105
###Markdown
Model Selection and Cross-Validation
Here's our current setup so far: train on the training set, compare models on the validation set, and report on the test set. So do we just try out 10,000 different models on our validation set and pick the one that's best? No! **We could then also be overfitting the validation set!** One solution to the problems raised by using a single validation set is to evaluate each model on multiple validation sets and average the validation performance. This is the essence of cross-validation! Image source: [here](https://medium.com/@sebastiannorena/some-model-tuning-methods-bfef3e6544f0)
Let's give this a try using [RidgeCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) and [LassoCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html):
###Code
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoCV
alphas = (0.001, 0.01, 0.1, 10, 100, 1000, 10000)
# Let us do k-fold cross validation
k = 4
# note: as written, k is not actually passed to the CV estimators below; by default RidgeCV uses an
# efficient leave-one-out scheme and LassoCV uses its own default number of folds.
# Pass cv=k (e.g., RidgeCV(alphas=alphas, cv=k)) to force k-fold cross-validation.
fitted_ridge = RidgeCV(alphas=alphas).fit(X_train_df_N, y_train)
fitted_lasso = LassoCV(alphas=alphas).fit(X_train_df_N, y_train)
print('R^2 score for our original OLS model: {}\n'.format(r2_val[-1]))
ridge_a = fitted_ridge.alpha_
print('Best alpha for ridge: {}'.format(ridge_a))
print('R^2 score for Ridge with alpha={}: {}\n'.format(ridge_a, fitted_ridge.score(X_val_df_N,y_val)))
lasso_a = fitted_lasso.alpha_
print('Best alpha for lasso: {}'.format(lasso_a))
print('R squared score for Lasso with alpha={}: {}'.format(lasso_a, fitted_lasso.score(X_val_df_N,y_val)))
###Output
R^2 score for our original OLS model: 0.11935480685127198
Best alpha for ridge: 1000.0
R^2 score for Ridge with alpha=1000.0: 0.6148040375619113
Best alpha for lasso: 0.01
R squared score for Lasso with alpha=0.01: 0.6652038673158105
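###Markdown
To see what the CV estimators are doing under the hood, here is a hedged sketch of scoring a single candidate alpha manually with k-fold cross-validation on the training data (the alpha below is just an example value):
###Code
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# one validation R^2 per fold; the mean is what you would compare across candidate alphas
fold_scores = cross_val_score(Ridge(alpha=1000), X_train_df_N, y_train, cv=k, scoring='r2')
print(fold_scores)
print(fold_scores.mean())
###Output
_____no_output_____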
###Markdown
CS109A Introduction to Data Science
Standard Section 4: Regularization and Model Selection
**Harvard University**
**Fall 2019**
**Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner
**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven
###Code
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("http://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
For this section, our goal is to get you familiarized with Regularization in Multiple Linear Regression and to start thinking about Model and Hyper-Parameter Selection. Specifically, we will:
- Load in the King County House Price Dataset
- Perform some basic EDA
- Split the data up into a training, **validation**, and test set (we'll see why we need a validation set)
- Scale the variables (by standardizing them) and see why we need to do this
- Make our multiple & polynomial regression models (like we did in the previous section)
- Learn what **regularization** is and how it can help
- Understand **ridge** and **lasso** regression
- Get an introduction to **cross-validation** using RidgeCV and LassoCV
###Code
# Data and Stats packages
import numpy as np
import pandas as pd
pd.set_option('max_columns', 200)
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
EDA: House Prices Data From Kaggle
For our dataset, we'll be using the house price dataset from [King County, WA](https://en.wikipedia.org/wiki/King_County,_Washington). The dataset is from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). The task is to build a regression model to **predict the price**, based on different attributes. First, let's do some EDA.
###Code
# Load the dataset
house_df = pd.read_csv('../data/kc_house_data.csv')
house_df = house_df.sample(frac=1, random_state=42)[0:4000]
print(house_df.shape)
print(house_df.dtypes)
house_df.head()
###Output
(4000, 21)
id int64
date object
price float64
bedrooms int64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
Now let's check for null values and look at the datatypes within the dataset.
###Code
house_df.info()
house_df.describe()
###Output
_____no_output_____
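###Markdown
As a more direct missing-value check than `info()` (a tiny optional sketch; this dataset typically has no missing values):
###Code
# count missing values per column
house_df.isnull().sum()
###Output
_____no_output_____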
###Markdown
Let's choose a subset of columns here. **NOTE**: The way I'm selecting columns here is not principled and is just for convenience. In your homework assignments (and in real life), we expect you to choose columns more rigorously.
1. `bedrooms`
2. `bathrooms`
3. `sqft_living`
4. `sqft_lot`
5. `floors`
6. `sqft_above`
7. `sqft_basement`
8. `lat`
9. `long`
10. **`price`**: Our response variable
###Code
cols_of_interest = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long', 'price']
house_df = house_df[cols_of_interest]
# Convert house price to 1000s of dollars
house_df['price'] = house_df['price']/1000
###Output
_____no_output_____
###Markdown
Let's see how the response variable (`price`) is distributed
###Code
fig, ax = plt.subplots(figsize=(12,5))
ax.hist(house_df['price'], bins=100)
ax.set_title('Histogram of house price (in 1000s of dollars)');
# This takes a bit of time but is worth it!!
# sns.pairplot(house_df);
###Output
_____no_output_____
CS109A Introduction to Data Science Standard Section 4: Regularization and Model Selection**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven
###Code
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("http://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
For this section, our goal is to get you familiarized with Regularization in Multiple Linear Regression and to start thinking about Model and Hyper-Parameter Selection. Specifically, we will:- Load in the King County House Price Dataset- Perform some basic EDA- Split the data up into a training, **validation**, and test set (we'll see why we need a validation set)- Scale the variables (by standardizing them) and seeing why we need to do this- Make our multiple & polynomial regression models (like we did in the previous section)- Learn what **regularization** is and how it can help- Understand **ridge** and **lasso** regression- Get an introduction to **cross-validation** using RidgeCV and LassoCV
###Code
# Data and Stats packages
import numpy as np
import pandas as pd
pd.set_option('max_columns', 200)
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
EDA: House Prices Data From KaggleFor our dataset, we'll be using the house price dataset from [King County, WA](https://en.wikipedia.org/wiki/King_County,_Washington). The dataset is from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). The task is to build a regression model to **predict the price**, based on different attributes. First, let's do some EDA.
###Code
# Load the dataset
house_df = pd.read_csv('../data/kc_house_data.csv')
house_df = house_df.sample(frac=1, random_state=42)[0:4000]
print(house_df.shape)
print(house_df.dtypes)
house_df.head()
###Output
(4000, 21)
id int64
date object
price float64
bedrooms int64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
Now let's check for null values and look at the datatypes within the dataset.
###Code
house_df.info()
house_df.describe()
###Output
_____no_output_____
###Markdown
Let's choose a subset of columns here. **NOTE**: The way I'm selecting columns here is not principled and is just for convenience. In your homework assignments (and in real life), we expect you to choose columns more rigorously.1. `bedrooms`2. `bathrooms`3. `sqft_living`4. `sqft_lot`5. `floors`6. `sqft_above`7. `sqft_basement`8. `lat`9. `long`10. **`price`**: Our response variable
###Code
cols_of_interest = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long', 'price']
house_df = house_df[cols_of_interest]
# Convert house price to 1000s of dollars
house_df['price'] = house_df['price']/1000
###Output
_____no_output_____
###Markdown
Let's see how the response variable (`price`) is distributed
###Code
fig, ax = plt.subplots(figsize=(12,5))
ax.hist(house_df['price'], bins=100)
ax.set_title('Histogram of house price (in 1000s of dollars)');
# This takes a bit of time but is worth it!!
# sns.pairplot(house_df);
###Output
_____no_output_____
###Markdown
Train-Validation-Test SplitUp until this point, we have only had a train-test split. Why are we introducing a validation set? What's the point?This is the general idea: 1. **Training Set**: Data you have seen. You train different types of models with various different hyper-parameters and regularization parameters on this data. 2. **Validation Set**: Used to compare different models. We use this step to tune our hyper-parameters i.e. find the optimal set of hyper-parameters (such as $k$ for k-NN or our $\beta_i$ values or number of degrees of our polynomial for linear regression). Pick your best model here. 3. **Test Set**: Using the best model from the previous step, simply report the score e.g. R^2 score, MSE or any metric that you care about, of that model on your test set. **DON'T TUNE YOUR PARAMETERS HERE!**. Why, I hear you ask? Because we want to know how our model might do on data it hasn't seen before. We don't have access to this data (because it may not exist yet) but the test set, which we haven't seen or touched so far, is a good way to mimic this new data. Let's do 60% train, 20% validation, 20% test for this dataset.
###Code
from sklearn.model_selection import train_test_split
# first split the data into a train-test split and don't touch the test set yet
train_df, test_df = train_test_split(house_df, test_size=0.2, random_state=42)
# next, split the training set into a train-validation split
# the test-size is 0.25 since we are splitting 80% of the data into 20% and 60% overall
train_df, val_df = train_test_split(train_df, test_size=0.25, random_state=42)
print('Train Set: {0:0.2f}%'.format(100*train_df.size/house_df.size))
print('Validation Set: {0:0.2f}%'.format(100*val_df.size/house_df.size))
print('Test Set: {0:0.2f}%'.format(100*test_df.size/house_df.size))
###Output
Train Set: 60.00%
Validation Set: 20.00%
Test Set: 20.00%
###Markdown
ModelingIn the [last section](https://github.com/Harvard-IACS/2019-CS109A/tree/master/content/sections/section3), we went over the mechanics of Multiple Linear Regression and created models that had interaction terms and polynomial terms. Specifically, we dealt with the following sorts of models. $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_M x_M$$Let's adopt a similar process here and get a few different models. Creating a Design Matrix From our model setup in the equation in the previous section, we obtain the following: $$Y = \begin{bmatrix}y_1 \\y_2 \\\vdots \\y_n\end{bmatrix}, \quad X = \begin{bmatrix}x_{1,1} & x_{1,2} & \dots & x_{1,M} \\x_{2,1} & x_{2,2} & \dots & x_{2,M} \\\vdots & \vdots & \ddots & \vdots \\x_{n,1} & x_{n,2} & \dots & x_{n,M} \\\end{bmatrix}, \quad \beta = \begin{bmatrix}\beta_1 \\\beta_2 \\\vdots \\\beta_M\end{bmatrix}, \quad \epsilon = \begin{bmatrix}\epsilon_1 \\\epsilon_2 \\\vdots \\\epsilon_n\end{bmatrix},$$$X$ is an n$\times$M matrix: this is our **design matrix**, $\beta$ is an M-dimensional vector (an M$\times$1 matrix), and $Y$ is an n-dimensional vector (an n$\times$1 matrix). In addition, we know that $\epsilon$ is an n-dimensional vector (an n$\times$1 matrix).
###Code
X = train_df[cols_of_interest]
y = train_df['price']
print(X.shape)
print(y.shape)
###Output
(2400, 10)
(2400,)
###Markdown
Scaling our Design Matrix Warm-Up ExerciseWarm-Up Exercise: for which of the following do the units of the predictors matter (e.g., trip length in minutes vs seconds; temperature in F or C)? A similar question would be: for which of these models do the magnitudes of values taken by different predictors matter? (We will go over Ridge and Lasso Regression in greater detail later)- k-NN (Nearest Neighbors regression)- Linear regression- Lasso regression- Ridge regression**Solutions**- kNN: **yes**. Scaling affects distance metric, which determines what "neighbor" means- Linear regression: **no**. Multiply predictor by $c$ -> divide coef by $c$.- Lasso: **yes**: If we divided coef by $c$, then corresponding penalty term is also divided by $c$.- Ridge: **yes**: Same as Lasso, except penalty divided by $c^2$. Standard Scaler (Standardization) [Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? Hint: you may have seen this in STAT 110 or another statistics course multiple times.$$z = \frac{x-\mu}{\sigma}$$In the above setup: - $z$ is the standardized variable- $x$ is the variable before standardization- $\mu$ is the mean of the variable before standardization- $\sigma$ is the standard deviation of the variable before standardizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import StandardScaler
x = house_df['sqft_living']
mu = x.mean()
sigma = x.std()
z = (x-mu)/sigma
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for standardization
x_reshaped = np.array(x).reshape(-1,1)
z_sklearn = StandardScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before standardization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before standardization')
ax[1].hist(z, bins=100)
ax[1].set_title('Manually standardizing sqft_living')
ax[2].hist(z_sklearn, bins=100)
ax[2].set_title('Standardizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'z_manual': z, 'z_sklearn': z_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
Min-Max Scaler (Normalization)[Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? $$x_{new} = \frac{x-x_{min}}{x_{max}-x_{min}}$$In the above setup: - $x_{new}$ is the normalized variable- $x$ is the variable before normalized- $x_{max}$ is the max value of the variable before normalization- $x_{min}$ is the min value of the variable before normalizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import MinMaxScaler
x = house_df['sqft_living']
x_new = (x-x.min())/(x.max()-x.min())
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for normalization
x_reshaped = np.array(x).reshape(-1,1)
x_new_sklearn = MinMaxScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before normalization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before normalization')
ax[1].hist(x_new, bins=100)
ax[1].set_title('Manually normalizing sqft_living')
ax[2].hist(x_new_sklearn, bins=100)
ax[2].set_title('Normalizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'x_new_manual': x_new, 'x_new_sklearn': x_new_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
**The million dollar question**Should I standardize or normalize my data? [This](https://medium.com/@rrfd/standardize-or-normalize-examples-in-python-e3f174b65dfc), [this](https://medium.com/@swethalakshmanan14/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff) and [this](https://stackoverflow.com/questions/32108179/linear-regression-normalization-vs-standardization) are useful resources that I highly recommend. But in a nutshell, what they say is the following: **Pros of Normalization**1. Normalization (which makes your data go from 0-1) is widely used in image processing and computer vision, where pixel intensities are non-negative and are typically scaled from a 0-255 scale to a 0-1 range for a lot of different algorithms. 2. Normalization is also very useful in neural networks (which we will see later in the course) as it leads to the algorithms converging faster.3. Normalization is useful when your data does not have a discernible distribution and you are not making assumptions about your data's distribution.**Pros of Standardization**1. Standardization maintains outliers (do you see why?) whereas normalization makes outliers less obvious. In applications where outliers are useful, standardization should be done.2. Standardization is useful when you assume your data comes from a Gaussian distribution (or something that is approximately Gaussian). **Some General Advice**1. We learn parameters for standardization ($\mu$ and $\sigma$) and for normalization ($x_{min}$ and $x_{max}$). Make sure these parameters are learned on the training set i.e use the training set parameters even when normalizing/standardizing the test set. In sklearn terms, fit your scaler on the training set and use the scaler to transform your test set and validation set (**don't re-fit your scaler on test set data!**).2. The point of standardization and normalization is to make your variables take on a more manageable scale. You should ideally standardize or normalize all your variables at the same time. 3. Standardization and normalization is not always needed and is not an automatic thing you have to do on any data science homework!! Do so sparingly and try to justify why this is needed.**Interpreting Coefficients**A great quote from [here](https://stats.stackexchange.com/questions/29781/when-conducting-multiple-regression-when-should-you-center-your-predictor-varia)> [Standardization] makes it so the intercept term is interpreted as the expected value of 𝑌𝑖 when the predictor values are set to their means. Otherwise, the intercept is interpreted as the expected value of 𝑌𝑖 when the predictors are set to 0, which may not be a realistic or interpretable situation (e.g. what if the predictors were height and weight?) Standardizing our Design Matrix
###Code
features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long']
X_train = train_df[features]
y_train = np.array(train_df['price']).reshape(-1,1)
X_val = val_df[features]
y_val = np.array(val_df['price']).reshape(-1,1)
X_test = test_df[features]
y_test = np.array(test_df['price']).reshape(-1,1)
scaler = StandardScaler().fit(X_train)
# This converts our matrices into numpy matrices
X_train_t = scaler.transform(X_train)
X_val_t = scaler.transform(X_val)
X_test_t = scaler.transform(X_test)
# Making the numpy matrices pandas dataframes
X_train_df = pd.DataFrame(X_train_t, columns=features)
X_val_df = pd.DataFrame(X_val_t, columns=features)
X_test_df = pd.DataFrame(X_test_t, columns=features)
display(X_train_df.describe())
display(X_val_df.describe())
display(X_test_df.describe())
scaler = StandardScaler().fit(y_train)
y_train = scaler.transform(y_train)
y_val = scaler.transform(y_val)
y_test = scaler.transform(y_test)
###Output
_____no_output_____
###Markdown
One-Degree Polynomial Model
###Code
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS
model_1 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df)).fit()
model_1.summary()
###Output
_____no_output_____
###Markdown
Two-Degree Polynomial Model
###Code
def add_square_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
return df
X_train_df_2 = add_square_terms(X_train)
X_val_df_2 = add_square_terms(X_val)
# Standardizing our added coefficients
cols = X_train_df_2.columns
scaler = StandardScaler().fit(X_train_df_2)
X_train_df_2 = pd.DataFrame(scaler.transform(X_train_df_2), columns=cols)
X_val_df_2 = pd.DataFrame(scaler.transform(X_val_df_2), columns=cols)
print(X_train_df.shape, X_train_df_2.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_2.head()
model_2 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_2)).fit()
model_2.summary()
###Output
_____no_output_____
###Markdown
Three-Degree Polynomial Model
###Code
# generalizing our function from above
def add_square_and_cube_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
df['{}^3'.format(col)] = df[col]**3
return df
X_train_df_3 = add_square_and_cube_terms(X_train)
X_val_df_3 = add_square_and_cube_terms(X_val)
# Standardizing our added coefficients
cols = X_train_df_3.columns
scaler = StandardScaler().fit(X_train_df_3)
X_train_df_3 = pd.DataFrame(scaler.transform(X_train_df_3), columns=cols)
X_val_df_3 = pd.DataFrame(scaler.transform(X_val_df_3), columns=cols)
print(X_train_df.shape, X_train_df_3.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_3.head()
model_3 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_3)).fit()
model_3.summary()
###Output
_____no_output_____
###Markdown
N-Degree Polynomial Model
###Code
# generalizing our function from above
def add_higher_order_polynomial_terms(df, N=7):
df = df.copy()
cols = df.columns.copy()
for col in cols:
for i in range(2, N+1):
df['{}^{}'.format(col, i)] = df[col]**i
return df
N = 8
X_train_df_N = add_higher_order_polynomial_terms(X_train,N)
X_val_df_N = add_higher_order_polynomial_terms(X_val,N)
# Standardizing our added coefficients
cols = X_train_df_N.columns
scaler = StandardScaler().fit(X_train_df_N)
X_train_df_N = pd.DataFrame(scaler.transform(X_train_df_N), columns=cols)
X_val_df_N = pd.DataFrame(scaler.transform(X_val_df_N), columns=cols)
print(X_train_df.shape, X_train_df_N.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_N.head()
model_N = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_N)).fit()
model_N.summary()
###Output
_____no_output_____
###Markdown
You can also create a model with interaction terms or any other higher order polynomial term of your choice. **Note:** Can you see how creating a function that takes in a dataframe and a degree and creates polynomial terms up until that degree can be useful? This is what we have you do in your homework! Regularization What is Regularization and why should I care?When we have a lot of predictors, we need to worry about overfitting. Let's check this out:
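If you want interaction terms without writing the loops by hand, one option is sklearn's `PolynomialFeatures`; the sketch below only illustrates how it could slot into the same workflow (the `_inter` variable names are mine, and the resulting columns would still need to be standardized like the hand-built ones above):

```python
from sklearn.preprocessing import PolynomialFeatures

# degree=2 with interaction_only=True adds every pairwise product x_i * x_j
# (set interaction_only=False to also get the squared terms).
poly = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_train_inter = poly.fit_transform(X_train)
X_val_inter = poly.transform(X_val)
print(X_train.shape, X_train_inter.shape)
```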
###Code
from sklearn.metrics import r2_score
x = [1,2,3,N]
models = [model_1, model_2, model_3, model_N]
X_trains = [X_train_df, X_train_df_2, X_train_df_3, X_train_df_N]
X_vals = [X_val_df, X_val_df_2, X_val_df_3, X_val_df_N]
r2_train = []
r2_val = []
for i,model in enumerate(models):
y_pred_tra = model.predict(sm.add_constant(X_trains[i]))
y_pred_val = model.predict(sm.add_constant(X_vals[i]))
r2_train.append(r2_score(y_train, y_pred_tra))
r2_val.append(r2_score(y_val, y_pred_val))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, r2_train, 'o-', label=r'Training $R^2$')
ax.plot(x, r2_val, 'o-', label=r'Validation $R^2$')
ax.set_xlabel('Number of degree of polynomial')
ax.set_ylabel(r'$R^2$ score')
ax.set_title(r'$R^2$ score vs polynomial degree')
ax.legend();
###Output
_____no_output_____
###Markdown
We notice a big difference between training and validation R^2 scores: it seems like we are overfitting. **Introducing: regularization.** What about Multicollinearity? There's seemingly a lot of multicollinearity in the data. Take a look at this warning that we got when showing our summary for our polynomial models: What is [multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)? Why do we have it in our dataset? Why is this a problem? Does regularization help solve the issue of multicollinearity? What does Regularization help with?We have some pretty large and extreme coefficient values in our most recent models. These coefficient values also have very high variance. We can also clearly see some overfitting to the training set. In order to reduce the coefficients of our parameters, we can introduce a penalty term that penalizes some of these extreme coefficient values. Specifically, regularization helps us: 1. Avoid overfitting. Reduce features that have weak predictive power.2. Discourage the use of a model that is too complex. Big Idea: Reduce Variance by Increasing BiasImage Source: [here](https://www.cse.wustl.edu/~m.neumann/sp2016/cse517/lecturenotes/lecturenote12.html) Ridge RegressionRidge Regression is one such form of regularization. In practice, the ridge estimator reduces the complexity of the model by shrinking the coefficients, but it doesn’t nullify them. We control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [ridge regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) represents this $\lambda$ using a parameter alpha. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients. Lasso RegressionLasso Regression is another form of regularization. Again, we control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [lasso regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) represents this $\lambda$ using a parameter alpha. In Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients. Some Differences between Ridge and Lasso Regression1. Since Lasso regression tends to produce zero estimates for a number of model parameters - we say that Lasso solutions are **sparse** - we consider it to be a method for variable selection.2. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients whereas in Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients.3. Ridge Regression has a closed form solution! Lasso Regression does not. We often have to solve this iteratively. In the sklearn package for Lasso regression, there is a parameter called `max_iter` that determines how many iterations we perform. Why Standardizing Variables was not a waste of timeLasso regression puts constraints on the size of the coefficients associated with each variable. However, this value will depend on the magnitude of each variable. It is therefore necessary to standardize the variables. Let's use Ridge and Lasso to regularize our degree N polynomial **Exercise**: Play around with different values of alpha. Notice the new $R^2$ value and also the range of values that the coefficients take in the plot.
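For reference, the two penalized least-squares problems described above can be written (up to the constant factors different libraries put in front of the squared-error term, and writing $\lambda$ for what sklearn calls alpha) as $$\hat{\beta}_{ridge} = \underset{\beta}{\operatorname{argmin}} \; \|Y - X\beta\|_2^2 + \lambda\|\beta\|_2^2, \qquad \hat{\beta}_{lasso} = \underset{\beta}{\operatorname{argmin}} \; \|Y - X\beta\|_2^2 + \lambda\|\beta\|_1.$$ The closed-form solution mentioned for ridge is $\hat{\beta}_{ridge} = (X^TX + \lambda I)^{-1}X^TY$; lasso has no such closed form, which is why it is solved iteratively (hence `max_iter`).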
###Code
from sklearn.linear_model import Ridge
# some values you can try out: 0.01, 0.1, 0.5, 1, 5, 10, 20, 40, 100, 200, 500, 1000, 10000
alpha = 100
ridge_model = Ridge(alpha=alpha).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Ridge with alpha={}: {}'.format(alpha, ridge_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(ridge_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Ridge Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
from sklearn.linear_model import Lasso
# some values you can try out: 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20
alpha = 0.01
lasso_model = Lasso(alpha=alpha, max_iter = 1000).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Lasso with alpha={}: {}'.format(alpha, lasso_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(lasso_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Lasso Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
###Output
R squared score for our original OLS model: -1.8608470610311345
R squared score for Lasso with alpha=0.01: 0.5975930359800542
###Markdown
Model Selection and Cross-ValidationHere's our current setup so far: So we try out 10,000 different models on our validation set and pick the one that's the best? No! **Since we could also be overfitting the validation set!** One solution to the problems raised by using a single validation set is to evaluate each model on multiple validation sets and average the validation performance. This is the essence of cross-validation!Image source: [here](https://medium.com/@sebastiannorena/some-model-tuning-methods-bfef3e6544f0)Let's give this a try using [RidgeCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) and [LassoCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html):
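The "score on several validation sets and average" idea can also be spelled out directly with `cross_val_score`, which is essentially what the CV estimators below do internally for every candidate alpha. A small sketch (assuming the `X_train_df_N`/`y_train` matrices built earlier; one candidate alpha and 4 folds are arbitrary choices):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge

# One candidate model, scored on 4 different train/validation folds:
fold_scores = cross_val_score(Ridge(alpha=100), X_train_df_N, y_train.ravel(), cv=4, scoring='r2')
print(fold_scores)            # one R^2 per fold
print(np.mean(fold_scores))   # the cross-validated score we would compare across models
```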
###Code
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoCV
alphas = (0.001, 0.01, 0.1, 10, 100, 1000, 10000)
# RidgeCV and LassoCV pick the best alpha by cross-validating over the candidate alphas above
# (pass cv=k to force k-fold cross validation explicitly; here the library defaults are kept)
k = 4
fitted_ridge = RidgeCV(alphas=alphas).fit(X_train_df_N, y_train)
fitted_lasso = LassoCV(alphas=alphas).fit(X_train_df_N, y_train)
print('R^2 score for our original OLS model: {}\n'.format(r2_val[-1]))
ridge_a = fitted_ridge.alpha_
print('Best alpha for ridge: {}'.format(ridge_a))
print('R^2 score for Ridge with alpha={}: {}\n'.format(ridge_a, fitted_ridge.score(X_val_df_N,y_val)))
lasso_a = fitted_lasso.alpha_
print('Best alpha for lasso: {}'.format(lasso_a))
print('R squared score for Lasso with alpha={}: {}'.format(lasso_a, fitted_lasso.score(X_val_df_N,y_val)))
###Output
R^2 score for our original OLS model: -1.8608470610311345
Best alpha for ridge: 1000.0
R^2 score for Ridge with alpha=1000.0: 0.5779474940635888
Best alpha for lasso: 0.01
R squared score for Lasso with alpha=0.01: 0.5975930359800542
###Markdown
CS109A Introduction to Data Science Standard Section 4: Regularization and Model Selection**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven
###Code
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("http://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
For this section, our goal is to get you familiarized with Regularization in Multiple Linear Regression and to start thinking about Model and Hyper-Parameter Selection. Specifically, we will:- Load in the King County House Price Dataset- Perform some basic EDA- Split the data up into a training, **validation**, and test set (we'll see why we need a validation set)- Scale the variables (by standardizing them) and see why we need to do this- Make our multiple & polynomial regression models (like we did in the previous section)- Learn what **regularization** is and how it can help- Understand **ridge** and **lasso** regression- Get an introduction to **cross-validation** using RidgeCV and LassoCV
###Code
# Data and Stats packages
import numpy as np
import pandas as pd
pd.set_option('max_columns', 200)
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
EDA: House Prices Data From KaggleFor our dataset, we'll be using the house price dataset from [King County, WA](https://en.wikipedia.org/wiki/King_County,_Washington). The dataset is from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). The task is to build a regression model to **predict the price**, based on different attributes. First, let's do some EDA.
###Code
# Load the dataset
house_df = pd.read_csv('../data/kc_house_data.csv')
house_df = house_df.sample(frac=1, random_state=42)[0:4000]
print(house_df.shape)
print(house_df.dtypes)
house_df.head()
###Output
(4000, 21)
id int64
date object
price float64
bedrooms int64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
Now let's check for null values and look at the datatypes within the dataset.
###Code
house_df.info()
house_df.describe()
###Output
_____no_output_____
###Markdown
Let's choose a subset of columns here. **NOTE**: The way I'm selecting columns here is not principled and is just for convenience. In your homework assignments (and in real life), we expect you to choose columns more rigorously.1. `bedrooms`2. `bathrooms`3. `sqft_living`4. `sqft_lot`5. `floors`6. `sqft_above`7. `sqft_basement`8. `lat`9. `long`10. **`price`**: Our response variable
###Code
cols_of_interest = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long', 'price']
house_df = house_df[cols_of_interest]
# Convert house price to 1000s of dollars
house_df['price'] = house_df['price']/1000
###Output
_____no_output_____
###Markdown
Let's see how the response variable (`price`) is distributed
###Code
fig, ax = plt.subplots(figsize=(12,5))
ax.hist(house_df['price'], bins=100)
ax.set_title('Histogram of house price (in 1000s of dollars)');
# This takes a bit of time but is worth it!!
# sns.pairplot(house_df);
###Output
_____no_output_____
###Markdown
Train-Validation-Test SplitUp until this point, we have only had a train-test split. Why are we introducing a validation set? What's the point?This is the general idea: 1. **Training Set**: Data you have seen. You train different types of models with various different hyper-parameters and regularization parameters on this data. 2. **Validation Set**: Used to compare different models. We use this step to tune our hyper-parameters i.e. find the optimal set of hyper-parameters (such as $k$ for k-NN or our $\beta_i$ values or number of degrees of our polynomial for linear regression). Pick your best model here. 3. **Test Set**: Using the best model from the previous step, simply report the score e.g. R^2 score, MSE or any metric that you care about, of that model on your test set. **DON'T TUNE YOUR PARAMETERS HERE!**. Why, I hear you ask? Because we want to know how our model might do on data it hasn't seen before. We don't have access to this data (because it may not exist yet) but the test set, which we haven't seen or touched so far, is a good way to mimic this new data. Let's do 60% train, 20% validation, 20% test for this dataset.
###Code
from sklearn.model_selection import train_test_split
# first split the data into a train-test split and don't touch the test set yet
train_df, test_df = train_test_split(house_df, test_size=0.2, random_state=42)
# next, split the training set into a train-validation split
# the test-size is 0.25 since we are splitting 80% of the data into 20% and 60% overall
train_df, val_df = train_test_split(train_df, test_size=0.25, random_state=42)
print('Train Set: {0:0.2f}%'.format(100*train_df.size/house_df.size))
print('Validation Set: {0:0.2f}%'.format(100*val_df.size/house_df.size))
print('Test Set: {0:0.2f}%'.format(100*test_df.size/house_df.size))
###Output
Train Set: 60.00%
Validation Set: 20.00%
Test Set: 20.00%
###Markdown
ModelingIn the [last section](https://github.com/Harvard-IACS/2019-CS109A/tree/master/content/sections/section3), we went over the mechanics of Multiple Linear Regression and created models that had interaction terms and polynomial terms. Specifically, we dealt with the following sorts of models. $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_M x_M$$Let's adopt a similar process here and get a few different models. Creating a Design Matrix From our model setup in the equation in the previous section, we obtain the following: $$Y = \begin{bmatrix}y_1 \\y_2 \\\vdots \\y_n\end{bmatrix}, \quad X = \begin{bmatrix}x_{1,1} & x_{1,2} & \dots & x_{1,M} \\x_{2,1} & x_{2,2} & \dots & x_{2,M} \\\vdots & \vdots & \ddots & \vdots \\x_{n,1} & x_{n,2} & \dots & x_{n,M} \\\end{bmatrix}, \quad \beta = \begin{bmatrix}\beta_1 \\\beta_2 \\\vdots \\\beta_M\end{bmatrix}, \quad \epsilon = \begin{bmatrix}\epsilon_1 \\\epsilon_2 \\\vdots \\\epsilon_n\end{bmatrix},$$$X$ is an n$\times$M matrix: this is our **design matrix**, $\beta$ is an M-dimensional vector (an M$\times$1 matrix), and $Y$ is an n-dimensional vector (an n$\times$1 matrix). In addition, we know that $\epsilon$ is an n-dimensional vector (an n$\times$1 matrix).
###Code
X = train_df[cols_of_interest]
y = train_df['price']
print(X.shape)
print(y.shape)
###Output
(2400, 10)
(2400,)
###Markdown
Scaling our Design Matrix Warm-Up ExerciseWarm-Up Exercise: for which of the following do the units of the predictors matter (e.g., trip length in minutes vs seconds; temperature in F or C)? A similar question would be: for which of these models do the magnitudes of values taken by different predictors matter? (We will go over Ridge and Lasso Regression in greater detail later)- k-NN (Nearest Neighbors regression)- Linear regression- Lasso regression- Ridge regression**Solutions**- kNN: **yes**. Scaling affects distance metric, which determines what "neighbor" means- Linear regression: **no**. Multiply predictor by $c$ -> divide coef by $c$.- Lasso: **yes**: If we divided coef by $c$, then corresponding penalty term is also divided by $c$.- Ridge: **yes**: Same as Lasso, except penalty divided by $c^2$. Standard Scaler (Standardization) [Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? Hint: you may have seen this in STAT 110 or another statistics course multiple times.$$z = \frac{x-\mu}{\sigma}$$In the above setup: - $z$ is the standardized variable- $x$ is the variable before standardization- $\mu$ is the mean of the variable before standardization- $\sigma$ is the standard deviation of the variable before standardizationLet's see an example of how this works:
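A quick numerical check of the "linear regression: no, ridge: yes" answers is to change the units of one predictor and compare fitted values; the two-column subset and the alpha below are arbitrary illustrative choices, not part of this section's models:

```python
from sklearn.linear_model import LinearRegression, Ridge

X_a = house_df[['sqft_living', 'bedrooms']].copy()
X_b = X_a.copy()
X_b['sqft_living'] = X_b['sqft_living'] / 1000.0   # same information, different units
y_demo = house_df['price']

# Plain least squares gives the same fitted values either way (up to numerical noise)...
print(abs(LinearRegression().fit(X_a, y_demo).predict(X_a) -
          LinearRegression().fit(X_b, y_demo).predict(X_b)).max())

# ...but a penalized model does not, because the penalty sees the rescaled coefficient.
print(abs(Ridge(alpha=10).fit(X_a, y_demo).predict(X_a) -
          Ridge(alpha=10).fit(X_b, y_demo).predict(X_b)).max())
```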
###Code
from sklearn.preprocessing import StandardScaler
x = house_df['sqft_living']
mu = x.mean()
sigma = x.std()
z = (x-mu)/sigma
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for standardization
x_reshaped = np.array(x).reshape(-1,1)
z_sklearn = StandardScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before standardization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before standardization')
ax[1].hist(z, bins=100)
ax[1].set_title('Manually standardizing sqft_living')
ax[2].hist(z_sklearn, bins=100)
ax[2].set_title('Standardizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'z_manual': z, 'z_sklearn': z_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
Min-Max Scaler (Normalization)[Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) the scikit-learn implementation of the min-max scaler. What is it doing though? $$x_{new} = \frac{x-x_{min}}{x_{max}-x_{min}}$$In the above setup: - $x_{new}$ is the normalized variable- $x$ is the variable before normalization- $x_{max}$ is the max value of the variable before normalization- $x_{min}$ is the min value of the variable before normalizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import MinMaxScaler
x = house_df['sqft_living']
x_new = (x-x.min())/(x.max()-x.min())
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for normalization
x_reshaped = np.array(x).reshape(-1,1)
x_new_sklearn = MinMaxScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before normalization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before normalization')
ax[1].hist(x_new, bins=100)
ax[1].set_title('Manually normalizing sqft_living')
ax[2].hist(x_new_sklearn, bins=100)
ax[2].set_title('Normalizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'x_new_manual': x_new, 'x_new_sklearn': x_new_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
**The million dollar question**Should I standardize or normalize my data? [This](https://medium.com/@rrfd/standardize-or-normalize-examples-in-python-e3f174b65dfc), [this](https://medium.com/@swethalakshmanan14/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff) and [this](https://stackoverflow.com/questions/32108179/linear-regression-normalization-vs-standardization) are useful resources that I highly recommend. But in a nutshell, what they say is the following: **Pros of Normalization**1. Normalization (which makes your data go from 0-1) is widely used in image processing and computer vision, where pixel intensities are non-negative and are typically scaled from a 0-255 scale to a 0-1 range for a lot of different algorithms. 2. Normalization is also very useful in neural networks (which we will see later in the course) as it leads to the algorithms converging faster.3. Normalization is useful when your data does not have a discernible distribution and you are not making assumptions about your data's distribution.**Pros of Standardization**1. Standardization maintains outliers (do you see why?) whereas normalization makes outliers less obvious. In applications where outliers are useful, standardization should be done.2. Standardization is useful when you assume your data comes from a Gaussian distribution (or something that is approximately Gaussian). **Some General Advice**1. We learn parameters for standardization ($\mu$ and $\sigma$) and for normalization ($x_{min}$ and $x_{max}$). Make sure these parameters are learned on the training set i.e. use the training set parameters even when normalizing/standardizing the test set. In sklearn terms, fit your scaler on the training set and use the scaler to transform your test set and validation set (**don't re-fit your scaler on test set data!**).2. The point of standardization and normalization is to make your variables take on a more manageable scale. You should ideally standardize or normalize all your variables at the same time. 3. Standardization and normalization are not always needed and are not an automatic thing you have to do on any data science homework!! Do so sparingly and try to justify why this is needed.**Interpreting Coefficients**A great quote from [here](https://stats.stackexchange.com/questions/29781/when-conducting-multiple-regression-when-should-you-center-your-predictor-varia)> [Standardization] makes it so the intercept term is interpreted as the expected value of 𝑌𝑖 when the predictor values are set to their means. Otherwise, the intercept is interpreted as the expected value of 𝑌𝑖 when the predictors are set to 0, which may not be a realistic or interpretable situation (e.g. what if the predictors were height and weight?) Standardizing our Design Matrix
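One practical note about the cell below: because the response is standardized too, every model in this section predicts in standardized units; to report a prediction in (thousands of) dollars you would pass it back through the y-scaler's `inverse_transform`. A sketch, assuming `y_train` as defined in the cell below (the name `y_scaler` is mine; the cell simply reuses the name `scaler`):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

y_scaler = StandardScaler().fit(y_train)          # same fit as in the cell below
y_train_std = y_scaler.transform(y_train)         # what the models are trained on
y_back = y_scaler.inverse_transform(y_train_std)  # back to 1000s of dollars
print(np.allclose(y_back, y_train))
```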
###Code
features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long']
X_train = train_df[features]
y_train = np.array(train_df['price']).reshape(-1,1)
X_val = val_df[features]
y_val = np.array(val_df['price']).reshape(-1,1)
X_test = test_df[features]
y_test = np.array(test_df['price']).reshape(-1,1)
scaler = StandardScaler().fit(X_train)
# This converts our matrices into numpy matrices
X_train_t = scaler.transform(X_train)
X_val_t = scaler.transform(X_val)
X_test_t = scaler.transform(X_test)
# Making the numpy matrices pandas dataframes
X_train_df = pd.DataFrame(X_train_t, columns=features)
X_val_df = pd.DataFrame(X_val_t, columns=features)
X_test_df = pd.DataFrame(X_test_t, columns=features)
display(X_train_df.describe())
display(X_val_df.describe())
display(X_test_df.describe())
scaler = StandardScaler().fit(y_train)
y_train = scaler.transform(y_train)
y_val = scaler.transform(y_val)
y_test = scaler.transform(y_test)
###Output
_____no_output_____
###Markdown
One-Degree Polynomial Model
###Code
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS
model_1 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df)).fit()
model_1.summary()
###Output
_____no_output_____
###Markdown
Two-Degree Polynomial Model
###Code
def add_square_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
return df
X_train_df_2 = add_square_terms(X_train)
X_val_df_2 = add_square_terms(X_val)
# Standardizing our added coefficients
cols = X_train_df_2.columns
scaler = StandardScaler().fit(X_train_df_2)
X_train_df_2 = pd.DataFrame(scaler.transform(X_train_df_2), columns=cols)
X_val_df_2 = pd.DataFrame(scaler.transform(X_val_df_2), columns=cols)
print(X_train_df.shape, X_train_df_2.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_2.head()
model_2 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_2)).fit()
model_2.summary()
###Output
_____no_output_____
###Markdown
Three-Degree Polynomial Model
###Code
# generalizing our function from above
def add_square_and_cube_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
df['{}^3'.format(col)] = df[col]**3
return df
X_train_df_3 = add_square_and_cube_terms(X_train)
X_val_df_3 = add_square_and_cube_terms(X_val)
# Standardizing our added coefficients
cols = X_train_df_3.columns
scaler = StandardScaler().fit(X_train_df_3)
X_train_df_3 = pd.DataFrame(scaler.transform(X_train_df_3), columns=cols)
X_val_df_3 = pd.DataFrame(scaler.transform(X_val_df_3), columns=cols)
print(X_train_df.shape, X_train_df_3.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_3.head()
model_3 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_3)).fit()
model_3.summary()
###Output
_____no_output_____
###Markdown
N-Degree Polynomial Model
###Code
# generalizing our function from above
def add_higher_order_polynomial_terms(df, N=7):
df = df.copy()
cols = df.columns.copy()
for col in cols:
for i in range(2, N+1):
df['{}^{}'.format(col, i)] = df[col]**i
return df
N = 8
X_train_df_N = add_higher_order_polynomial_terms(X_train,N)
X_val_df_N = add_higher_order_polynomial_terms(X_val,N)
# Standardizing our added coefficients
cols = X_train_df_N.columns
scaler = StandardScaler().fit(X_train_df_N)
X_train_df_N = pd.DataFrame(scaler.transform(X_train_df_N), columns=cols)
X_val_df_N = pd.DataFrame(scaler.transform(X_val_df_N), columns=cols)
print(X_train_df.shape, X_train_df_N.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_N.head()
model_N = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_N)).fit()
model_N.summary()
###Output
_____no_output_____
###Markdown
You can also create a model with interaction terms or any other higher order polynomial term of your choice. **Note:** Can you see how creating a function that takes in a dataframe and a degree and creates polynomial terms up until that degree can be useful? This is what we have you do in your homework! Regularization What is Regularization and why should I care?When we have a lot of predictors, we need to worry about overfitting. Let's check this out:
###Code
from sklearn.metrics import r2_score
x = [1,2,3,N]
models = [model_1, model_2, model_3, model_N]
X_trains = [X_train_df, X_train_df_2, X_train_df_3, X_train_df_N]
X_vals = [X_val_df, X_val_df_2, X_val_df_3, X_val_df_N]
r2_train = []
r2_val = []
for i,model in enumerate(models):
y_pred_tra = model.predict(sm.add_constant(X_trains[i]))
y_pred_val = model.predict(sm.add_constant(X_vals[i]))
r2_train.append(r2_score(y_train, y_pred_tra))
r2_val.append(r2_score(y_val, y_pred_val))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, r2_train, 'o-', label=r'Training $R^2$')
ax.plot(x, r2_val, 'o-', label=r'Validation $R^2$')
ax.set_xlabel('Number of degree of polynomial')
ax.set_ylabel(r'$R^2$ score')
ax.set_title(r'$R^2$ score vs polynomial degree')
ax.legend();
###Output
_____no_output_____
###Markdown
We notice a big difference between training and validation R^2 scores: it seems like we are overfitting. **Introducing: regularization.** What about Multicollinearity? There's seemingly a lot of multicollinearity in the data. Take a look at this warning that we got when showing our summary for our polynomial models: What is [multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)? Why do we have it in our dataset? Why is this a problem? Does regularization help solve the issue of multicollinearity? What does Regularization help with?We have some pretty large and extreme coefficient values in our most recent models. These coefficient values also have very high variance. We can also clearly see some overfitting to the training set. In order to reduce the coefficients of our parameters, we can introduce a penalty term that penalizes some of these extreme coefficient values. Specifically, regularization helps us: 1. Avoid overfitting. Reduce features that have weak predictive power.2. Discourage the use of a model that is too complex. Big Idea: Reduce Variance by Increasing BiasImage Source: [here](https://www.cse.wustl.edu/~m.neumann/sp2016/cse517/lecturenotes/lecturenote12.html) Ridge RegressionRidge Regression is one such form of regularization. In practice, the ridge estimator reduces the complexity of the model by shrinking the coefficients, but it doesn’t nullify them. We control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [ridge regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) represents this $\lambda$ using a parameter alpha. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients. Lasso RegressionLasso Regression is another form of regularization. Again, we control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [lasso regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) represents this $\lambda$ using a parameter alpha. In Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients. Some Differences between Ridge and Lasso Regression1. Since Lasso regression tends to produce zero estimates for a number of model parameters - we say that Lasso solutions are **sparse** - we consider it to be a method for variable selection.2. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients whereas in Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients.3. Ridge Regression has a closed form solution! Lasso Regression does not. We often have to solve this iteratively. In the sklearn package for Lasso regression, there is a parameter called `max_iter` that determines how many iterations we perform. Why Standardizing Variables was not a waste of timeLasso regression puts constraints on the size of the coefficients associated with each variable. However, this value will depend on the magnitude of each variable. It is therefore necessary to standardize the variables. Let's use Ridge and Lasso to regularize our degree N polynomial **Exercise**: Play around with different values of alpha. Notice the new $R^2$ value and also the range of values that the coefficients take in the plot.
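The multicollinearity questions above can be made concrete with variance inflation factors. A short illustrative sketch using statsmodels' `variance_inflation_factor` on the degree-1 standardized predictors (not part of the original section code); since `sqft_living` is essentially `sqft_above` plus `sqft_basement` in this dataset, those columns are likely to show very large VIFs:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

X_vif = sm.add_constant(X_train_df)   # include an intercept column for the auxiliary regressions
vifs = pd.Series(
    [variance_inflation_factor(X_vif.values, i) for i in range(1, X_vif.shape[1])],
    index=X_vif.columns[1:],
)
print(vifs.sort_values(ascending=False))
```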
###Code
from sklearn.linear_model import Ridge
# some values you can try out: 0.01, 0.1, 0.5, 1, 5, 10, 20, 40, 100, 200, 500, 1000, 10000
alpha = 100
ridge_model = Ridge(alpha=alpha).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Ridge with alpha={}: {}'.format(alpha, ridge_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(ridge_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Ridge Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
from sklearn.linear_model import Lasso
# some values you can try out: 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20
alpha = 0.01
lasso_model = Lasso(alpha=alpha, max_iter = 1000).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Lasso with alpha={}: {}'.format(alpha, lasso_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(lasso_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Lasso Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
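# Lasso solutions are sparse: a quick check of how many coefficients Lasso set exactly to zero
# (illustrative only; uses the lasso_model fitted just above)
print('Coefficients set exactly to zero by Lasso: {} out of {}'.format((lasso_model.coef_ == 0).sum(), lasso_model.coef_.size))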
###Output
R squared score for our original OLS model: -1.8608470610311345
R squared score for Lasso with alpha=0.01: 0.5975930359800542
###Markdown
Model Selection and Cross-ValidationHere's our current setup so far: So we try out 10,000 different models on our validation set and pick the one that's the best? No! **Since we could also be overfitting the validation set!** One solution to the problems raised by using a single validation set is to evaluate each model on multiple validation sets and average the validation performance. This is the essence of cross-validation!Image source: [here](https://medium.com/@sebastiannorena/some-model-tuning-methods-bfef3e6544f0)Let's give this a try using [RidgeCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) and [LassoCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html):
###Code
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoCV
alphas = (0.001, 0.01, 0.1, 10, 100, 1000, 10000)
# Let us do k-fold cross validation
k = 4
fitted_ridge = RidgeCV(alphas=alphas, cv=k).fit(X_train_df_N, y_train)
fitted_lasso = LassoCV(alphas=alphas, cv=k).fit(X_train_df_N, y_train)
print('R^2 score for our original OLS model: {}\n'.format(r2_val[-1]))
ridge_a = fitted_ridge.alpha_
print('Best alpha for ridge: {}'.format(ridge_a))
print('R^2 score for Ridge with alpha={}: {}\n'.format(ridge_a, fitted_ridge.score(X_val_df_N,y_val)))
lasso_a = fitted_lasso.alpha_
print('Best alpha for lasso: {}'.format(lasso_a))
print('R squared score for Lasso with alpha={}: {}'.format(lasso_a, fitted_lasso.score(X_val_df_N,y_val)))
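# For a single model we can also run the k-fold cross-validation explicitly with cross_val_score
# (a quick illustrative sketch; Ridge, k, ridge_a, X_train_df_N and y_train are all defined above)
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(Ridge(alpha=ridge_a), X_train_df_N, y_train, cv=k, scoring='r2')
print('Mean {}-fold CV R^2 for Ridge(alpha={}): {:.4f}'.format(k, ridge_a, cv_scores.mean()))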
###Output
R^2 score for our original OLS model: -1.8608470610311345
Best alpha for ridge: 1000.0
R^2 score for Ridge with alpha=1000.0: 0.5779474940635888
Best alpha for lasso: 0.01
R squared score for Lasso with alpha=0.01: 0.5975930359800542
###Markdown
CS109A Introduction to Data Science Standard Section 4: Regularization and Model Selection**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner**Section Leaders**: Marios Mattheakis, Abhimanyu (Abhi) Vasishth, Robbert (Rob) Struyven
###Code
#RUN THIS CELL
import requests
from IPython.core.display import HTML
styles = requests.get("http://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
For this section, our goal is to get you familiarized with Regularization in Multiple Linear Regression and to start thinking about Model and Hyper-Parameter Selection. Specifically, we will:- Load in the King County House Price Dataset- Perform some basic EDA- Split the data up into a training, **validation**, and test set (we'll see why we need a validation set)- Scale the variables (by standardizing them) and seeing why we need to do this- Make our multiple & polynomial regression models (like we did in the previous section)- Learn what **regularization** is and how it can help- Understand **ridge** and **lasso** regression- Get an introduction to **cross-validation** using RidgeCV and LassoCV
###Code
# Data and Stats packages
import numpy as np
import pandas as pd
pd.set_option('max_columns', 200)
# Visualization packages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import warnings
warnings.filterwarnings("ignore")
# ignore all warnings
###Output
_____no_output_____
###Markdown
EDA: House Prices Data From KaggleFor our dataset, we'll be using the house price dataset from [King County, WA](https://en.wikipedia.org/wiki/King_County,_Washington). The dataset is from [Kaggle](https://www.kaggle.com/harlfoxem/housesalesprediction). The task is to build a regression model to **predict the price**, based on different attributes. First, let's do some EDA.
###Code
# Load the dataset
house_df = pd.read_csv('../data/kc_house_data.csv')
house_df = house_df.sample(frac=1, random_state=42)[0:4000]
print(house_df.shape)
print(house_df.dtypes)
house_df.head()
###Output
(4000, 21)
id int64
date object
price float64
bedrooms int64
bathrooms float64
sqft_living int64
sqft_lot int64
floors float64
waterfront int64
view int64
condition int64
grade int64
sqft_above int64
sqft_basement int64
yr_built int64
yr_renovated int64
zipcode int64
lat float64
long float64
sqft_living15 int64
sqft_lot15 int64
dtype: object
###Markdown
Now let's check for null values and look at the datatypes within the dataset.
###Code
house_df.info()
# summary statistics of the dataset
house_df.describe()
###Output
_____no_output_____
###Markdown
Let's choose a subset of columns here. **NOTE**: The way I'm selecting columns here is not principled and is just for convenience. In your homework assignments (and in real life), we expect you to choose columns more rigorously.1. `bedrooms`2. `bathrooms`3. `sqft_living`4. `sqft_lot`5. `floors`6. `sqft_above`7. `sqft_basement`8. `lat`9. `long`10. **`price`**: Our response variable
###Code
cols_of_interest = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long', 'price']
house_df = house_df[cols_of_interest]
# Convert house price to 1000s of dollars
house_df['price'] = house_df['price']/1000
###Output
_____no_output_____
###Markdown
Let's see how the response variable (`price`) is distributed
###Code
fig, ax = plt.subplots(figsize=(12,5))
ax.hist(house_df['price'], bins=100)
ax.set_title('Histogram of house price (in 1000s of dollars)');
# This takes a bit of time but is worth it!!
#sns.pairplot(house_df);
###Output
_____no_output_____
###Markdown
Train-Validation-Test SplitUp until this point, we have only had a train-test split. Why are we introducing a validation set? What's the point?This is the general idea: 1. **Training Set**: Data you have seen. You train different types of models with various different hyper-parameters and regularization parameters on this data. 2. **Validation Set**: Used to compare different models. We use this step to tune our hyper-parameters i.e. find the optimal set of hyper-parameters (such as $k$ for k-NN, the regularization strength, or the degree of our polynomial for linear regression). Pick your best model here. 3. **Test Set**: Using the best model from the previous step, simply report the score e.g. R^2 score, MSE or any metric that you care about, of that model on your test set. **DON'T TUNE YOUR PARAMETERS HERE!**. Why, I hear you ask? Because we want to know how our model might do on data it hasn't seen before. We don't have access to this data (because it may not exist yet) but the test set, which we haven't seen or touched so far, is a good way to mimic this new data. Let's do 60% train, 20% validation, 20% test for this dataset.
###Code
from sklearn.model_selection import train_test_split
# first split the data into a train-test split and don't touch the test set yet
train_df, test_df = train_test_split(house_df, test_size=0.2, random_state=42)
# next, split the training set into a train-validation split
# the test-size is 0.25 since we are splitting 80% of the data into 20% and 60% overall
train_df, val_df = train_test_split(train_df, test_size=0.25, random_state=42)
print('Train Set: {0:0.2f}%'.format(100*train_df.size/house_df.size))
print('Validation Set: {0:0.2f}%'.format(100*val_df.size/house_df.size))
print('Test Set: {0:0.2f}%'.format(100*test_df.size/house_df.size))
###Output
Train Set: 60.00%
Validation Set: 20.00%
Test Set: 20.00%
###Markdown
ModelingIn the [last section](https://github.com/Harvard-IACS/2019-CS109A/tree/master/content/sections/section3), we went over the mechanics of Multiple Linear Regression and created models that had interaction terms and polynomial terms. Specifically, we dealt with the following sorts of models. $$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_M x_M$$Let's adopt a similar process here and get a few different models. Creating a Design Matrix From our model setup in the equation in the previous section, we obtain the following: $$Y = \begin{bmatrix}y_1 \\y_2 \\\vdots \\y_n\end{bmatrix}, \quad X = \begin{bmatrix}x_{1,1} & x_{1,2} & \dots & x_{1,M} \\x_{2,1} & x_{2,2} & \dots & x_{2,M} \\\vdots & \vdots & \ddots & \vdots \\x_{n,1} & x_{n,2} & \dots & x_{n,M} \\\end{bmatrix}, \quad \beta = \begin{bmatrix}\beta_1 \\\beta_2 \\\vdots \\\beta_M\end{bmatrix}, \quad \epsilon = \begin{bmatrix}\epsilon_1 \\\epsilon_2 \\\vdots \\\epsilon_n\end{bmatrix},$$$X$ is an n$\times$M matrix: this is our **design matrix**, $\beta$ is an M-dimensional vector (an M$\times$1 matrix), and $Y$ is an n-dimensional vector (an n$\times$1 matrix). In addition, we know that $\epsilon$ is an n-dimensional vector (an n$\times$1 matrix).
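With this notation, the ordinary least squares fit we use below minimizes $\|Y - X\beta\|_2^2$, which (when $X^TX$ is invertible) has the closed-form solution $\hat{\beta} = (X^TX)^{-1}X^TY$; the regularized estimators introduced later modify exactly this objective.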
###Code
X = train_df[cols_of_interest]
y = train_df['price']
print(X.shape)
print(y.shape)
###Output
(2400, 10)
(2400,)
###Markdown
Scaling our Design Matrix Warm-Up ExerciseWarm-Up Exercise: for which of the following do the units of the predictors matter (e.g., trip length in minutes vs seconds; temperature in F or C)? A similar question would be: for which of these models do the magnitudes of values taken by different predictors matter? Note: "Matter" here means that different magnitudes might cause biases or inaccuracy in prediction.(We will go over Ridge and Lasso Regression in greater detail later)- k-NN (Nearest Neighbors regression)- Linear regression- Lasso regression- Ridge regression**Solutions**- kNN: **yes**. Scaling affects distance metric, which determines what "neighbor" means- Linear regression: **no**. Multiply predictor by $c$ -> divide coef by $c$.- Lasso: **yes**: If we divided coef by $c$, then corresponding penalty term is also divided by $c$.- Ridge: **yes**: Same as Lasso, except penalty divided by $c^2$. Standard Scaler (Standardization) [Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? Hint: you may have seen this in STAT 110 or another statistics course multiple times.$$z = \frac{x-\mu}{\sigma}$$In the above setup: - $z$ is the standardized variable- $x$ is the variable before standardization- $\mu$ is the mean of the variable before standardization- $\sigma$ is the standard deviation of the variable before standardizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import StandardScaler
x = house_df['sqft_living']
mu = x.mean()
sigma = x.std()
z = (x-mu)/sigma
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for standardization
x_reshaped = np.array(x).reshape(-1,1)
z_sklearn = StandardScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before standardization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before standardization')
ax[1].hist(z, bins=100)
ax[1].set_title('Manually standardizing sqft_living')
ax[2].hist(z_sklearn, bins=100)
ax[2].set_title('Standardizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'z_manual': z, 'z_sklearn': z_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
Min-Max Scaler (Normalization)[Here's](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) the scikit-learn implementation of the standard scaler. What is it doing though? $$x_{new} = \frac{x-x_{min}}{x_{max}-x_{min}}$$In the above setup: - $x_{new}$ is the normalized variable- $x$ is the variable before normalized- $x_{max}$ is the max value of the variable before normalization- $x_{min}$ is the min value of the variable before normalizationLet's see an example of how this works:
###Code
from sklearn.preprocessing import MinMaxScaler
x = house_df['sqft_living']
x_new = (x-x.min())/(x.max()-x.min())
# reshaping x to be a n by 1 matrix since that's how scikit learn likes data for normalization
x_reshaped = np.array(x).reshape(-1,1)
x_new_sklearn = MinMaxScaler().fit_transform(x_reshaped)
# Plotting the histogram of the variable before normalization
fig, ax = plt.subplots(nrows=1, ncols=3, figsize=(24,5))
ax = ax.ravel()
ax[0].hist(x, bins=100)
ax[0].set_title('Histogram of sqft_living before normalization')
ax[1].hist(x_new, bins=100)
ax[1].set_title('Manually normalizing sqft_living')
ax[2].hist(x_new_sklearn, bins=100)
ax[2].set_title('Normalizing sqft_living using scikit learn');
# making things a dataframe to check if they work
pd.DataFrame({'x': x, 'x_new_manual': x_new, 'x_new_sklearn': x_new_sklearn.flatten()}).describe()
###Output
_____no_output_____
###Markdown
**The million dollar question**Should I standardize or normalize my data? [This](https://medium.com/@rrfd/standardize-or-normalize-examples-in-python-e3f174b65dfc), [this](https://medium.com/@swethalakshmanan14/how-when-and-why-should-you-normalize-standardize-rescale-your-data-3f083def38ff) and [this](https://stackoverflow.com/questions/32108179/linear-regression-normalization-vs-standardization) are useful resources that I highly recommend. But in a nutshell, what they say is the following: **Pros of Normalization**1. Normalization (which makes your data go from 0-1) is widely used in image processing and computer vision, where pixel intensities are non-negative and are typically scaled from a 0-255 scale to a 0-1 range for a lot of different algorithms. 2. Normalization is also very useful in neural networks (which we will see later in the course) as it leads to the algorithms converging faster.3. Normalization is useful when your data does not have a discernible distribution and you are not making assumptions about your data's distribution.**Pros of Standardization**1. Standardization maintains outliers (do you see why?) whereas normalization makes outliers less obvious. In applications where outliers are useful, standardization should be done.2. Standardization is useful when you assume your data comes from a Gaussian distribution (or something that is approximately Gaussian). **Some General Advice**1. We learn parameters for standardization ($\mu$ and $\sigma$) and for normalization ($x_{min}$ and $x_{max}$). Make sure these parameters are learned on the training set i.e use the training set parameters even when normalizing/standardizing the test set. In sklearn terms, fit your scaler on the training set and use the scaler to transform your test set and validation set (**don't re-fit your scaler on test set data!**).2. The point of standardization and normalization is to make your variables take on a more manageable scale. You should ideally standardize or normalize all your variables at the same time. 3. Standardization and normalization is not always needed and is not an automatic thing you have to do on any data science homework!! Do so sparingly and try to justify why this is needed.**Interpreting Coefficients**A great quote from [here](https://stats.stackexchange.com/questions/29781/when-conducting-multiple-regression-when-should-you-center-your-predictor-varia)> [Standardization] makes it so the intercept term is interpreted as the expected value of 𝑌𝑖 when the predictor values are set to their means. Otherwise, the intercept is interpreted as the expected value of 𝑌𝑖 when the predictors are set to 0, which may not be a realistic or interpretable situation (e.g. what if the predictors were height and weight?) Standardizing our Design Matrix
###Code
features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'sqft_above', 'sqft_basement',
'lat', 'long']
X_train = train_df[features]
y_train = np.array(train_df['price']).reshape(-1,1)
X_val = val_df[features]
y_val = np.array(val_df['price']).reshape(-1,1)
X_test = test_df[features]
y_test = np.array(test_df['price']).reshape(-1,1)
scaler = StandardScaler().fit(X_train)
# This converts our matrices into numpy matrices
X_train_t = scaler.transform(X_train)
# Only use X_train to fit
X_val_t = scaler.transform(X_val)
X_test_t = scaler.transform(X_test)
# Making the numpy matrices pandas dataframes
X_train_df = pd.DataFrame(X_train_t, columns=features)
X_val_df = pd.DataFrame(X_val_t, columns=features)
X_test_df = pd.DataFrame(X_test_t, columns=features)
display(X_train_df.describe())
display(X_val_df.describe())
display(X_test_df.describe())
scaler = StandardScaler().fit(y_train)
y_train = scaler.transform(y_train)
y_val = scaler.transform(y_val)
y_test = scaler.transform(y_test)
###Output
_____no_output_____
###Markdown
One-Degree Polynomial Model
###Code
import statsmodels.api as sm
from statsmodels.regression.linear_model import OLS
model_1 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df)).fit()
model_1.summary()
###Output
_____no_output_____
###Markdown
Two-Degree Polynomial Model
###Code
def add_square_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
return df
X_train_df_2 = add_square_terms(X_train)
X_val_df_2 = add_square_terms(X_val)
# Standardizing our added polynomial features
cols = X_train_df_2.columns
scaler = StandardScaler().fit(X_train_df_2)
X_train_df_2 = pd.DataFrame(scaler.transform(X_train_df_2), columns=cols)
X_val_df_2 = pd.DataFrame(scaler.transform(X_val_df_2), columns=cols)
print(X_train_df.shape, X_train_df_2.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_2.head()
model_2 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_2)).fit()
model_2.summary()
###Output
_____no_output_____
###Markdown
Three-Degree Polynomial Model
###Code
# generalizing our function from above
def add_square_and_cube_terms(df):
df = df.copy()
cols = df.columns.copy()
for col in cols:
df['{}^2'.format(col)] = df[col]**2
df['{}^3'.format(col)] = df[col]**3
return df
X_train_df_3 = add_square_and_cube_terms(X_train_df)
X_val_df_3 = add_square_and_cube_terms(X_val_df)
# Standardizing our added polynomial features
cols = X_train_df_3.columns
scaler = StandardScaler().fit(X_train_df_3)
X_train_df_3 = pd.DataFrame(scaler.transform(X_train_df_3), columns=cols)
X_val_df_3 = pd.DataFrame(scaler.transform(X_val_df_3), columns=cols)
print(X_train_df.shape, X_train_df_3.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_3.head()
model_3 = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_3)).fit()
model_3.summary()
###Output
_____no_output_____
###Markdown
N-Degree Polynomial Model
###Code
# generalizing our function from above
def add_higher_order_polynomial_terms(df, N=7):
df = df.copy()
cols = df.columns.copy()
for col in cols:
for i in range(2, N+1):
df['{}^{}'.format(col, i)] = df[col]**i
return df
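# An analogous helper for the interaction terms mentioned a little further below could look
# like this (a sketch only -- it is not used in the models that follow):
def add_interaction_terms(df):
    df = df.copy()
    cols = list(df.columns)
    for i, col_a in enumerate(cols):
        for col_b in cols[i+1:]:
            df['{}*{}'.format(col_a, col_b)] = df[col_a] * df[col_b]
    return df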
N = 6
X_train_df_N = add_higher_order_polynomial_terms(X_train_df,N)
X_val_df_N = add_higher_order_polynomial_terms(X_val_df,N)
# Standardizing our added polynomial features
cols = X_train_df_N.columns
scaler = StandardScaler().fit(X_train_df_N)
X_train_df_N = pd.DataFrame(scaler.transform(X_train_df_N), columns=cols)
X_val_df_N = pd.DataFrame(scaler.transform(X_val_df_N), columns=cols)
print(X_train_df.shape, X_train_df_N.shape)
# Also check using the describe() function that the mean and standard deviations are the way we want them
X_train_df_N.head()
model_N = OLS(np.array(y_train).reshape(-1,1), sm.add_constant(X_train_df_N)).fit()
model_N.summary()
###Output
_____no_output_____
###Markdown
You can also create a model with interaction terms or any other higher order polynomial term of your choice. **Note:** Can you see how creating a function that takes in a dataframe and a degree and creates polynomial terms up until that degree can be useful? This is what we have you do in your homework! Regularization What is Regularization and why should I care?When we have a lot of predictors, we need to worry about overfitting. Let's check this out:
###Code
from sklearn.metrics import r2_score
x = [1,2,3,N]
models = [model_1, model_2, model_3, model_N]
X_trains = [X_train_df, X_train_df_2, X_train_df_3, X_train_df_N]
X_vals = [X_val_df, X_val_df_2, X_val_df_3, X_val_df_N]
r2_train = []
r2_val = []
for i,model in enumerate(models):
y_pred_tra = model.predict(sm.add_constant(X_trains[i]))
y_pred_val = model.predict(sm.add_constant(X_vals[i]))
r2_train.append(r2_score(y_train, y_pred_tra))
r2_val.append(r2_score(y_val, y_pred_val))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, r2_train, 'o-', label=r'Training $R^2$')
ax.plot(x, r2_val, 'o-', label=r'Validation $R^2$')
ax.set_xlabel('Degree of polynomial')
ax.set_ylabel(r'$R^2$ score')
ax.set_title(r'$R^2$ score vs polynomial degree')
ax.legend();
###Output
_____no_output_____
###Markdown
We notice a big difference between training and validation R^2 scores: it seems like we are overfitting. **Introducing: regularization.** What about Multicollinearity? There's seemingly a lot of multicollinearity in the data. Take a look at this warning that we got when showing our summary for our polynomial models: What is [multicollinearity](https://en.wikipedia.org/wiki/Multicollinearity)? Why do we have it in our dataset? Why is this a problem? Does regularization help solve the issue of multicollinearity? What does Regularization help with?We have some pretty large and extreme coefficient values in our most recent models. These coefficient values also have very high variance. We can also clearly see some overfitting to the training set. In order to reduce the magnitude of these coefficients, we can introduce a penalty term that penalizes some of these extreme coefficient values. Specifically, regularization helps us: 1. Avoid overfitting. Reduce features that have weak predictive power.2. Discourage the use of a model that is too complex Big Idea: Reduce Variance by Increasing BiasImage Source: [here](https://www.cse.wustl.edu/~m.neumann/sp2016/cse517/lecturenotes/lecturenote12.html) Ridge RegressionRidge Regression is one such form of regularization. In practice, the ridge estimator reduces the complexity of the model by shrinking the coefficients, but it doesn’t nullify them. We control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [ridge regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html) represents this $\lambda$ using a parameter alpha. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients. Lasso RegressionLasso Regression is another form of regularization. Again, we control the amount of regularization using a parameter $\lambda$. **NOTE**: sklearn's [lasso regression package](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html) represents this $\lambda$ using a parameter alpha. In Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients. Some Differences between Ridge and Lasso Regression1. Since Lasso regression tends to produce zero estimates for a number of model parameters - we say that Lasso solutions are **sparse** - we consider it to be a method for variable selection.2. In Ridge Regression, the penalty term is proportional to the L2-norm of the coefficients whereas in Lasso Regression, the penalty term is proportional to the L1-norm of the coefficients.3. Ridge Regression has a closed form solution! Lasso Regression does not. We often have to solve this iteratively. In the sklearn package for Lasso regression, there is a parameter called `max_iter` that determines how many iterations we perform. Why Standardizing Variables was not a waste of timeLasso regression puts constraints on the size of the coefficients associated with each variable. However, this value will depend on the magnitude of each variable. It is therefore necessary to standardize the variables. Let's use Ridge and Lasso to regularize our degree N polynomial **Exercise**: Play around with different values of alpha. Notice the new $R^2$ value and also the range of values that the coefficients take in the plot.
###Code
from sklearn.linear_model import Ridge
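# One quick way to see the multicollinearity flagged in the statsmodels summaries above is
# the condition number of the scaled design matrix (illustrative check; np is imported at the top)
print('Condition number of X_train_df_N: {:.3e}'.format(np.linalg.cond(X_train_df_N)))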
# some values you can try out: 0.01, 0.1, 0.5, 1, 5, 10, 20, 40, 100, 200, 500, 1000, 10000
alpha = 100
ridge_model = Ridge(alpha=alpha).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Ridge with alpha={}: {}'.format(alpha, ridge_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(ridge_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Ridge Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
from sklearn.linear_model import Lasso
# some values you can try out: 0.00001, 0.0001, 0.001, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10, 20
alpha = 0.01
lasso_model = Lasso(alpha=alpha, max_iter = 1000).fit(X_train_df_N, y_train)
print('R squared score for our original OLS model: {}'.format(r2_val[-1]))
print('R squared score for Lasso with alpha={}: {}'.format(alpha, lasso_model.score(X_val_df_N,y_val)))
fig, ax = plt.subplots(figsize=(18,8), ncols=2)
ax = ax.ravel()
ax[0].hist(model_N.params, bins=10, alpha=0.5)
ax[0].set_title('Histogram of coefficient values for Original model with N: {}'.format(N))
ax[0].set_xlabel('Coefficient values')
ax[0].set_ylabel('Frequency')
ax[1].hist(lasso_model.coef_.flatten(), bins=20, alpha=0.5)
ax[1].set_title('Histogram of coefficient values for Lasso Model with alpha: {}'.format(alpha))
ax[1].set_xlabel('Coefficient values')
ax[1].set_ylabel('Frequency');
###Output
R squared score for our original OLS model: 0.11935480685103261
R squared score for Lasso with alpha=0.01: 0.6651205006878168
###Markdown
Model Selection and Cross-ValidationHere's our current setup so far: So we try out 10,000 different models on our validation set and pick the one that's the best? No! **Since we could also be overfitting the validation set!** One solution to the problems raised by using a single validation set is to evaluate each model on multiple validation sets and average the validation performance. This is the essence of cross-validation!Image source: [here](https://medium.com/@sebastiannorena/some-model-tuning-methods-bfef3e6544f0)Let's give this a try using [RidgeCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html) and [LassoCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoCV.html):
###Code
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoCV
alphas = (0.001, 0.01, 0.1, 10, 100, 1000, 10000)
# Let us do k-fold cross validation
k = 4
fitted_ridge = RidgeCV(alphas=alphas, cv=k).fit(X_train_df_N, y_train)
fitted_lasso = LassoCV(alphas=alphas, cv=k).fit(X_train_df_N, y_train)
print('R^2 score for our original OLS model: {}\n'.format(r2_val[-1]))
ridge_a = fitted_ridge.alpha_
print('Best alpha for ridge: {}'.format(ridge_a))
print('R^2 score for Ridge with alpha={}: {}\n'.format(ridge_a, fitted_ridge.score(X_val_df_N,y_val)))
lasso_a = fitted_lasso.alpha_
print('Best alpha for lasso: {}'.format(lasso_a))
print('R squared score for Lasso with alpha={}: {}'.format(lasso_a, fitted_lasso.score(X_val_df_N,y_val)))
y_train
###Output
R^2 score for our original OLS model: 0.11935480685103261
Best alpha for ridge: 1000.0
R^2 score for Ridge with alpha=1000.0: 0.6148040375619114
Best alpha for lasso: 0.01
R squared score for Lasso with alpha=0.01: 0.6651205006878168
|
sagemaker_processing/scikit_learn_data_processing_and_model_evaluation/scikit_learn_data_processing_and_model_evaluation.ipynb | ###Markdown
Amazon SageMaker Processing jobsWith Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform.A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.This notebook shows how you can:1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.2. Run a training job on the pre-processed training data to train a model3. Run a processing job on the pre-processed test data to evaluate the trained model's performance4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records being labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. Data pre-processing and feature engineering To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type='ml.m5.xlarge',
instance_count=1)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 10 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you* Remove duplicates and rows with conflicting data* transform the target `income` column into a column containing two labels.* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training* encode the `education`, `major industry code`, `class of worker` so they're suitable for training* split the data into training and test datasets, and save the training features and labels and the test features and labels.Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
columns = ['age', 'education', 'major industry code', 'class of worker', 'num persons worked for employer',
'capital gains', 'capital losses', 'dividends from stocks', 'income']
class_labels = [' - 50000.', ' 50000+.']
def print_shape(df):
negative_examples, positive_examples = np.bincount(df['income'])
print('Data shape: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-test-split-ratio', type=float, default=0.3)
args, _ = parser.parse_known_args()
print('Received arguments {}'.format(args))
input_data_path = os.path.join('/opt/ml/processing/input', 'census-income.csv')
print('Reading input data from {}'.format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df['income'])
print('Data after cleaning: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
split_ratio = args.train_test_split_ratio
print('Splitting data into train and test sets with ratio {}'.format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=split_ratio, random_state=0)
preprocess = make_column_transformer(
(['age', 'num persons worked for employer'], KBinsDiscretizer(encode='onehot-dense', n_bins=10)),
(['capital gains', 'capital losses', 'dividends from stocks'], StandardScaler()),
(['education', 'major industry code', 'class of worker'], OneHotEncoder(sparse=False))
)
print('Running preprocessing and feature engineering transformations')
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print('Train data shape after preprocessing: {}'.format(train_features.shape))
print('Test data shape after preprocessing: {}'.format(test_features.shape))
train_features_output_path = os.path.join('/opt/ml/processing/train', 'train_features.csv')
train_labels_output_path = os.path.join('/opt/ml/processing/train', 'train_labels.csv')
test_features_output_path = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_output_path = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
print('Saving training features to {}'.format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print('Saving test features to {}'.format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print('Saving training labels to {}'.format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print('Saving test labels to {}'.format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`.Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give the ProcessingOutputs values for `output_name`, to make it easier to retrieve these output artifacts after the job is run.The `arguments` parameter in the `run()` method is the list of command-line arguments passed to our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + '/train_features.csv', nrows=10)
print('Training features shape: {}'.format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed dataWe create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point='train.py',
framework_version='0.20.0',
instance_type="ml.m5.xlarge",
role=role)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads into a `model.tar.gz` file into S3 at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__=="__main__":
training_data_directory = '/opt/ml/input/data/train'
train_features_data = os.path.join(training_data_directory, 'train_features.csv')
train_labels_data = os.path.join(training_data_directory, 'train_labels.csv')
print('Reading input data')
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight='balanced', solver='lbfgs')
print('Training LR model')
model.fit(X_train, y_train)
model_output_directory = os.path.join('/opt/ml/model', "model.joblib")
print('Saving model to {}'.format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({'train': preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = '{}{}/{}'.format(
training_job_description['OutputDataConfig']['S3OutputPath'],
training_job_description['TrainingJobName'],
'output/model.tar.gz')
###Output
_____no_output_____
###Markdown
Model Evaluation`evaluation.py` is the model evaluation script. Since the script also runs using scikit-learn as a dependency, run this using the `SKLearnProcessor` you created previously. This script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__=="__main__":
model_path = os.path.join('/opt/ml/processing/model', 'model.tar.gz')
print('Extracting model from path: {}'.format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path='.')
print('Loading model')
model = joblib.load('model.joblib')
print('Loading test input data')
test_features_data = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_data = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print('Creating classification evaluation report')
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict['accuracy'] = accuracy_score(y_test, predictions)
report_dict['roc_auc'] = roc_auc_score(y_test, predictions)
print('Classification report:\n{}'.format(report_dict))
evaluation_output_path = os.path.join('/opt/ml/processing/evaluation', 'evaluation.json')
print('Saving classification report to {}'.format(evaluation_output_path))
with open(evaluation_output_path, 'w') as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(code='evaluation.py',
inputs=[ProcessingInput(
source=model_data_s3_uri,
destination='/opt/ml/processing/model'),
ProcessingInput(
source=preprocessed_test_data,
destination='/opt/ml/processing/test')],
outputs=[ProcessingOutput(output_name='evaluation',
source='/opt/ml/processing/evaluation')]
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description['ProcessingOutputConfig']
for output in evaluation_output_config['Outputs']:
if output['OutputName'] == 'evaluation':
evaluation_s3_uri = output['S3Output']['S3Uri'] + '/evaluation.json'
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
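# Individual metrics can also be pulled from the report dict; the evaluation script above
# added these two keys explicitly:
print('Accuracy: {}'.format(evaluation_output_dict['accuracy']))
print('ROC AUC: {}'.format(evaluation_output_dict['roc_auc']))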
###Output
_____no_output_____
###Markdown
Running processing jobs with your own dependenciesAbove, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container.Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
ecr_repository = 'sagemaker-processing-container'
tag = ':latest'
uri_suffix = 'amazonaws.com'
if region in ['cn-north-1', 'cn-northwest-1']:
uri_suffix = 'amazonaws.com.cn'
processing_repository_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository + tag)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(command=['python3'],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Amazon SageMaker Processing jobsWith Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform.A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.This notebook shows how you can:1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.2. Run a training job on the pre-processed training data to train a model3. Run a processing job on the pre-processed test data to evaluate the trained model's performance4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records being labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. Data pre-processing and feature engineering To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(
framework_version="0.20.0", role=role, instance_type="ml.m5.xlarge", instance_count=1
)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 10 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = "s3://sagemaker-sample-data-{}/processing/census/census-income.csv".format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you* Remove duplicates and rows with conflicting data* transform the target `income` column into a column containing two labels.* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training* encode the `education`, `major industry code`, `class of worker` so they're suitable for training* split the data into training and test datasets, and save the training features and labels and the test features and labels.Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
columns = [
"age",
"education",
"major industry code",
"class of worker",
"num persons worked for employer",
"capital gains",
"capital losses",
"dividends from stocks",
"income",
]
class_labels = [" - 50000.", " 50000+."]
def print_shape(df):
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data shape: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--train-test-split-ratio", type=float, default=0.3)
args, _ = parser.parse_known_args()
print("Received arguments {}".format(args))
input_data_path = os.path.join("/opt/ml/processing/input", "census-income.csv")
print("Reading input data from {}".format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data after cleaning: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
split_ratio = args.train_test_split_ratio
print("Splitting data into train and test sets with ratio {}".format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(
df.drop("income", axis=1), df["income"], test_size=split_ratio, random_state=0
)
preprocess = make_column_transformer(
(
["age", "num persons worked for employer"],
KBinsDiscretizer(encode="onehot-dense", n_bins=10),
),
(["capital gains", "capital losses", "dividends from stocks"], StandardScaler()),
(["education", "major industry code", "class of worker"], OneHotEncoder(sparse=False)),
)
print("Running preprocessing and feature engineering transformations")
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print("Train data shape after preprocessing: {}".format(train_features.shape))
print("Test data shape after preprocessing: {}".format(test_features.shape))
train_features_output_path = os.path.join("/opt/ml/processing/train", "train_features.csv")
train_labels_output_path = os.path.join("/opt/ml/processing/train", "train_labels.csv")
test_features_output_path = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_output_path = os.path.join("/opt/ml/processing/test", "test_labels.csv")
print("Saving training features to {}".format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print("Saving test features to {}".format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print("Saving training labels to {}".format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print("Saving test labels to {}".format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`.Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give the ProcessingOutputs values for `output_name`, to make it easier to retrieve these output artifacts after the job is run.The `arguments` parameter in the `run()` method is the list of command-line arguments passed to our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(
code="preprocessing.py",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description["ProcessingOutputConfig"]
for output in output_config["Outputs"]:
if output["OutputName"] == "train_data":
preprocessed_training_data = output["S3Output"]["S3Uri"]
if output["OutputName"] == "test_data":
preprocessed_test_data = output["S3Output"]["S3Uri"]
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + "/train_features.csv", nrows=10)
print("Training features shape: {}".format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed dataWe create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point="train.py", framework_version="0.20.0", instance_type="ml.m5.xlarge", role=role
)
###Output
_____no_output_____
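###Markdown
If you later want to parameterize the training script (for example, to tune the regularization strength), the estimator also accepts a `hyperparameters` dictionary that SageMaker forwards to the script as command-line arguments. This is only a sketch: the `C` value is illustrative, and the `train.py` script written below does not parse any arguments, so it would simply ignore it unless you extend the script with `argparse`.
###Code
from sagemaker.sklearn.estimator import SKLearn

# Hypothetical parameterized estimator; nothing in this notebook depends on it.
sklearn_with_params = SKLearn(
    entry_point="train.py",
    framework_version="0.20.0",
    instance_type="ml.m5.xlarge",
    role=role,
    hyperparameters={"C": 1.0},  # would surface as `--C 1.0` if train.py parsed args
)
###Output
_____no_output_____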
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads to S3 as a `model.tar.gz` file at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__ == "__main__":
training_data_directory = "/opt/ml/input/data/train"
train_features_data = os.path.join(training_data_directory, "train_features.csv")
train_labels_data = os.path.join(training_data_directory, "train_labels.csv")
print("Reading input data")
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight="balanced", solver="lbfgs")
print("Training LR model")
model.fit(X_train, y_train)
model_output_directory = os.path.join("/opt/ml/model", "model.joblib")
print("Saving model to {}".format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
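###Markdown
For context, the `"train"` channel name used in the `sklearn.fit({"train": preprocessed_training_data})` call below is what maps the preprocessed data to `/opt/ml/input/data/train`, the directory `train.py` reads from. The cell below is an optional, more explicit form of that same input built with `TrainingInput`; constructing it has no side effects.
###Code
from sagemaker.inputs import TrainingInput

# Explicit channel definition; passing {"train": train_channel} to sklearn.fit()
# would behave like the simpler fit() call used below.
train_channel = TrainingInput(s3_data=preprocessed_training_data, content_type="text/csv")
###Output
_____no_output_____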
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({"train": preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = "{}{}/{}".format(
training_job_description["OutputDataConfig"]["S3OutputPath"],
training_job_description["TrainingJobName"],
"output/model.tar.gz",
)
###Output
_____no_output_____
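###Markdown
The string formatting above reconstructs the artifact location from the job description. After `fit()` completes, the estimator also exposes the same S3 URI directly via its `model_data` attribute, which is a convenient cross-check.
###Code
# Both of these should point at the model.tar.gz produced by the training job.
print("model_data from the estimator: {}".format(sklearn.model_data))
print("model_data assembled manually: {}".format(model_data_s3_uri))
###Output
_____no_output_____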
###Markdown
Model Evaluation. `evaluation.py` is the model evaluation script. Since the script also runs using scikit-learn as a dependency, run this using the `SKLearnProcessor` you created previously. This script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__ == "__main__":
model_path = os.path.join("/opt/ml/processing/model", "model.tar.gz")
print("Extracting model from path: {}".format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path=".")
print("Loading model")
model = joblib.load("model.joblib")
print("Loading test input data")
test_features_data = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_data = os.path.join("/opt/ml/processing/test", "test_labels.csv")
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print("Creating classification evaluation report")
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict["accuracy"] = accuracy_score(y_test, predictions)
report_dict["roc_auc"] = roc_auc_score(y_test, predictions)
print("Classification report:\n{}".format(report_dict))
evaluation_output_path = os.path.join("/opt/ml/processing/evaluation", "evaluation.json")
print("Saving classification report to {}".format(evaluation_output_path))
with open(evaluation_output_path, "w") as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(
code="evaluation.py",
inputs=[
ProcessingInput(source=model_data_s3_uri, destination="/opt/ml/processing/model"),
ProcessingInput(source=preprocessed_test_data, destination="/opt/ml/processing/test"),
],
outputs=[ProcessingOutput(output_name="evaluation", source="/opt/ml/processing/evaluation")],
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description["ProcessingOutputConfig"]
for output in evaluation_output_config["Outputs"]:
if output["OutputName"] == "evaluation":
evaluation_s3_uri = output["S3Output"]["S3Uri"] + "/evaluation.json"
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
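###Markdown
Because the report is now a plain dictionary, downstream automation can key off individual metrics. The cell below sketches a simple quality gate on the overall ROC AUC and accuracy; the 0.60 threshold is purely illustrative and not part of the original workflow.
###Code
# Illustrative quality gate; adjust or remove the threshold for your use case.
roc_auc = evaluation_output_dict["roc_auc"]
accuracy = evaluation_output_dict["accuracy"]
print("ROC AUC: {:.4f}, accuracy: {:.4f}".format(roc_auc, accuracy))

if roc_auc < 0.60:
    print("Model quality is below the illustrative threshold; consider re-training.")
else:
    print("Model quality is acceptable for this example.")
###Output
_____no_output_____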
###Markdown
Running processing jobs with FrameworkProcessor to include custom dependencies. Above, you used a processing container that has scikit-learn installed, but there was no way to add extra dependencies to the processing container. A new sub-class of the existing `ScriptProcessor`, called `FrameworkProcessor`, has been added to the SageMaker SDK; it lets you specify a `source_dir` that can include a `requirements.txt` file. The processor first installs the listed dependencies inside the target container before the processing job starts. Below is a simple example of how to use a `FrameworkProcessor` to run your own code, with your own dependencies, inside the scikit-learn framework container. Run a processing job using the same `preprocessing.py` script you used above, providing your dependencies via `source_dir` and `requirements.txt`.
###Code
from sagemaker.processing import FrameworkProcessor
est_cls = sagemaker.sklearn.estimator.SKLearn
framework_version_str = "0.20.0"
script_processor = FrameworkProcessor(
role=role,
instance_count=1,
instance_type="ml.m5.xlarge",
estimator_cls=est_cls,
framework_version=framework_version_str
)
###Output
_____no_output_____
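###Markdown
`FrameworkProcessor` packages the `source_dir` passed to `run()` and installs any `requirements.txt` it finds there before your script starts. The actual contents of `code/requirements.txt` are not shown in this notebook, so the cell below creates the directory and writes a purely illustrative file (pinning `pandas` as an example extra dependency); replace it with whatever your processing script really needs.
###Code
import os

# Create the source_dir used by the run() call below and write an example
# requirements.txt; the pinned package is only an illustration.
os.makedirs("code", exist_ok=True)
with open("code/requirements.txt", "w") as f:
    f.write("pandas==0.25.3\n")
###Output
_____no_output_____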
###Markdown
Run the same `preprocessing.py` script you ran above, but this time the code runs inside a container that has the additional dependencies installed. You can update the processing script to use these dependencies and perform custom processing. This approach can be applied to your own pre-processing, feature-engineering, and model evaluation scripts. First, copy the previously created `preprocessing.py` script into the `code` folder so the next `run()` call can package it.
###Code
!cp preprocessing.py ./code
from sagemaker.processing import ProcessingInput, ProcessingOutput
script_processor.run(
code="preprocessing.py",
source_dir="code",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Summary. As you can see, all the dependencies from the `requirements.txt` file are installed in the target container before the processing script starts. `FrameworkProcessor` makes it easy to use the existing built-in framework processors (scikit-learn, MXNet, PyTorch, etc.) while specifying custom dependencies for the processing script. (Optional) Next Steps. As an optional step, this notebook shows below how to build a custom container from scratch using `docker`. By default, the optional steps do not run automatically; set `run_optional_steps` to True if you want to execute them.
###Code
run_optional_steps = False
# This will stop the below cells from executing if "Run All Cells" was used on the notebook.
if not run_optional_steps:
raise SystemExit("Stop here. Do not automatically execute optional steps.")
###Output
_____no_output_____
###Markdown
(Optional) Running processing jobs with your own dependencies. Above, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container. Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
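###Markdown
Optionally, you can build and smoke-test the image locally before wiring it up to ECR; the extra build is cheap because the build-and-push cell below will reuse the cached layers. Since the Dockerfile sets `python3` as the entrypoint, everything after the image name is passed straight to the interpreter. This assumes Docker is available on the notebook instance.
###Code
!docker build -t sagemaker-processing-container-smoke-test docker
!docker run --rm sagemaker-processing-container-smoke-test -c "import pandas, sklearn; print(pandas.__version__, sklearn.__version__)"
###Output
_____no_output_____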
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client("sts").get_caller_identity().get("Account")
ecr_repository = "sagemaker-processing-container"
tag = ":latest"
uri_suffix = "amazonaws.com"
if region in ["cn-north-1", "cn-northwest-1"]:
uri_suffix = "amazonaws.com.cn"
processing_repository_uri = "{}.dkr.ecr.{}.{}/{}".format(
account_id, region, uri_suffix, ecr_repository + tag
)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
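###Markdown
A note on the login step above: `aws ecr get-login` exists only in AWS CLI v1 and was removed in v2. If that call failed in your environment, the sketch below authenticates Docker against ECR using the v2 `get-login-password` flow (reusing the `region`, `account_id`, and `uri_suffix` variables defined above), after which you can re-run the tag and push commands.
###Code
# AWS CLI v2 equivalent of `aws ecr get-login`; only needed if the v1 command failed.
ecr_registry = "{}.dkr.ecr.{}.{}".format(account_id, region, uri_suffix)
!aws ecr get-login-password --region $region | docker login --username AWS --password-stdin $ecr_registry
###Output
_____no_output_____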
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(
command=["python3"],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type="ml.m5.xlarge",
)
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(
code="preprocessing.py",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
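###Markdown
As a final check, mirroring what you did for the earlier jobs, you can pull the job status and output locations out of the description instead of printing the whole dictionary; the keys used below come from the standard `DescribeProcessingJob` response.
###Code
print("Job status: {}".format(script_processor_job_description["ProcessingJobStatus"]))
for output in script_processor_job_description["ProcessingOutputConfig"]["Outputs"]:
    print("{} -> {}".format(output["OutputName"], output["S3Output"]["S3Uri"]))
###Output
_____no_output_____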
Amazon SageMaker Processing jobsWith Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform.A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.This notebook shows how you can:1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.2. Run a training job on the pre-processed training data to train a model3. Run a processing job on the pre-processed test data to evaluate the trained model's performance4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records being labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. Data pre-processing and feature engineering To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type='ml.m5.xlarge',
instance_count=1)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 20 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you* Remove duplicates and rows with conflicting data* transform the target `income` column into a column containing two labels.* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training* encode the `education`, `major industry code`, `class of worker` so they're suitable for training* split the data into training and test datasets, and saves the training features and labels and test features and labels.Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
columns = ['age', 'education', 'major industry code', 'class of worker', 'num persons worked for employer',
'capital gains', 'capital losses', 'dividends from stocks', 'income']
class_labels = [' - 50000.', ' 50000+.']
def print_shape(df):
negative_examples, positive_examples = np.bincount(df['income'])
print('Data shape: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-test-split-ratio', type=float, default=0.3)
args, _ = parser.parse_known_args()
print('Received arguments {}'.format(args))
input_data_path = os.path.join('/opt/ml/processing/input', 'census-income.csv')
print('Reading input data from {}'.format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df['income'])
print('Data after cleaning: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
split_ratio = args.train_test_split_ratio
print('Splitting data into train and test sets with ratio {}'.format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=split_ratio, random_state=0)
preprocess = make_column_transformer(
(['age', 'num persons worked for employer'], KBinsDiscretizer(encode='onehot-dense', n_bins=10)),
(['capital gains', 'capital losses', 'dividends from stocks'], StandardScaler()),
(['education', 'major industry code', 'class of worker'], OneHotEncoder(sparse=False))
)
print('Running preprocessing and feature engineering transformations')
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print('Train data shape after preprocessing: {}'.format(train_features.shape))
print('Test data shape after preprocessing: {}'.format(test_features.shape))
train_features_output_path = os.path.join('/opt/ml/processing/train', 'train_features.csv')
train_labels_output_path = os.path.join('/opt/ml/processing/train', 'train_labels.csv')
test_features_output_path = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_output_path = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
print('Saving training features to {}'.format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print('Saving test features to {}'.format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print('Saving training labels to {}'.format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print('Saving test labels to {}'.format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`.Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker--//output/<output_name/`. You also give the ProcessingOutputs values for `output_name`, to make it easier to retrieve these output artifacts after the job is run.The `arguments` parameter in the `run()` method are command-line arguments in our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + '/train_features.csv', nrows=10)
print('Training features shape: {}'.format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed dataWe create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point='train.py',
train_instance_type="ml.m5.xlarge",
role=role)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads into a `model.tar.gz` file into S3 at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__=="__main__":
training_data_directory = '/opt/ml/input/data/train'
train_features_data = os.path.join(training_data_directory, 'train_features.csv')
train_labels_data = os.path.join(training_data_directory, 'train_labels.csv')
print('Reading input data')
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight='balanced', solver='lbfgs')
print('Training LR model')
model.fit(X_train, y_train)
model_output_directory = os.path.join('/opt/ml/model', "model.joblib")
print('Saving model to {}'.format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({'train': preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = '{}{}/{}'.format(
training_job_description['OutputDataConfig']['S3OutputPath'],
training_job_description['TrainingJobName'],
'output/model.tar.gz')
###Output
_____no_output_____
###Markdown
Model Evaluation`evaluation.py` is the model evaluation script. Since the script also runs using scikit-learn as a dependency, run this using the `SKLearnProcessor` you created previously. This script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__=="__main__":
model_path = os.path.join('/opt/ml/processing/model', 'model.tar.gz')
print('Extracting model from path: {}'.format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path='.')
print('Loading model')
model = joblib.load('model.joblib')
print('Loading test input data')
test_features_data = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_data = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print('Creating classification evaluation report')
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict['accuracy'] = accuracy_score(y_test, predictions)
report_dict['roc_auc'] = roc_auc_score(y_test, predictions)
print('Classification report:\n{}'.format(report_dict))
evaluation_output_path = os.path.join('/opt/ml/processing/evaluation', 'evaluation.json')
print('Saving classification report to {}'.format(evaluation_output_path))
with open(evaluation_output_path, 'w') as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(code='evaluation.py',
inputs=[ProcessingInput(
source=model_data_s3_uri,
destination='/opt/ml/processing/model'),
ProcessingInput(
source=preprocessed_test_data,
destination='/opt/ml/processing/test')],
outputs=[ProcessingOutput(output_name='evaluation',
source='/opt/ml/processing/evaluation')]
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description['ProcessingOutputConfig']
for output in evaluation_output_config['Outputs']:
if output['OutputName'] == 'evaluation':
evaluation_s3_uri = output['S3Output']['S3Uri'] + '/evaluation.json'
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Running processing jobs with your own dependenciesAbove, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container.Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
ecr_repository = 'sagemaker-processing-container'
tag = ':latest'
uri_suffix = 'amazonaws.com'
if region == 'cn-north-1' or 'cn-northwest-1':
uri_suffix = 'amazonaws.com.cn'
processing_repository_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository + tag)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(command=['python3'],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Amazon SageMaker Processing jobsWith Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform.A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.This notebook shows how you can:1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.2. Run a training job on the pre-processed training data to train a model3. Run a processing job on the pre-processed test data to evaluate the trained model's performance4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records being labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. Data pre-processing and feature engineering To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type='ml.m5.xlarge',
instance_count=1)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 20 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you* Remove duplicates and rows with conflicting data* transform the target `income` column into a column containing two labels.* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training* encode the `education`, `major industry code`, `class of worker` so they're suitable for training* split the data into training and test datasets, and saves the training features and labels and test features and labels.Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
columns = ['age', 'education', 'major industry code', 'class of worker', 'num persons worked for employer',
'capital gains', 'capital losses', 'dividends from stocks', 'income']
class_labels = [' - 50000.', ' 50000+.']
def print_shape(df):
negative_examples, positive_examples = np.bincount(df['income'])
print('Data shape: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-test-split-ratio', type=float, default=0.3)
args, _ = parser.parse_known_args()
print('Received arguments {}'.format(args))
input_data_path = os.path.join('/opt/ml/processing/input', 'census-income.csv')
print('Reading input data from {}'.format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df['income'])
print('Data after cleaning: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
split_ratio = args.train_test_split_ratio
print('Splitting data into train and test sets with ratio {}'.format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=split_ratio, random_state=0)
preprocess = make_column_transformer(
(['age', 'num persons worked for employer'], KBinsDiscretizer(encode='onehot-dense', n_bins=10)),
(['capital gains', 'capital losses', 'dividends from stocks'], StandardScaler()),
(['education', 'major industry code', 'class of worker'], OneHotEncoder(sparse=False))
)
print('Running preprocessing and feature engineering transformations')
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print('Train data shape after preprocessing: {}'.format(train_features.shape))
print('Test data shape after preprocessing: {}'.format(test_features.shape))
train_features_output_path = os.path.join('/opt/ml/processing/train', 'train_features.csv')
train_labels_output_path = os.path.join('/opt/ml/processing/train', 'train_labels.csv')
test_features_output_path = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_output_path = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
print('Saving training features to {}'.format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print('Saving test features to {}'.format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print('Saving training labels to {}'.format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print('Saving test labels to {}'.format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`.Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker--//output/<output_name/`. You also give the ProcessingOutputs values for `output_name`, to make it easier to retrieve these output artifacts after the job is run.The `arguments` parameter in the `run()` method are command-line arguments in our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + '/train_features.csv', nrows=10)
print('Training features shape: {}'.format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed dataWe create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point='train.py',
train_instance_type="ml.m5.xlarge",
role=role)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads into a `model.tar.gz` file into S3 at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__=="__main__":
training_data_directory = '/opt/ml/input/data/train'
train_features_data = os.path.join(training_data_directory, 'train_features.csv')
train_labels_data = os.path.join(training_data_directory, 'train_labels.csv')
print('Reading input data')
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight='balanced', solver='lbfgs')
print('Training LR model')
model.fit(X_train, y_train)
model_output_directory = os.path.join('/opt/ml/model', "model.joblib")
print('Saving model to {}'.format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({'train': preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = '{}{}/{}'.format(
training_job_description['OutputDataConfig']['S3OutputPath'],
training_job_description['TrainingJobName'],
'output/model.tar.gz')
###Output
_____no_output_____
###Markdown
Model Evaluation`evaluation.py` is the model evaluation script. Since the script also runs using scikit-learn as a dependency, run this using the `SKLearnProcessor` you created previously. This script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__=="__main__":
model_path = os.path.join('/opt/ml/processing/model', 'model.tar.gz')
print('Extracting model from path: {}'.format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path='.')
print('Loading model')
model = joblib.load('model.joblib')
print('Loading test input data')
test_features_data = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_data = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print('Creating classification evaluation report')
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict['accuracy'] = accuracy_score(y_test, predictions)
report_dict['roc_auc'] = roc_auc_score(y_test, predictions)
print('Classification report:\n{}'.format(report_dict))
evaluation_output_path = os.path.join('/opt/ml/processing/evaluation', 'evaluation.json')
print('Saving classification report to {}'.format(evaluation_output_path))
with open(evaluation_output_path, 'w') as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(code='evaluation.py',
inputs=[ProcessingInput(
source=model_data_s3_uri,
destination='/opt/ml/processing/model'),
ProcessingInput(
source=preprocessed_test_data,
destination='/opt/ml/processing/test')],
outputs=[ProcessingOutput(output_name='evaluation',
source='/opt/ml/processing/evaluation')]
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description['ProcessingOutputConfig']
for output in evaluation_output_config['Outputs']:
if output['OutputName'] == 'evaluation':
evaluation_s3_uri = output['S3Output']['S3Uri'] + '/evaluation.json'
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Running processing jobs with your own dependenciesAbove, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container.Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
ecr_repository = 'sagemaker-processing-container'
tag = ':latest'
processing_repository_uri = '{}.dkr.ecr.{}.amazonaws.com/{}'.format(account_id, region, ecr_repository + tag)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(command=['python3'],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Amazon SageMaker Processing jobsWith Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform.A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.This notebook shows how you can:1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.2. Run a training job on the pre-processed training data to train a model3. Run a processing job on the pre-processed test data to evaluate the trained model's performance4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records being labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. Data pre-processing and feature engineering To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type='ml.m5.xlarge',
instance_count=1)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 20 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you* Remove duplicates and rows with conflicting data* transform the target `income` column into a column containing two labels.* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training* encode the `education`, `major industry code`, `class of worker` so they're suitable for training* split the data into training and test datasets, and saves the training features and labels and test features and labels.Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
columns = ['age', 'education', 'major industry code', 'class of worker', 'num persons worked for employer',
'capital gains', 'capital losses', 'dividends from stocks', 'income']
class_labels = [' - 50000.', ' 50000+.']
def print_shape(df):
negative_examples, positive_examples = np.bincount(df['income'])
print('Data shape: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-test-split-ratio', type=float, default=0.3)
args, _ = parser.parse_known_args()
print('Received arguments {}'.format(args))
input_data_path = os.path.join('/opt/ml/processing/input', 'census-income.csv')
print('Reading input data from {}'.format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df['income'])
print('Data after cleaning: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
split_ratio = args.train_test_split_ratio
print('Splitting data into train and test sets with ratio {}'.format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=split_ratio, random_state=0)
preprocess = make_column_transformer(
(['age', 'num persons worked for employer'], KBinsDiscretizer(encode='onehot-dense', n_bins=10)),
(['capital gains', 'capital losses', 'dividends from stocks'], StandardScaler()),
(['education', 'major industry code', 'class of worker'], OneHotEncoder(sparse=False))
)
print('Running preprocessing and feature engineering transformations')
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print('Train data shape after preprocessing: {}'.format(train_features.shape))
print('Test data shape after preprocessing: {}'.format(test_features.shape))
train_features_output_path = os.path.join('/opt/ml/processing/train', 'train_features.csv')
train_labels_output_path = os.path.join('/opt/ml/processing/train', 'train_labels.csv')
test_features_output_path = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_output_path = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
print('Saving training features to {}'.format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print('Saving test features to {}'.format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print('Saving training labels to {}'.format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print('Saving test labels to {}'.format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`.Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker--//output/<output_name/`. You also give the ProcessingOutputs values for `output_name`, to make it easier to retrieve these output artifacts after the job is run.The `arguments` parameter in the `run()` method are command-line arguments in our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + '/train_features.csv', nrows=10)
print('Training features shape: {}'.format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed dataWe create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point='train.py',
framework_version='0.20.0',
train_instance_type="ml.m5.xlarge",
role=role)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads into a `model.tar.gz` file into S3 at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__=="__main__":
training_data_directory = '/opt/ml/input/data/train'
train_features_data = os.path.join(training_data_directory, 'train_features.csv')
train_labels_data = os.path.join(training_data_directory, 'train_labels.csv')
print('Reading input data')
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight='balanced', solver='lbfgs')
print('Training LR model')
model.fit(X_train, y_train)
model_output_directory = os.path.join('/opt/ml/model', "model.joblib")
print('Saving model to {}'.format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({'train': preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = '{}{}/{}'.format(
training_job_description['OutputDataConfig']['S3OutputPath'],
training_job_description['TrainingJobName'],
'output/model.tar.gz')
###Output
_____no_output_____
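###Markdown
If you want to inspect the trained artifact locally, a minimal sketch (assuming you have read access to the output bucket) is to download `model.tar.gz` with `S3Downloader` and list its contents; you should see the `model.joblib` file written by `train.py`.
###Code
# Sketch: download the model artifact and list what the training job packaged.
import tarfile
from sagemaker.s3 import S3Downloader

S3Downloader.download(model_data_s3_uri, 'model_artifacts')
with tarfile.open('model_artifacts/model.tar.gz') as tar:
    print(tar.getnames())
###Output
_____no_output_____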
###Markdown
Model Evaluation
`evaluation.py` is the model evaluation script. Since this script also uses scikit-learn as a dependency, run it using the `SKLearnProcessor` you created previously. The script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__=="__main__":
model_path = os.path.join('/opt/ml/processing/model', 'model.tar.gz')
print('Extracting model from path: {}'.format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path='.')
print('Loading model')
model = joblib.load('model.joblib')
print('Loading test input data')
test_features_data = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_data = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print('Creating classification evaluation report')
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict['accuracy'] = accuracy_score(y_test, predictions)
report_dict['roc_auc'] = roc_auc_score(y_test, predictions)
print('Classification report:\n{}'.format(report_dict))
evaluation_output_path = os.path.join('/opt/ml/processing/evaluation', 'evaluation.json')
print('Saving classification report to {}'.format(evaluation_output_path))
with open(evaluation_output_path, 'w') as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(code='evaluation.py',
inputs=[ProcessingInput(
source=model_data_s3_uri,
destination='/opt/ml/processing/model'),
ProcessingInput(
source=preprocessed_test_data,
destination='/opt/ml/processing/test')],
outputs=[ProcessingOutput(output_name='evaluation',
source='/opt/ml/processing/evaluation')]
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description['ProcessingOutputConfig']
for output in evaluation_output_config['Outputs']:
if output['OutputName'] == 'evaluation':
evaluation_s3_uri = output['S3Output']['S3Uri'] + '/evaluation.json'
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
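###Markdown
The report is a plain dictionary, so individual metrics are easy to pull out. A short sketch using the `accuracy` and `roc_auc` keys that `evaluation.py` writes:
###Code
# Sketch: read two headline metrics from the evaluation report.
print('Accuracy: {:.3f}'.format(evaluation_output_dict['accuracy']))
print('ROC AUC:  {:.3f}'.format(evaluation_output_dict['roc_auc']))
###Output
_____no_output_____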
###Markdown
Running processing jobs with your own dependencies
Above, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container. Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
ecr_repository = 'sagemaker-processing-container'
tag = ':latest'
uri_suffix = 'amazonaws.com'
if region in ['cn-north-1', 'cn-northwest-1']:
uri_suffix = 'amazonaws.com.cn'
processing_repository_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository + tag)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
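###Markdown
Optionally, you can verify the push succeeded by asking Amazon ECR for the images in the repository (a sketch using the standard `describe-images` CLI call):
###Code
# Sketch: list the image(s) now stored in the ECR repository.
!aws ecr describe-images --repository-name $ecr_repository --region $region
###Output
_____no_output_____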
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(command=['python3'],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Amazon SageMaker Processing jobs
With Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform. A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.
This notebook shows how you can:
1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.
2. Run a training job on the pre-processed training data to train a model.
3. Run a processing job on the pre-processed test data to evaluate the trained model's performance.
4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.
The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, then split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
Data pre-processing and feature engineering
To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type='ml.m5.xlarge',
instance_count=1)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 10 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you:
* remove duplicates and rows with conflicting data,
* transform the target `income` column into a column containing two labels,
* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them,
* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` columns so they're suitable for training,
* encode the `education`, `major industry code`, and `class of worker` columns so they're suitable for training,
* split the data into training and test datasets, and save the training features and labels and test features and labels.
Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
columns = ['age', 'education', 'major industry code', 'class of worker', 'num persons worked for employer',
'capital gains', 'capital losses', 'dividends from stocks', 'income']
class_labels = [' - 50000.', ' 50000+.']
def print_shape(df):
negative_examples, positive_examples = np.bincount(df['income'])
print('Data shape: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-test-split-ratio', type=float, default=0.3)
args, _ = parser.parse_known_args()
print('Received arguments {}'.format(args))
input_data_path = os.path.join('/opt/ml/processing/input', 'census-income.csv')
print('Reading input data from {}'.format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df['income'])
print('Data after cleaning: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
split_ratio = args.train_test_split_ratio
print('Splitting data into train and test sets with ratio {}'.format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=split_ratio, random_state=0)
preprocess = make_column_transformer(
(['age', 'num persons worked for employer'], KBinsDiscretizer(encode='onehot-dense', n_bins=10)),
(['capital gains', 'capital losses', 'dividends from stocks'], StandardScaler()),
(['education', 'major industry code', 'class of worker'], OneHotEncoder(sparse=False))
)
print('Running preprocessing and feature engineering transformations')
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print('Train data shape after preprocessing: {}'.format(train_features.shape))
print('Test data shape after preprocessing: {}'.format(test_features.shape))
train_features_output_path = os.path.join('/opt/ml/processing/train', 'train_features.csv')
train_labels_output_path = os.path.join('/opt/ml/processing/train', 'train_labels.csv')
test_features_output_path = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_output_path = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
print('Saving training features to {}'.format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print('Saving test features to {}'.format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print('Saving training labels to {}'.format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print('Saving test labels to {}'.format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`. Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give each `ProcessingOutput` a value for `output_name`, to make it easier to retrieve these output artifacts after the job is run. The `arguments` parameter of the `run()` method passes command-line arguments to our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + '/train_features.csv', nrows=10)
print('Training features shape: {}'.format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed data
We create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point='train.py',
train_instance_type="ml.m5.xlarge",
role=role)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads to S3 as a `model.tar.gz` file at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__=="__main__":
training_data_directory = '/opt/ml/input/data/train'
train_features_data = os.path.join(training_data_directory, 'train_features.csv')
train_labels_data = os.path.join(training_data_directory, 'train_labels.csv')
print('Reading input data')
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight='balanced', solver='lbfgs')
print('Training LR model')
model.fit(X_train, y_train)
model_output_directory = os.path.join('/opt/ml/model', "model.joblib")
print('Saving model to {}'.format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({'train': preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = '{}{}/{}'.format(
training_job_description['OutputDataConfig']['S3OutputPath'],
training_job_description['TrainingJobName'],
'output/model.tar.gz')
###Output
_____no_output_____
###Markdown
Model Evaluation
`evaluation.py` is the model evaluation script. Since this script also uses scikit-learn as a dependency, run it using the `SKLearnProcessor` you created previously. The script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__=="__main__":
model_path = os.path.join('/opt/ml/processing/model', 'model.tar.gz')
print('Extracting model from path: {}'.format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path='.')
print('Loading model')
model = joblib.load('model.joblib')
print('Loading test input data')
test_features_data = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_data = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print('Creating classification evaluation report')
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict['accuracy'] = accuracy_score(y_test, predictions)
report_dict['roc_auc'] = roc_auc_score(y_test, predictions)
print('Classification report:\n{}'.format(report_dict))
evaluation_output_path = os.path.join('/opt/ml/processing/evaluation', 'evaluation.json')
print('Saving classification report to {}'.format(evaluation_output_path))
with open(evaluation_output_path, 'w') as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(code='evaluation.py',
inputs=[ProcessingInput(
source=model_data_s3_uri,
destination='/opt/ml/processing/model'),
ProcessingInput(
source=preprocessed_test_data,
destination='/opt/ml/processing/test')],
outputs=[ProcessingOutput(output_name='evaluation',
source='/opt/ml/processing/evaluation')]
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description['ProcessingOutputConfig']
for output in evaluation_output_config['Outputs']:
if output['OutputName'] == 'evaluation':
evaluation_s3_uri = output['S3Output']['S3Uri'] + '/evaluation.json'
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Running processing jobs with your own dependencies
Above, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container. Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client('sts').get_caller_identity().get('Account')
ecr_repository = 'sagemaker-processing-container'
tag = ':latest'
uri_suffix = 'amazonaws.com'
if region in ['cn-north-1', 'cn-northwest-1']:
uri_suffix = 'amazonaws.com.cn'
processing_repository_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository + tag)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(command=['python3'],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
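###Markdown
Printing the full describe output is verbose; a small sketch that pulls out just the job status and the output S3 locations is often all you need:
###Code
# Sketch: summarize the processing job instead of dumping the whole describe() dict.
print('Status:', script_processor_job_description['ProcessingJobStatus'])
for out in script_processor_job_description['ProcessingOutputConfig']['Outputs']:
    print(out['OutputName'], '->', out['S3Output']['S3Uri'])
###Output
_____no_output_____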
###Markdown
Amazon SageMaker Processing jobs
With Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform. A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.
This notebook shows how you can:
1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.
2. Run a training job on the pre-processed training data to train a model.
3. Run a processing job on the pre-processed test data to evaluate the trained model's performance.
4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.
The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, then split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
Data pre-processing and feature engineering
To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(
framework_version="0.20.0", role=role, instance_type="ml.m5.xlarge", instance_count=1
)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 10 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = "s3://sagemaker-sample-data-{}/processing/census/census-income.csv".format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you:
* remove duplicates and rows with conflicting data,
* transform the target `income` column into a column containing two labels,
* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them,
* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` columns so they're suitable for training,
* encode the `education`, `major industry code`, and `class of worker` columns so they're suitable for training,
* split the data into training and test datasets, and save the training features and labels and test features and labels.
Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
columns = [
"age",
"education",
"major industry code",
"class of worker",
"num persons worked for employer",
"capital gains",
"capital losses",
"dividends from stocks",
"income",
]
class_labels = [" - 50000.", " 50000+."]
def print_shape(df):
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data shape: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--train-test-split-ratio", type=float, default=0.3)
args, _ = parser.parse_known_args()
print("Received arguments {}".format(args))
input_data_path = os.path.join("/opt/ml/processing/input", "census-income.csv")
print("Reading input data from {}".format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data after cleaning: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
split_ratio = args.train_test_split_ratio
print("Splitting data into train and test sets with ratio {}".format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(
df.drop("income", axis=1), df["income"], test_size=split_ratio, random_state=0
)
preprocess = make_column_transformer(
(
["age", "num persons worked for employer"],
KBinsDiscretizer(encode="onehot-dense", n_bins=10),
),
(["capital gains", "capital losses", "dividends from stocks"], StandardScaler()),
(["education", "major industry code", "class of worker"], OneHotEncoder(sparse=False)),
)
print("Running preprocessing and feature engineering transformations")
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print("Train data shape after preprocessing: {}".format(train_features.shape))
print("Test data shape after preprocessing: {}".format(test_features.shape))
train_features_output_path = os.path.join("/opt/ml/processing/train", "train_features.csv")
train_labels_output_path = os.path.join("/opt/ml/processing/train", "train_labels.csv")
test_features_output_path = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_output_path = os.path.join("/opt/ml/processing/test", "test_labels.csv")
print("Saving training features to {}".format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print("Saving test features to {}".format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print("Saving training labels to {}".format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print("Saving test labels to {}".format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`. Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give each `ProcessingOutput` a value for `output_name`, to make it easier to retrieve these output artifacts after the job is run. The `arguments` parameter of the `run()` method passes command-line arguments to our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(
code="preprocessing.py",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description["ProcessingOutputConfig"]
for output in output_config["Outputs"]:
if output["OutputName"] == "train_data":
preprocessed_training_data = output["S3Output"]["S3Uri"]
if output["OutputName"] == "test_data":
preprocessed_test_data = output["S3Output"]["S3Uri"]
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + "/train_features.csv", nrows=10)
print("Training features shape: {}".format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed data
We create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point="train.py", framework_version="0.20.0", instance_type="ml.m5.xlarge", role=role
)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads to S3 as a `model.tar.gz` file at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__ == "__main__":
training_data_directory = "/opt/ml/input/data/train"
train_features_data = os.path.join(training_data_directory, "train_features.csv")
train_labels_data = os.path.join(training_data_directory, "train_labels.csv")
print("Reading input data")
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight="balanced", solver="lbfgs")
print("Training LR model")
model.fit(X_train, y_train)
model_output_directory = os.path.join("/opt/ml/model", "model.joblib")
print("Saving model to {}".format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({"train": preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = "{}{}/{}".format(
training_job_description["OutputDataConfig"]["S3OutputPath"],
training_job_description["TrainingJobName"],
"output/model.tar.gz",
)
###Output
_____no_output_____
###Markdown
Model Evaluation
`evaluation.py` is the model evaluation script. Since this script also uses scikit-learn as a dependency, run it using the `SKLearnProcessor` you created previously. The script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__ == "__main__":
model_path = os.path.join("/opt/ml/processing/model", "model.tar.gz")
print("Extracting model from path: {}".format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path=".")
print("Loading model")
model = joblib.load("model.joblib")
print("Loading test input data")
test_features_data = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_data = os.path.join("/opt/ml/processing/test", "test_labels.csv")
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print("Creating classification evaluation report")
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict["accuracy"] = accuracy_score(y_test, predictions)
report_dict["roc_auc"] = roc_auc_score(y_test, predictions)
print("Classification report:\n{}".format(report_dict))
evaluation_output_path = os.path.join("/opt/ml/processing/evaluation", "evaluation.json")
print("Saving classification report to {}".format(evaluation_output_path))
with open(evaluation_output_path, "w") as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(
code="evaluation.py",
inputs=[
ProcessingInput(source=model_data_s3_uri, destination="/opt/ml/processing/model"),
ProcessingInput(source=preprocessed_test_data, destination="/opt/ml/processing/test"),
],
outputs=[ProcessingOutput(output_name="evaluation", source="/opt/ml/processing/evaluation")],
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description["ProcessingOutputConfig"]
for output in evaluation_output_config["Outputs"]:
if output["OutputName"] == "evaluation":
evaluation_s3_uri = output["S3Output"]["S3Uri"] + "/evaluation.json"
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Running processing jobs with FrameworkProcessor to include custom dependencies
Above, you used a processing container that has scikit-learn installed, but there was no way to add extra dependencies to the processing container. A new sub-class of the existing `ScriptProcessor`, called `FrameworkProcessor`, has been added to the SageMaker SDK; it lets you specify a `source_dir` in which a `requirements.txt` file can be included. The processor first installs the listed dependencies inside the target container before triggering the processing job. Below is a simple example of how to use a `FrameworkProcessor` to run your own code within a container, running a processing job with the same `preprocessing.py` script you used above. You can provide your own dependencies for your processing script with `source_dir` and `requirements.txt`.
###Code
from sagemaker.processing import FrameworkProcessor
est_cls = sagemaker.sklearn.estimator.SKLearn
framework_version_str="0.20.0"
script_processor = FrameworkProcessor(
role=role,
instance_count=1,
instance_type="ml.m5.xlarge",
estimator_cls=est_cls,
framework_version=framework_version_str
)
###Output
_____no_output_____
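###Markdown
The `run()` call below passes `source_dir="code"`, so it expects a local `code/` directory containing the processing script and a `requirements.txt` with the extra dependencies to install. The exact package list is not shown in this notebook, so the cells below are only a sketch with an assumed placeholder entry; replace it with whatever your script actually needs.
###Code
# Sketch: create the source directory that source_dir="code" refers to.
!mkdir -p code
###Output
_____no_output_____
###Markdown
A minimal, assumed `requirements.txt` (placeholder contents):
###Code
%%writefile code/requirements.txt
# Hypothetical example dependency; replace with the packages your processing script imports.
pyarrow
###Output
_____no_output_____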
###Markdown
Run the same `preprocessing.py` script you ran above, but now the code runs inside a container that has the additional dependencies installed. You can update the processing script to use these dependencies and perform custom processing. The same approach can be applied to your own pre-processing, feature-engineering, and model evaluation scripts. First, copy the previously created `preprocessing.py` script into the `code` folder used by the next `run()` call.
###Code
!cp preprocessing.py ./code
from sagemaker.processing import ProcessingInput, ProcessingOutput
script_processor.run(
code="preprocessing.py",
source_dir="code",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Summary
As you can see, all the dependencies from the `requirements.txt` file are installed on the target container before the processing script starts. `FrameworkProcessor` makes it easy to use the existing built-in framework processors (scikit-learn, MXNet, PyTorch, etc.) while specifying custom dependencies for the processing script.
(Optional) Next Steps
As an optional step, this notebook shows an example below of building a custom container from scratch using `docker`. By default, the optional steps do not run automatically; set `run_optional_steps` to `True` if you want to execute them.
###Code
run_optional_steps = False
# This will stop the below cells from executing if "Run All Cells" was used on the notebook.
if not run_optional_steps:
raise SystemExit("Stop here. Do not automatically execute optional steps.")
###Output
_____no_output_____
###Markdown
(Optional) Running processing jobs with your own dependencies
Above, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container. Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client("sts").get_caller_identity().get("Account")
ecr_repository = "sagemaker-processing-container"
tag = ":latest"
uri_suffix = "amazonaws.com"
if region in ["cn-north-1", "cn-northwest-1"]:
uri_suffix = "amazonaws.com.cn"
processing_repository_uri = "{}.dkr.ecr.{}.{}/{}".format(
account_id, region, uri_suffix, ecr_repository + tag
)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(
command=["python3"],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type="ml.m5.xlarge",
)
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(
code="preprocessing.py",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Amazon SageMaker Processing jobs
With Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform. A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job.
This notebook shows how you can:
1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets.
2. Run a training job on the pre-processed training data to train a model.
3. Run a processing job on the pre-processed test data to evaluate the trained model's performance.
4. Use your own custom container to run processing jobs with your own Python libraries and dependencies.
The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, and turn the data into features that the training algorithm can use to train a binary classification model, then split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000`, or less than `$50,000`. The dataset is heavily class imbalanced, with most records labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset, and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
Data pre-processing and feature engineering
To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(
framework_version="0.20.0", role=role, instance_type="ml.m5.xlarge", instance_count=1
)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 10 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
input_data = "s3://sagemaker-sample-data-{}/processing/census/census-income.csv".format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you:
* remove duplicates and rows with conflicting data,
* transform the target `income` column into a column containing two labels,
* transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them,
* scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` columns so they're suitable for training,
* encode the `education`, `major industry code`, and `class of worker` columns so they're suitable for training,
* split the data into training and test datasets, and save the training features and labels and test features and labels.
Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
columns = [
"age",
"education",
"major industry code",
"class of worker",
"num persons worked for employer",
"capital gains",
"capital losses",
"dividends from stocks",
"income",
]
class_labels = [" - 50000.", " 50000+."]
def print_shape(df):
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data shape: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--train-test-split-ratio", type=float, default=0.3)
args, _ = parser.parse_known_args()
print("Received arguments {}".format(args))
input_data_path = os.path.join("/opt/ml/processing/input", "census-income.csv")
print("Reading input data from {}".format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data after cleaning: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
split_ratio = args.train_test_split_ratio
print("Splitting data into train and test sets with ratio {}".format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(
df.drop("income", axis=1), df["income"], test_size=split_ratio, random_state=0
)
preprocess = make_column_transformer(
(
["age", "num persons worked for employer"],
KBinsDiscretizer(encode="onehot-dense", n_bins=10),
),
(["capital gains", "capital losses", "dividends from stocks"], StandardScaler()),
(["education", "major industry code", "class of worker"], OneHotEncoder(sparse=False)),
)
print("Running preprocessing and feature engineering transformations")
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print("Train data shape after preprocessing: {}".format(train_features.shape))
print("Test data shape after preprocessing: {}".format(test_features.shape))
train_features_output_path = os.path.join("/opt/ml/processing/train", "train_features.csv")
train_labels_output_path = os.path.join("/opt/ml/processing/train", "train_labels.csv")
test_features_output_path = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_output_path = os.path.join("/opt/ml/processing/test", "test_labels.csv")
print("Saving training features to {}".format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print("Saving test features to {}".format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print("Saving training labels to {}".format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print("Saving test labels to {}".format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`. Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give each `ProcessingOutput` a value for `output_name`, to make it easier to retrieve these output artifacts after the job is run. The `arguments` parameter of the `run()` method passes command-line arguments to our `preprocessing.py` script.
###Code
from sagemaker.processing import ProcessingInput, ProcessingOutput
sklearn_processor.run(
code="preprocessing.py",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description["ProcessingOutputConfig"]
for output in output_config["Outputs"]:
if output["OutputName"] == "train_data":
preprocessed_training_data = output["S3Output"]["S3Uri"]
if output["OutputName"] == "test_data":
preprocessed_test_data = output["S3Output"]["S3Uri"]
###Output
_____no_output_____
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
training_features = pd.read_csv(preprocessed_training_data + "/train_features.csv", nrows=10)
print("Training features shape: {}".format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed data
We create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point="train.py", framework_version="0.20.0", instance_type="ml.m5.xlarge", role=role
)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads to S3 as a `model.tar.gz` file at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__ == "__main__":
training_data_directory = "/opt/ml/input/data/train"
train_features_data = os.path.join(training_data_directory, "train_features.csv")
train_labels_data = os.path.join(training_data_directory, "train_labels.csv")
print("Reading input data")
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight="balanced", solver="lbfgs")
print("Training LR model")
model.fit(X_train, y_train)
model_output_directory = os.path.join("/opt/ml/model", "model.joblib")
print("Saving model to {}".format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({"train": preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = "{}{}/{}".format(
training_job_description["OutputDataConfig"]["S3OutputPath"],
training_job_description["TrainingJobName"],
"output/model.tar.gz",
)
###Output
_____no_output_____
###Markdown
Model Evaluation
`evaluation.py` is the model evaluation script. Since this script also uses scikit-learn as a dependency, run it using the `SKLearnProcessor` you created previously. The script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__ == "__main__":
model_path = os.path.join("/opt/ml/processing/model", "model.tar.gz")
print("Extracting model from path: {}".format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path=".")
print("Loading model")
model = joblib.load("model.joblib")
print("Loading test input data")
test_features_data = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_data = os.path.join("/opt/ml/processing/test", "test_labels.csv")
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print("Creating classification evaluation report")
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict["accuracy"] = accuracy_score(y_test, predictions)
report_dict["roc_auc"] = roc_auc_score(y_test, predictions)
print("Classification report:\n{}".format(report_dict))
evaluation_output_path = os.path.join("/opt/ml/processing/evaluation", "evaluation.json")
print("Saving classification report to {}".format(evaluation_output_path))
with open(evaluation_output_path, "w") as f:
f.write(json.dumps(report_dict))
import json
from sagemaker.s3 import S3Downloader
sklearn_processor.run(
code="evaluation.py",
inputs=[
ProcessingInput(source=model_data_s3_uri, destination="/opt/ml/processing/model"),
ProcessingInput(source=preprocessed_test_data, destination="/opt/ml/processing/test"),
],
outputs=[ProcessingOutput(output_name="evaluation", source="/opt/ml/processing/evaluation")],
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description["ProcessingOutputConfig"]
for output in evaluation_output_config["Outputs"]:
if output["OutputName"] == "evaluation":
evaluation_s3_uri = output["S3Output"]["S3Uri"] + "/evaluation.json"
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Running processing jobs with your own dependencies. Above, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container. Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `docker` command, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR.
###Code
import boto3
account_id = boto3.client("sts").get_caller_identity().get("Account")
ecr_repository = "sagemaker-processing-container"
tag = ":latest"
uri_suffix = "amazonaws.com"
if region in ["cn-north-1", "cn-northwest-1"]:
uri_suffix = "amazonaws.com.cn"
processing_repository_uri = "{}.dkr.ecr.{}.{}/{}".format(
account_id, region, uri_suffix, ecr_repository + tag
)
# Create ECR repository and push docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
from sagemaker.processing import ScriptProcessor
script_processor = ScriptProcessor(
command=["python3"],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type="ml.m5.xlarge",
)
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(
code="preprocessing.py",
inputs=[ProcessingInput(source=input_data, destination="/opt/ml/processing/input")],
outputs=[
ProcessingOutput(output_name="train_data", source="/opt/ml/processing/train"),
ProcessingOutput(output_name="test_data", source="/opt/ml/processing/test"),
],
arguments=["--train-test-split-ratio", "0.2"],
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____
###Markdown
Amazon SageMaker Processing jobs. With Amazon SageMaker Processing jobs, you can leverage a simplified, managed experience to run data pre- or post-processing and model evaluation workloads on the Amazon SageMaker platform. A processing job downloads input from Amazon Simple Storage Service (Amazon S3), then uploads outputs to Amazon S3 during or after the processing job. This notebook shows how you can: 1. Run a processing job to run a scikit-learn script that cleans, pre-processes, performs feature engineering, and splits the input data into train and test sets. 2. Run a training job on the pre-processed training data to train a model. 3. Run a processing job on the pre-processed test data to evaluate the trained model's performance. 4. Use your own custom container to run processing jobs with your own Python libraries and dependencies. The dataset used here is the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). You select features from this dataset, clean the data, turn the data into features that the training algorithm can use to train a binary classification model, and split the data into train and test sets. The task is to predict whether rows representing census responders have an income greater than `$50,000` or less than `$50,000`. The dataset is heavily class imbalanced, with most records labeled as earning less than `$50,000`. After training a logistic regression model, you evaluate the model against a hold-out test dataset and save the classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model. Data pre-processing and feature engineering: To run the scikit-learn preprocessing script as a processing job, create a `SKLearnProcessor`, which lets you run scripts inside of processing jobs using the scikit-learn image provided.
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.estimator import SKLearn
import json
from sagemaker.s3 import S3Downloader
from sagemaker.processing import ScriptProcessor
region = boto3.session.Session().region_name
role = get_execution_role()
sklearn_processor = SKLearnProcessor(framework_version='0.20.0',
role=role,
instance_type='ml.m5.xlarge',
instance_count=1)
###Output
_____no_output_____
###Markdown
Before introducing the script you use for data cleaning, pre-processing, and feature engineering, inspect the first 10 rows of the dataset. The target is predicting the `income` category. The features from the dataset you select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
###Code
import pandas as pd
sess = boto3.Session()
sm = sess.client('sagemaker')
input_data = 's3://sagemaker-sample-data-{}/processing/census/census-income.csv'.format(region)
S3Downloader.download(input_data, 'data')
df = pd.read_csv('data/census-income.csv', nrows=10)
df.head(n=10)
###Output
_____no_output_____
###Markdown
This notebook cell writes a file `preprocessing.py`, which contains the pre-processing script. You can update the script, and rerun this cell to overwrite `preprocessing.py`. You run this as a processing job in the next cell. In this script, you: * remove duplicates and rows with conflicting data; * transform the target `income` column into a column containing two labels; * transform the `age` and `num persons worked for employer` numerical columns into categorical features by binning them; * scale the continuous `capital gains`, `capital losses`, and `dividends from stocks` so they're suitable for training; * encode the `education`, `major industry code`, and `class of worker` columns so they're suitable for training; * split the data into training and test datasets, and save the training features and labels and test features and labels. Our training script will use the pre-processed training features and labels to train a model, and our model evaluation script will use the trained model and pre-processed test features and labels to evaluate the model.
###Code
%%writefile preprocessing.py
import argparse
import os
import warnings
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action='ignore', category=DataConversionWarning)
columns = ['age', 'education', 'major industry code', 'class of worker', 'num persons worked for employer',
'capital gains', 'capital losses', 'dividends from stocks', 'income']
class_labels = [' - 50000.', ' 50000+.']
def print_shape(df):
negative_examples, positive_examples = np.bincount(df['income'])
print('Data shape: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
if __name__=='__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-test-split-ratio', type=float, default=0.3)
args, _ = parser.parse_known_args()
print('Received arguments {}'.format(args))
input_data_path = os.path.join('/opt/ml/processing/input', 'census-income.csv')
print('Reading input data from {}'.format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df['income'])
print('Data after cleaning: {}, {} positive examples, {} negative examples'.format(df.shape, positive_examples, negative_examples))
split_ratio = args.train_test_split_ratio
print('Splitting data into train and test sets with ratio {}'.format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(df.drop('income', axis=1), df['income'], test_size=split_ratio, random_state=0)
preprocess = make_column_transformer(
(['age', 'num persons worked for employer'], KBinsDiscretizer(encode='onehot-dense', n_bins=10)),
(['capital gains', 'capital losses', 'dividends from stocks'], StandardScaler()),
(['education', 'major industry code', 'class of worker'], OneHotEncoder(sparse=False))
)
print('Running preprocessing and feature engineering transformations')
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print('Train data shape after preprocessing: {}'.format(train_features.shape))
print('Test data shape after preprocessing: {}'.format(test_features.shape))
train_features_output_path = os.path.join('/opt/ml/processing/train', 'train_features.csv')
train_labels_output_path = os.path.join('/opt/ml/processing/train', 'train_labels.csv')
test_features_output_path = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_output_path = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
print('Saving training features to {}'.format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print('Saving test features to {}'.format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print('Saving training labels to {}'.format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print('Saving test labels to {}'.format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
###Output
_____no_output_____
###Markdown
Run this script as a processing job. Use the `SKLearnProcessor.run()` method. You give the `run()` method one `ProcessingInput` where the `source` is the census dataset in Amazon S3, and the `destination` is where the script reads this data from, in this case `/opt/ml/processing/input`. These local paths inside the processing container must begin with `/opt/ml/processing/`. Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give the ProcessingOutputs values for `output_name`, to make it easier to retrieve these output artifacts after the job is run. The `arguments` parameter in the `run()` method holds the command-line arguments passed to our `preprocessing.py` script.
###Code
sklearn_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
output_config = preprocessing_job_description['ProcessingOutputConfig']
for output in output_config['Outputs']:
if output['OutputName'] == 'train_data':
preprocessed_training_data = output['S3Output']['S3Uri']
if output['OutputName'] == 'test_data':
preprocessed_test_data = output['S3Output']['S3Uri']
###Output
_____no_output_____
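###Markdown
If you would rather control where the processed datasets land than rely on the default bucket, `ProcessingOutput` also accepts an explicit `destination`. A minimal sketch follows; the bucket and prefix are placeholders, not resources created by this notebook.
###Code
# Hypothetical example: pin the train output to a bucket/prefix you own
custom_train_output = ProcessingOutput(output_name='train_data',
                                       source='/opt/ml/processing/train',
                                       destination='s3://your-bucket/census-preprocessing/train')
###Output
_____no_output_____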
###Markdown
Now inspect the output of the pre-processing job, which consists of the processed features.
###Code
S3Downloader.download(preprocessed_training_data + '/train_features.csv', 'data')
training_features = pd.read_csv('data/train_features.csv', nrows=10)
print('Training features shape: {}'.format(training_features.shape))
training_features.head(n=10)
###Output
_____no_output_____
###Markdown
Training using the pre-processed data. We create a `SKLearn` instance, which we will use to run a training job using the training script `train.py`.
###Code
sklearn = SKLearn(
entry_point='train.py',
framework_version='0.20.0',
train_instance_type="ml.m5.xlarge",
role=role)
###Output
_____no_output_____
###Markdown
The training script `train.py` trains a logistic regression model on the training data, and saves the model to the `/opt/ml/model` directory, which Amazon SageMaker tars and uploads as a `model.tar.gz` file to S3 at the end of the training job.
###Code
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__=="__main__":
training_data_directory = '/opt/ml/input/data/train'
train_features_data = os.path.join(training_data_directory, 'train_features.csv')
train_labels_data = os.path.join(training_data_directory, 'train_labels.csv')
print('Reading input data')
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight='balanced', solver='lbfgs')
print('Training LR model')
model.fit(X_train, y_train)
model_output_directory = os.path.join('/opt/ml/model', "model.joblib")
print('Saving model to {}'.format(model_output_directory))
joblib.dump(model, model_output_directory)
###Output
_____no_output_____
###Markdown
Run the training job using `train.py` on the preprocessed training data.
###Code
sklearn.fit({'train': preprocessed_training_data})
training_job_description = sklearn.jobs[-1].describe()
model_data_s3_uri = '{}{}/{}'.format(
training_job_description['OutputDataConfig']['S3OutputPath'],
training_job_description['TrainingJobName'],
'output/model.tar.gz')
###Output
_____no_output_____
###Markdown
Model Evaluation. `evaluation.py` is the model evaluation script. Since the script also runs using scikit-learn as a dependency, run this using the `SKLearnProcessor` you created previously. This script takes the trained model and the test dataset as input, and produces a JSON file containing classification evaluation metrics, including precision, recall, and F1 score for each label, and accuracy and ROC AUC for the model.
###Code
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__=="__main__":
model_path = os.path.join('/opt/ml/processing/model', 'model.tar.gz')
print('Extracting model from path: {}'.format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path='.')
print('Loading model')
model = joblib.load('model.joblib')
print('Loading test input data')
test_features_data = os.path.join('/opt/ml/processing/test', 'test_features.csv')
test_labels_data = os.path.join('/opt/ml/processing/test', 'test_labels.csv')
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print('Creating classification evaluation report')
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict['accuracy'] = accuracy_score(y_test, predictions)
report_dict['roc_auc'] = roc_auc_score(y_test, predictions)
print('Classification report:\n{}'.format(report_dict))
evaluation_output_path = os.path.join('/opt/ml/processing/evaluation', 'evaluation.json')
print('Saving classification report to {}'.format(evaluation_output_path))
with open(evaluation_output_path, 'w') as f:
f.write(json.dumps(report_dict))
sklearn_processor.run(code='evaluation.py',
inputs=[ProcessingInput(
source=model_data_s3_uri,
destination='/opt/ml/processing/model'),
ProcessingInput(
source=preprocessed_test_data,
destination='/opt/ml/processing/test')],
outputs=[ProcessingOutput(output_name='evaluation',
source='/opt/ml/processing/evaluation')]
)
evaluation_job_description = sklearn_processor.jobs[-1].describe()
###Output
_____no_output_____
###Markdown
Now retrieve the file `evaluation.json` from Amazon S3, which contains the evaluation report.
###Code
evaluation_output_config = evaluation_job_description['ProcessingOutputConfig']
for output in evaluation_output_config['Outputs']:
if output['OutputName'] == 'evaluation':
evaluation_s3_uri = output['S3Output']['S3Uri'] + '/evaluation.json'
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
###Output
_____no_output_____
###Markdown
Running processing jobs with your own dependencies. Above, you used a processing container that has scikit-learn installed, but you can run your own processing container in your processing job as well, and still provide a script to run within your processing container. Below, you walk through how to create a processing container, and how to use a `ScriptProcessor` to run your own code within a container. Create a scikit-learn container and run a processing job using the same `preprocessing.py` script you used above. You can provide your own dependencies inside this container to run your processing script with.
###Code
!mkdir docker
###Output
_____no_output_____
###Markdown
This is the Dockerfile to create the processing container. Install `pandas` and `scikit-learn` into it. You can install your own dependencies.
###Code
%%writefile docker/Dockerfile
FROM python:3.7-slim-buster
RUN pip3 install pandas==0.25.3 scikit-learn==0.21.3
ENV PYTHONUNBUFFERED=TRUE
ENTRYPOINT ["python3"]
###Output
_____no_output_____
###Markdown
This block of code builds the container using the `sagemaker-studio-image-build` CLI tool, which builds the Docker image, creates an Amazon Elastic Container Registry (Amazon ECR) repository, and pushes the image to Amazon ECR; all with a single command. To use the CLI, you need to ensure the Amazon SageMaker execution role used by your notebook environment (or another AWS Identity and Access Management (IAM) role, if you prefer) has the required permissions to interact with the resources used by the CLI, including access to CodeBuild and Amazon ECR. For further details on how to set this up, please read [here](https://aws.amazon.com/blogs/machine-learning/using-the-amazon-sagemaker-studio-image-build-cli-to-build-container-images-from-your-studio-notebooks/)
###Code
!pip install sagemaker-studio-image-build
#setting up the repo name and tag
ecr_repository = 'sagemaker-processing-container'
tag = ':latest'
#building and pushing the docker image
!sm-docker build . --file docker/Dockerfile --repository {ecr_repository+tag}
#preparing the ECR repo URL
account_id = boto3.client('sts').get_caller_identity().get('Account')
uri_suffix = 'amazonaws.com'
if region in ['cn-north-1', 'cn-northwest-1']:
uri_suffix = 'amazonaws.com.cn'
processing_repository_uri = '{}.dkr.ecr.{}.{}/{}'.format(account_id, region, uri_suffix, ecr_repository+tag)
###Output
_____no_output_____
###Markdown
The `ScriptProcessor` class lets you run a command inside this container, which you can use to run your own script.
###Code
script_processor = ScriptProcessor(command=['python3'],
image_uri=processing_repository_uri,
role=role,
instance_count=1,
instance_type='ml.m5.xlarge')
###Output
_____no_output_____
###Markdown
Run the same `preprocessing.py` script you ran above, but now, this code is running inside of the Docker container you built in this notebook, not the scikit-learn image maintained by Amazon SageMaker. You can add the dependencies to the Docker image, and run your own pre-processing, feature-engineering, and model evaluation scripts inside of this container.
###Code
script_processor.run(code='preprocessing.py',
inputs=[ProcessingInput(
source=input_data,
destination='/opt/ml/processing/input')],
outputs=[ProcessingOutput(output_name='train_data',
source='/opt/ml/processing/train'),
ProcessingOutput(output_name='test_data',
source='/opt/ml/processing/test')],
arguments=['--train-test-split-ratio', '0.2']
)
script_processor_job_description = script_processor.jobs[-1].describe()
print(script_processor_job_description)
###Output
_____no_output_____ |
notebooks/graph_manifold.ipynb | ###Markdown
ChebLieNet: building graphs from sampled Lie groups. In this tutorial, we introduce the notion of group manifold graph, a discretization of a Riemannian manifold. At the moment, four manifolds are available: the translation group $\mathbb{R}^2$, the roto-translation group $SE(2)$, the 3d rotation group $SO(3)$ and the 2-sphere $S(2)$. We define such a graph as follows: - the vertices correspond to **uniformly sampled** elements on the manifold, - the edges connect each vertex to its **K nearest neighbors**, w.r.t. an **anisotropic Riemannian distance**, - the edges' weights are computed by a **Gaussian weight kernel** applied on the Riemannian distance between vertices.
###Code
import torch
import matplotlib.pyplot as plt
import matplotlib.cm as cm
###Output
_____no_output_____
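###Markdown
As a rough illustration of the Gaussian weight kernel mentioned above (a minimal sketch; the bandwidth `t` and the exact normalization are assumptions, not necessarily what ChebLieNet uses internally), edge weights decay exponentially with the squared Riemannian distance between vertices:
###Code
# Hypothetical squared Riemannian distances from one vertex to a few neighbors
sq_dist = torch.tensor([0.0, 0.5, 1.0, 2.0])
t = 0.5  # assumed kernel bandwidth
weights = torch.exp(-sq_dist / (4 * t))  # heat-kernel style Gaussian weighting
print(weights)
###Output
_____no_output_____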
###Markdown
Create a graph manifold
###Code
from cheblienet.graphs.graphs import SE2GEGraph, SO3GEGraph, S2GEGraph, R2GEGraph, RandomSubGraph
r2_graph = R2GEGraph(
size=[28, 28, 1],
K=8,
sigmas=(1., 1., 1.),
path_to_graph="saved_graphs",
)
eps = 0.1
se2_graph = SE2GEGraph(
size=[28, 28, 6],
K=16,
sigmas=(1., 1/eps**2, 2.048 / (28 ** 2)),
path_to_graph="saved_graphs"
)
s2_graph = S2GEGraph(
size=[642, 1],
K=8,
sigmas=(1., 1., 1.),
path_to_graph="saved_graphs"
)
so3_graph = SO3GEGraph(
size=[642, 6],
K=32,
sigmas=(1., .1, 10/642),
path_to_graph="saved_graphs"
)
###Output
_____no_output_____
###Markdown
Get information
###Code
s2_graph.is_connected
s2_graph.is_undirected
s2_graph.manifold
s2_graph.num_vertices
s2_graph.num_edges # number of directed edges
s2_graph.vertex_index[:10]
s2_graph.vertex_attributes
s2_graph.vertex_beta[:10], s2_graph.vertex_gamma[:10]
s2_graph.edge_index[:10] # dim 0 is source, dim 1 is target
s2_graph.edge_weight[:10] # dim 0 is source, dim 1 is target
s2_graph.edge_sqdist[:10] # dim 0 is source, dim 1 is target
s2_graph.neighborhood(9) # neighbors index, edges' weights and squared riemannian distance
###Output
_____no_output_____
###Markdown
Static visualization
###Code
def plot_graph(graph, size):
M, L = size
fig = plt.figure(figsize=(5*L, 5))
X, Y, Z = graph.cartesian_pos()
for l in range(L):
ax = fig.add_subplot(1, L, l + 1, projection="3d")
ax.scatter(X[l*M:(l+1)*M], Y[l*M:(l+1)*M], Z[l*M:(l+1)*M], c="firebrick")
ax.axis("off")
fig.tight_layout()
def plot_graph_neighborhood(graph, index, size):
M, L = size
fig = plt.figure(figsize=(5, 5))
X, Y, Z = graph.cartesian_pos()
neighbors_indices, neighbors_weights, _ = graph.neighborhood(index)
weights = torch.zeros(graph.num_vertices)
weights[neighbors_indices] = neighbors_weights
for l in range(L):
ax = fig.add_subplot(L, 1, l + 1, projection="3d")
ax.scatter(X[l*M:(l+1)*M], Y[l*M:(l+1)*M], Z[l*M:(l+1)*M], c=weights[l*M:(l+1)*M], cmap=cm.PuRd)
ax.axis("off")
fig.tight_layout()
plot_graph(s2_graph, [642, 1])
plot_graph_neighborhood(s2_graph, 406, [642, 1])
###Output
_____no_output_____
###Markdown
Dynamic visualization
###Code
from cheblienet.graphs.viz import visualize_graph, visualize_graph_neighborhood, visualize_graph_signal
eps = 0.1
xi = 6 / (28 ** 2)
se2_graph = SE2GEGraph(
size=[28, 28, 6],
K=32,
sigmas=(1., 1/eps, xi),
path_to_graph="saved_graphs"
)
visualize_graph_neighborhood(se2_graph, 156)
so3_graph = SO3GEGraph(
size=[642, 6],
K=16,
sigmas=(1., .1, 10/642),
path_to_graph="saved_graphs"
)
visualize_graph(so3_graph)
signal = torch.rand(s2_graph.num_vertices)
visualize_graph_signal(s2_graph, signal)
###Output
_____no_output_____
###Markdown
Random sub graph
###Code
random_subgraph = RandomSubGraph(s2_graph)
random_subgraph.num_vertices, random_subgraph.num_edges
random_subgraph.reinit()
random_subgraph.edges_sampling(0.9)
random_subgraph.num_vertices, random_subgraph.num_edges
random_subgraph.reinit()
random_subgraph.vertices_sampling(0.5)
random_subgraph.num_vertices, random_subgraph.num_edges
###Output
_____no_output_____ |
Projects in Python with Scikit-Learn- XGBoost- Pandas- Statsmodels- etc./Loan prediction (Data Exploration and Visualization) .ipynb | ###Markdown
Data Exploration and Visualization: - Univariable study of target and features (Continuous & Categorical features, separately) - Multivariate study of target and features - Testing the statistical assumptions: Normality, Homoscedasticity, etc. - Basic cleaning: Outliers, Missing data, Duplicate values - Chi-square test to examine dependency of target on categorical features (helpful for Feature Selection, if required)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
from sklearn import preprocessing
%matplotlib inline
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Functions to detect & plot Outliers with different approaches:
def zscore_based_outliers(ys, threshold = 3):
mean_y = np.mean(ys)
stdev_y = np.std(ys)
z_scores = [(y - mean_y) / stdev_y for y in ys]
return np.abs(z_scores) > threshold
def mad_based_outlier(ys, thresh=3.5):
median = np.median(ys, axis=0)
mad=np.median(np.array([np.abs(y - median) for y in ys]))
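    # 0.6745 is roughly the 0.75 quantile of the standard normal; it rescales the MAD
    # so that the modified z-score is comparable to an ordinary z-score.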
modified_z_score=[0.6745 *(y - median) / mad for y in ys]
return np.abs(modified_z_score) > thresh
def iqr_based_outliers(ys):
quartile_1, quartile_3 = np.percentile(ys, [25, 75])
iqr = np.abs(quartile_3 - quartile_1)
lower_bound = quartile_1 - (iqr * 1.5)
upper_bound = quartile_3 + (iqr * 1.5)
return (ys > upper_bound) | (ys < lower_bound)
def plot_outliers(x):
fig, axes = plt.subplots(nrows=3)
fig.set_size_inches(6, 6)
for ax, func in zip(axes, [zscore_based_outliers, mad_based_outlier, iqr_based_outliers]):
sns.distplot(x, ax=ax, rug=True, hist=True)
outliers = x[func(x)]
ax.plot(outliers, np.zeros_like(outliers), 'ro', clip_on=False)
kwargs = dict(y=0.95, x=0.05, ha='left', va='top')
axes[0].set_title('Zscore-based Outliers', **kwargs)
axes[1].set_title('MAD-based Outliers', **kwargs)
axes[2].set_title('IQR-based Outliers', **kwargs)
fig.suptitle('Comparing Outlier Tests with n={}'.format(len(x)), size=14)
df=pd.read_csv('C:/Users/rhash/Documents/Datasets/Loan prediction/train_loanPrediction.csv')
df.drop('Loan_ID', axis=1, inplace=True)
df.info()
L_cat=['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Credit_History', 'Property_Area', 'Loan_Status' ]
L_con=['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term']
# To detect and see the Missing Values:
sns.heatmap(df.isnull())
df.isnull().sum()
df['Credit_History'].fillna(value=1, inplace=True)
df['Dependents'].fillna(value=str(0), inplace=True)
df['Self_Employed'].fillna(value='No', inplace=True)
df['Gender'].fillna(value='Male', inplace=True)
df['LoanAmount'].fillna(value=df['LoanAmount'].mean(), inplace=True)
df.dropna(axis=0, inplace=True)
df.shape
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
for i in ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status' ]:
encode_text_index(df, i)
df.head(3)
df.info()
# Imbalanced Data Set:
df["Loan_Status"].value_counts()
# Univariate analysis of Continuous Features: Statistical description (mean, std, skewness, Kurtosis) & Distribution plots
L=[]
for i in L_con:
print('_'*70 )
print('variable name: ', i, '\n')
print('Statistical description: \n', df[i].describe(), '\n', sep='')
if df[i].min()==0:
L.append(i)
print("Skewness = ", df[i].skew())
print("Kurtosis = ", df[i].kurt())
plot_outliers(np.array(df[i]))
plt.show()
# Multi-variable analysis of Continuous Features: Pairplot of all continuous features for different classes of target
sns.pairplot(pd.concat((df[L_con], df['Loan_Status']), axis=1 ), hue='Loan_Status')
# Multivariable study: heatmap of correlation between continuous features
fig, ax = plt.subplots(figsize=(10,10))
sns.heatmap(df[L_con].corr(), annot=True, linewidths=1.5, ax=ax )
sns.clustermap(df[L_con].corr(), annot=True, linewidths=1.5 )
# Multivariable analysis of Continuous Features:
for i in L_con:
print('_'*70 )
print('variable name: ', i)
S0=df[df['Loan_Status']==0][i]
S1=df[df['Loan_Status']==1][i]
t_test=stats.ttest_ind(S0, S1, equal_var = False)
    print('t_statistic = ', round(t_test[0], 3))
print('p_value = ', round(t_test[1], 3), '\n')
if t_test[1]<=0.05:
print('This feature is significantly effective')
else:
print('This feature is NOT significantly effective')
fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(121)
sns.barplot(x='Loan_Status', y=i, data=df)
ax2 = fig.add_subplot(122)
sns.boxplot( x="Loan_Status", y=i, data=df)
fig.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25, wspace=0.7)
plt.show()
# To test the statistical assumptions on continuous variables: we check if our data meets the assumptions required by most multivariate techniques _________
for i in L_con:
print('_'*70 )
print('variable name: ', i)
fig = plt.figure(figsize=(8, 6))
ax1 = fig.add_subplot(221)
ax1=sns.distplot(df[i], fit=stats.norm)
ax1.set_title('Before transformation:')
ax2 = fig.add_subplot(222)
res=stats.probplot(df[i], plot=ax2, rvalue=True)
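    # Box-Cox requires strictly positive inputs, so features containing zeros (collected in L) are shifted by 0.1 below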
b=0
if i in L:
b=0.1
ax3 = fig.add_subplot(223)
ax3=sns.distplot(stats.boxcox(b+df[i])[0], fit=stats.norm)
ax3.set_title('After "boxcox" transformation:')
ax4 = fig.add_subplot(224)
res=stats.probplot(stats.boxcox(b+df[i])[0], dist=stats.norm, plot=ax4, rvalue=True)
fig.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.4, wspace=0.3)
plt.show()
# Multivariate analysis of Categorical Features: Value Counts and Success rate for different classes of a Categorical feature
for i in ['Gender', 'Married', 'Education', 'Dependents', 'Credit_History', 'Self_Employed', 'Property_Area']:
print('_'*70 )
print('variable name: ', i, '\n')
print('Value counts: \n', df[i].value_counts(), '\n', sep='')
p00=df[(df[i]==0) & (df['Loan_Status']==0)]['Loan_Status'].count()/df[df[i]==0]['Loan_Status'].count()
p01=df[(df[i]==0) & (df['Loan_Status']==1)]['Loan_Status'].count()/df[df[i]==0]['Loan_Status'].count()
p10=df[(df[i]==1) & (df['Loan_Status']==0)]['Loan_Status'].count()/df[df[i]==1]['Loan_Status'].count()
p11=df[(df[i]==1) & (df['Loan_Status']==1)]['Loan_Status'].count()/df[df[i]==1]['Loan_Status'].count()
print('Success rate for different values of this feature: \n', np.array([[p00, p01], [p10, p11]]))
sns.countplot(x=i, hue="Loan_Status", data=df[L_cat])
plt.show()
F={}
for c in ['Gender', 'Married', 'Education', 'Dependents', 'Credit_History', 'Self_Employed', 'Property_Area']:
print('_'*70 )
print('_'*70 )
print('variable name: ', c, '\n')
c0=df[df['Loan_Status']==0][c].value_counts().sort_index().values
c1=df[df['Loan_Status']==1][c].value_counts().sort_index().values
obs = np.array([c0, c1])
g, p, dof, expctd = stats.chi2_contingency(obs)
F[c] = round(g,2)
print('Chi-square statistic= ', g)
print('p_value= ', p)
fig = plt.figure(figsize=(9, 4))
ax1 = fig.add_subplot(121)
sns.barplot(x='Loan_Status', y=c, data=df)
fig.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25, wspace=0.7)
plt.show()
# Sort and plot Categorical Features based on their Chi-square statistics (i.e. their dependency with Target):
# Helpful for Feature Selection
F_sorted=sorted(F,key=lambda i: F[i], reverse= True)
feature_df = pd.DataFrame([F[i] for i in F_sorted], index=[i for i in F_sorted]).reset_index()
feature_df.columns=['features', 'Chi-square test statistic']
fig, ax = plt.subplots(figsize=(18, 8))
sns.barplot(x='features', y='Chi-square test statistic', data=feature_df, color="blue", ax= ax)
plt.xticks(rotation=-45)
plt.show()
###Output
_____no_output_____ |
modules/Microsoft_Data/Microsoft_Graph/notebook/GraphAPI_module_ingestion.ipynb | ###Markdown
Graph API Module Example Notebook
###Code
%run /OEA_py
%run /GraphAPI_py
# 0) Initialize the OEA framework and modules needed.
oea = OEA()
graphapi = GraphAPI()
###Output
_____no_output_____
###Markdown
Using Actual Data
1. Processing Graph API Actual Data
###Code
graphapi.ingest()
###Output
_____no_output_____ |
atilla_gosha/Regularization.ipynb | ###Markdown
Data Loading
###Code
data = np.load('data.npz')
x, y = data['x'], data['y']
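# Prepend a column of ones so the first weight acts as the intercept (bias) term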
x = np.concatenate([np.ones((x.shape[0], 1)), x], axis=1)
t = np.arange(-1, 1, 0.01).reshape((-1, 1))
t = np.concatenate([np.ones((t.shape[0], 1)), t], axis=1)
x_small, y_small = x[:15], y[:15]
###Output
_____no_output_____
###Markdown
Ex1 Build a scatter plot for x_small, y_small. You may want to look at plt.scatter
###Code
plt.scatter(x_small[:, 1], y_small)
###Output
_____no_output_____
###Markdown
Simple Linear Regression Ex2 Fit a simple linear regression with lr=0.05 and plot the evolution of losses. You may want to look at utils file and at plt.plot
###Code
opt = utils.GD(0.05)
lr = utils.LR(num_features=2, optimizer=opt)
losses = lr.fit(x_small, y_small)
plt.plot(losses)
###Output
_____no_output_____
###Markdown
Ex3 Calculate model predictions over the values of t and plot them together with the input data
###Code
y_pred = lr.predict(t)
plt.scatter(x_small[:, 1], y_small, color='blue')
plt.plot(t[:, 1], y_pred, color='red')
###Output
_____no_output_____
###Markdown
Polynomial Regression Ex4 Define a function which takes a matrix x (first column is constant 1), int deg and returns matrix x_poly which has first column as constant 1 and other columnsare initial columns to the powers k, k=1..deg.
###Code
def make_poly(x, deg):
x_poly = x[:, 1:]
#polys = []
for k in range(2, deg+1):
x_poly = np.concatenate([x_poly, x[:, 1:] ** k], axis=1)
x_poly = np.concatenate([np.ones((x.shape[0], 1)), x_poly], axis=1)
return x_poly
def make_funcs(x, funcs):
x_poly = x[:, 1:]
#polys = []
for f in funcs:
x_poly = np.concatenate([x_poly, f(x[:, 1:])], axis=1)
x_poly = np.concatenate([np.ones((x.shape[0], 1)), x_poly], axis=1)
return x_poly
x_test = np.array([
[1, 2, 3],
[1, 4, 5]])
y_res = np.array([[ 1., 2., 3., 4., 9., 8., 27.],
[ 1., 4., 5., 16., 25., 64., 125.]])
make_poly(x_test, 4)
assert np.allclose(make_poly(x_test, 3), y_res), 'Something is wrong'
###Output
_____no_output_____
###Markdown
Ex5 Build polynomial regressions for all degrees from 1 to 25 and store their losses. For this exercise use fit_closed_form method instead of GD
###Code
lrs = {'models': [], 'losses': []}
for k in range(1, 26):
x_poly_k = make_poly(x_small, k)
lrs['models'].append(utils.LR(num_features=x_poly_k.shape[1]))
loss = lrs['models'][-1].fit_closed_form(x_poly_k, y_small)
lrs['losses'].append(loss)
plt.scatter(list(range(1, 26)), lrs['losses'])
###Output
_____no_output_____
###Markdown
Ex6 plot the predicted values over t and scatter of true points for some models
###Code
plt.figure(figsize=(20, 10))
for k in range(1, 26):
t_poly = make_poly(t, k)
lr = lrs['models'][k-1]
y_pred = lr.predict(t_poly)
plt.subplot(5, 5, k)
plt.scatter(x_small[:, 1], y_small, color='blue')
plt.plot(t[:, 1], y_pred, color='red')
plt.ylim((-0.15, 0.15))
plt.title(f'deg={k}')
plt.show()
###Output
_____no_output_____
###Markdown
Overfit/Underfit Ex7 Modify the regression's fit method to also get some validation data and output losses over validation data
###Code
class LR_valid:
def __init__(self, num_features=1, optimizer=utils.GD(0.1)):
self.W = np.zeros((num_features, 1))
self.optimizer = optimizer
def predict(self, X):
y = X @ self.W
return y
def one_step_opt(self, X, y_true):
grads = - X.T @ (y_true - X @ self.W) / X.shape[0]
self.W = self.optimizer.apply_grads(self.W, grads)
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss, grads
def fit(self, X, y_true, X_valid=None, y_valid=None, grad_tol=0.0001, n_iters=1000):
grad_norm = np.inf
n_iter = 0
losses = []
valid_losses = []
while (grad_norm > grad_tol) and (n_iter < n_iters):
loss, grads = self.one_step_opt(X, y_true)
grad_norm = np.linalg.norm(grads)
n_iter += 1
losses.append(loss[0][0])
valid_loss = (y_valid - self.predict(X_valid)).T @ (y_valid - self.predict(X_valid)) / X_valid.shape[0]
valid_losses.append(valid_loss[0][0])
return losses, valid_losses
def fit_closed_form(self, X, y_true, X_valid=None, y_valid=None):
self.W = np.linalg.inv(X.T @ X) @ X.T @ y_true
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss
k = 3
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
lr = LR_valid(num_features=x_poly.shape[1])
losses, valid_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.figure(figsize=(20, 10))
for k in range(1, 26):
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
lr = LR_valid(num_features=x_poly.shape[1])
losses, valid_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
plt.subplot(5, 5, k)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.ylim((0, 0.005))
plt.title(f'deg={k}')
plt.show()
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
k = 12
x_poly = make_poly(x[:8000], k)
x_valid = make_poly(x[8000:], k)
y_valid = y[8000:]
lr = LR_valid(num_features=x_poly.shape[1])
losses, valid_losses = lr.fit(x_poly, y[:8000], X_valid=x_valid, y_valid=y_valid)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
t_poly = make_poly(t, k)
y_pred = lr.predict(t_poly)
plt.plot(t[:, 1], y_pred, color='red')
plt.scatter(x[8000:, 1], y[8000:], color='blue')
###Output
_____no_output_____
###Markdown
Ex8 Find train and valid losses for all polynomial models
###Code
lrs = {'models': [], 'losses': [], 'train_loss_history': [], 'valid_loss_history': []}
for k in range(1, 26):
x_poly_k = make_poly(x_small, k)
pass
###Output
_____no_output_____
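###Markdown
One possible completion of Ex8 (a sketch only: it reuses the `LR_valid` class from Ex7 and borrows the x[100:1100] validation slice used earlier; both choices are assumptions, not part of the exercise statement):
###Code
# Sketch: fit degree-1..25 models on the 15-point training set and keep both loss curves
lrs_sketch = {'models': [], 'train_loss_history': [], 'valid_loss_history': []}
for k in range(1, 26):
    x_poly_k = make_poly(x_small, k)
    x_valid_k = make_poly(x[100:1100], k)
    lr_k = LR_valid(num_features=x_poly_k.shape[1])
    train_hist, valid_hist = lr_k.fit(x_poly_k, y_small, X_valid=x_valid_k, y_valid=y[100:1100])
    lrs_sketch['models'].append(lr_k)
    lrs_sketch['train_loss_history'].append(train_hist)
    lrs_sketch['valid_loss_history'].append(valid_hist)
###Output
_____no_output_____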
###Markdown
Ex9 Do the same thing as Ex8, but instead of using 15 samples use 5000
###Code
pass
###Output
_____no_output_____
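###Markdown
For Ex9 the same loop applies with a larger training set; a minimal sketch, assuming the first 5000 points are used for training and a later slice for validation:
###Code
# Sketch: same idea as the Ex8 sketch, but trained on 5000 samples
x_train_big, y_train_big = x[:5000], y[:5000]
histories = {}
for k in [1, 3, 12, 25]:  # a few representative degrees
    lr_k = LR_valid(num_features=make_poly(x_train_big, k).shape[1])
    histories[k] = lr_k.fit(make_poly(x_train_big, k), y_train_big,
                            X_valid=make_poly(x[8000:], k), y_valid=y[8000:])
###Output
_____no_output_____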
###Markdown
Regularization Ex10 Implement L2 and L1 regularizations $$J(W) = J_{old}(W) + alpha * (w_1^2 + ... + w_{p}^2)$$$$J_{old}(W)$$
###Code
class LR_valid_L2:
def __init__(self, num_features=1, optimizer=utils.GD(0.05), alpha=0):
self.W = np.zeros((num_features, 1)) + 5
self.optimizer = optimizer
self.alpha = alpha
def predict(self, X):
y = X @ self.W
return y
def one_step_opt(self, X, y_true):
grad_reg = 2 * self.alpha * self.W
grad_reg[0, 0] = 0
reg_loss = np.sum(self.alpha * (self.W) ** 2)
grads = - X.T @ (y_true - X @ self.W) / X.shape[0] + grad_reg
self.W = self.optimizer.apply_grads(self.W, grads)
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss, grads, reg_loss
def fit(self, X, y_true, X_valid=None, y_valid=None, grad_tol=0.0001, n_iters=10000):
grad_norm = np.inf
n_iter = 0
losses = []
valid_losses = []
reg_losses = []
while (grad_norm > grad_tol) and (n_iter < n_iters):
loss, grads, reg_loss = self.one_step_opt(X, y_true)
grad_norm = np.linalg.norm(grads)
n_iter += 1
losses.append(loss[0][0])
valid_loss = (y_valid - self.predict(X_valid)).T @ (y_valid - self.predict(X_valid)) / X_valid.shape[0]
valid_losses.append(valid_loss[0][0])
reg_losses.append(reg_loss)
return np.array(losses), np.array(valid_losses), np.array(reg_losses)
def fit_closed_form(self, X, y_true, X_valid=None, y_valid=None):
self.W = np.linalg.inv(X.T @ X) @ X.T @ y_true
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss
#plt.figure(figsize=(10, 10))
terminal_losses = []
alphas = np.arange(0, 0.01, 0.001)
Ws = []
for alpha in alphas:
k = 17
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
lr = LR_valid_L2(num_features=x_poly.shape[1], alpha=alpha)
losses, valid_losses, reg_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
terminal_losses.append(valid_losses[-1])
Ws.append(lr.W[2])
total_losses = losses + reg_losses
'''
plt.subplot(2, 1, 1)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.plot(reg_losses, color='green')
plt.plot(total_losses, color='black')
plt.ylim((0, 0.005))
plt.title(f'deg={k}')
t_poly = make_poly(t, k)
y_pred = lr.predict(t_poly)
plt.subplot(2, 1, 2)
plt.plot(t[:, 1], y_pred, color='red')
plt.plot(t[:, 1], y_pred, color='red')
plt.scatter(x[8000:, 1], y[8000:], color='blue')
plt.scatter(x_small[:, 1], y_small, color='pink', s=50)
plt.ylim((-0.15, 0.15))
plt.title(f'deg={k}')
plt.show()
'''
plt.plot(alphas, Ws)
#plt.ylim((0, 0.01))
###Output
_____no_output_____
###Markdown
$$J(W) = J_{old}(W) + \alpha (|w_1| + \dots + |w_p|)$$
###Code
class LR_valid_L1:
def __init__(self, num_features=1, optimizer=utils.GD(0.05), alpha=0):
self.W = np.zeros((num_features, 1)) + 5
self.optimizer = optimizer
self.alpha = alpha
def predict(self, X):
y = X @ self.W
return y
def one_step_opt(self, X, y_true):
grad_reg = self.alpha * np.sign(self.W)
grad_reg[0, 0] = 0
reg_loss = np.sum(self.alpha * np.abs(self.W))
grads = - X.T @ (y_true - X @ self.W) / X.shape[0] + grad_reg
self.W = self.optimizer.apply_grads(self.W, grads)
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss, grads, reg_loss
def fit(self, X, y_true, X_valid=None, y_valid=None, grad_tol=0.0001, n_iters=10000):
grad_norm = np.inf
n_iter = 0
losses = []
valid_losses = []
reg_losses = []
while (grad_norm > grad_tol) and (n_iter < n_iters):
loss, grads, reg_loss = self.one_step_opt(X, y_true)
grad_norm = np.linalg.norm(grads)
n_iter += 1
losses.append(loss[0][0])
valid_loss = (y_valid - self.predict(X_valid)).T @ (y_valid - self.predict(X_valid)) / X_valid.shape[0]
valid_losses.append(valid_loss[0][0])
reg_losses.append(reg_loss)
return np.array(losses), np.array(valid_losses), np.array(reg_losses)
def fit_closed_form(self, X, y_true, X_valid=None, y_valid=None):
self.W = np.linalg.inv(X.T @ X) @ X.T @ y_true
loss = (y_true - self.predict(X)).T @ (y_true - self.predict(X)) / X.shape[0]
return loss
plt.figure(figsize=(10, 10))
terminal_losses = []
alphas = np.arange(0.01, 0.005, 0.001)
Ws = []
alpha = 0.003
#for alpha in alphas:
k = 17
x_poly = make_poly(x_small, k)
mu = np.mean(x_poly, axis=0)
std = np.std(x_poly, axis=0)
x_poly = (x_poly - mu) / std
x_poly[:, 0] = 1
x_valid = make_poly(x[100:1100], k)
x_valid = (x_valid - mu) / std
x_valid[:, 0] = 1
y_valid = y[100:1100]
lr = LR_valid_L1(num_features=x_poly.shape[1], alpha=alpha)
losses, valid_losses, reg_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
terminal_losses.append(valid_losses[-1])
Ws.append(lr.W[2])
total_losses = losses + reg_losses
plt.subplot(2, 1, 1)
plt.plot(losses, color='blue')
plt.plot(valid_losses, color='orange')
plt.plot(reg_losses, color='green')
plt.plot(total_losses, color='black')
#plt.ylim((0, 0.005))
plt.title(f'deg={k}')
t_poly = make_poly(t, k)
y_pred = lr.predict(t_poly)
plt.subplot(2, 1, 2)
plt.plot(t[:, 1], y_pred, color='red')
plt.plot(t[:, 1], y_pred, color='red')
plt.scatter(x[8000:, 1], y[8000:], color='blue')
plt.scatter(x_small[:, 1], y_small, color='pink', s=50)
plt.ylim((-0.15, 0.15))
plt.title(f'deg={k}')
plt.show()
terminal_losses = []
alphas = np.arange(0.009, 0.01, 0.0001)
Ws = []
k = 17
x_poly = make_poly(x_small, k)
x_valid = make_poly(x[100:1100], k)
y_valid = y[100:1100]
for alpha in alphas:
lr = LR_valid_L1(num_features=x_poly.shape[1], alpha=alpha)
losses, valid_losses, reg_losses = lr.fit(x_poly, y_small, X_valid=x_valid, y_valid=y_valid)
terminal_losses.append(valid_losses[-1])
Ws.append(lr.W[2])
total_losses = losses + reg_losses
###Output
_____no_output_____ |
src/scripts/mondo_scratch_notebook.ipynb | ###Markdown
SSSOM Files
###Code
import pandas as pd
from pathlib import Path
from argparse import ArgumentParser
curiemap = {
'OMIM': 'http://omim.org/entry/',
'OMIMPS': 'http://www.omim.org/phenotypicSeries/',
'ORPHA': 'http://www.orpha.net/ORDO/Orphanet_',
'MONDO': 'http://purl.obolibrary.org/obo/MONDO_'
}
def get_dataframe(tsv):
try:
df = pd.read_csv(tsv,sep="\t", comment="#")
df["source"]=Path(tsv).stem
return df
except pd.errors.EmptyDataError:
print("WARNING! ", tsv, " is empty and has been skipped.")
sssom_tsv_path = "/Users/matentzn/ws/mondo/src/ontology/mappings/ordo-omim.sssom.tsv"
df = get_dataframe(sssom_tsv_path)
sssom_files = {}
for index, row in df.iterrows():
object_id = row["object_id"]
prefix = object_id.split(":")[0]
if prefix not in sssom_files:
sssom_files[prefix] = []
sssom_files[prefix].append(row)
for prefix in sssom_files:
    print(prefix)
print(df.head())
ps = "https://omim.org/phenotypicSeriesTitles/all?format=tsv"
mim2gene = "https://omim.org/static/omim/data/mim2gene.txt"
mimTitles = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/mimTitles.txt"
genemap2 = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/genemap2.txt"
morbidmap = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/morbidmap.txt"
import pandas as pd
df_ps = pd.read_csv(ps,sep="\t")
df_ps.head()
import pandas as pd
diff_current_file="../ontology/reports/report-2020-06-30-release.tsv"
diff_compare_file="../ontology/reports/report-2019-06-29-release.tsv"
raw=pd.read_csv(diff_current_file,sep="\t")
raw=raw.fillna("None")
raw2=pd.read_csv(diff_compare_file,sep="\t")
raw2=raw2.fillna("None")
replace=dict()
replace['<http://purl.obolibrary.org/obo/IAO_']="IAO:"
replace['<http://purl.obolibrary.org/obo/mondo#']="mondo:"
replace['<http://purl.org/dc/elements/1.1/']="dce:"
replace['<http://purl.org/dc/terms/']="dc:"
replace['<http://www.geneontology.org/formats/oboInOwl#']="oio:"
replace['<http://www.w3.org/1999/02/22-rdf-syntax-ns#']="rdf:"
replace['<http://www.w3.org/2000/01/rdf-schema#']="rdfs:"
replace['<http://www.w3.org/2002/07/owl#']="owl:"
replace['<http://www.w3.org/2004/02/skos/core#']="skos:"
replace['>']=""
raw2
df2=raw2[['?term','?property','?value','?p','?v']].drop_duplicates()
df2['?property'] = df2['?property'].replace(replace, regex=True)
df2['?p'] = df2['?p'].replace(replace, regex=True)
df1=raw[['?term','?property','?value','?p','?v']].drop_duplicates()
df1['?property'] = df1['?property'].replace(replace, regex=True)
df1['?p'] = df1['?p'].replace(replace, regex=True)
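# Concatenating df2 twice and dropping all duplicates keeps only the rows of df1
# that do not appear in df2, i.e. a set difference of the two frames.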
df=pd.concat([df1, df2, df2]).drop_duplicates(keep=False)
df.to_csv("check.tsv",sep="\t")
df[['?term','?property']].drop_duplicates().groupby(['?property']).agg(['count'])
df_d=pd.concat([raw, raw2, raw2]).drop_duplicates(keep=False)
df_d
df_xref=df_d[((df_d['?property']=="<http://www.geneontology.org/formats/oboInOwl#hasDbXref>") & (df_d['?p']!="None") & (df_d['?v']=="MONDO:equivalentTo"))].copy()
df_xref['source'] = df_xref['?value'].str.split(r":", expand=True)[0]
df_xref=df_xref[['?term','source']].drop_duplicates()
df_xref
df_xref.groupby(['source']).agg(['count'])
###Output
_____no_output_____
###Markdown
SSSOM Files
###Code
import pandas as pd
from pathlib import Path
from argparse import ArgumentParser
curiemap = {
'OMIM': 'http://omim.org/entry/',
'OMIMPS': 'http://www.omim.org/phenotypicSeries/',
'ORPHA': 'http://www.orpha.net/ORDO/Orphanet_',
'MONDO': 'http://purl.obolibrary.org/obo/MONDO_'
}
def get_dataframe(tsv):
try:
df = pd.read_csv(tsv,sep="\t", comment="#")
df["source"]=Path(tsv).stem
return df
except pd.errors.EmptyDataError:
print("WARNING! ", tsv, " is empty and has been skipped.")
sssom_tsv_path = "/Users/matentzn/ws/mondo/src/ontology/mappings/ordo-omim.sssom.tsv"
df = get_dataframe(sssom_tsv_path)
sssom_files = {}
for index, row in df.iterrows():
object_id = row["object_id"]
prefix = object_id.split(":")[0]
if prefix not in sssom_files:
sssom_files[prefix] = []
sssom_files[prefix].append(row)
for prefix in sssom_files:
    print(prefix)
print(df.head())
ps = "https://omim.org/phenotypicSeriesTitles/all?format=tsv"
mim2gene = "https://omim.org/static/omim/data/mim2gene.txt"
mimTitles = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/mimTitles.txt"
genemap2 = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/genemap2.txt"
morbidmap = "https://data.omim.org/downloads/Wi95pIjsRfqt4ioi9rEJNQ/morbidmap.txt"
import pandas as pd
df_ps = pd.read_csv(ps,sep="\t")
df_ps.head()
import pandas as pd
diff_current_file="../ontology/reports/report-2020-06-30-release.tsv"
diff_compare_file="../ontology/reports/report-2019-06-29-release.tsv"
raw=pd.read_csv(diff_current_file,sep="\t")
raw=raw.fillna("None")
raw2=pd.read_csv(diff_compare_file,sep="\t")
raw2=raw2.fillna("None")
replace=dict()
replace['<http://purl.obolibrary.org/obo/IAO_']="IAO:"
replace['<http://purl.obolibrary.org/obo/mondo#']="mondo:"
replace['<http://purl.org/dc/elements/1.1/']="dce:"
replace['<http://purl.org/dc/terms/']="dc:"
replace['<http://www.geneontology.org/formats/oboInOwl#']="oio:"
replace['<http://www.w3.org/1999/02/22-rdf-syntax-ns#']="rdf:"
replace['<http://www.w3.org/2000/01/rdf-schema#']="rdfs:"
replace['<http://www.w3.org/2002/07/owl#']="owl:"
replace['<http://www.w3.org/2004/02/skos/core#']="skos:"
replace['>']=""
raw2
df2=raw2[['?term','?property','?value','?p','?v']].drop_duplicates()
df2['?property'] = df2['?property'].replace(replace, regex=True)
df2['?p'] = df2['?p'].replace(replace, regex=True)
df1=raw[['?term','?property','?value','?p','?v']].drop_duplicates()
df1['?property'] = df1['?property'].replace(replace, regex=True)
df1['?p'] = df1['?p'].replace(replace, regex=True)
df=pd.concat([df1, df2, df2]).drop_duplicates(keep=False)
df.to_csv("check.tsv",sep="\t")
df[['?term','?property']].drop_duplicates().groupby(['?property']).agg(['count'])
df_d=pd.concat([raw, raw2, raw2]).drop_duplicates(keep=False)
df_d
df_xref=df_d[((df_d['?property']=="<http://www.geneontology.org/formats/oboInOwl#hasDbXref>") & (df_d['?p']!="None") & (df_d['?v']=="MONDO:equivalentTo"))].copy()
df_xref['source'] = df_xref['?value'].str.split(r":", expand=True)[0]
df_xref=df_xref[['?term','source']].drop_duplicates()
df_xref
df_xref.groupby(['source']).agg(['count'])
import pandas as pd
import os
report_last_release="../ontology/reports/mondo_base_last_release-report.tsv"
report_current_release="../ontology/reports/mondo_base_current_release-report.tsv"
output="../ontology/reports/release-report-changed-terms.tsv"
output_new="../ontology/reports/release-report-new-terms.tsv"
def load_report_data(fn, prefix):
df = pd.read_csv(fn, sep='\t')
df.columns = ['mondo_id', 'label', 'definition','obsoletion_candidate','obsolete']
df['mondo_id']=[i.replace("<http://purl.obolibrary.org/obo/MONDO_","MONDO:").replace(">","") for i in df['mondo_id']]
df_m = pd.melt(df, id_vars='mondo_id', value_vars=['label', 'definition','obsoletion_candidate','obsolete'])
df_m.columns = ['mondo_id', 'property', 'value_'+prefix]
return df_m
def format_new_table(df):
if 'definition' not in df:
df['definition'] = ''
if 'label' not in df:
df['label'] = ''
if 'obsolete' not in df:
df['obsolete'] = ''
if 'obsoletion_candidate' not in df:
df['obsoletion_candidate'] = ''
df.fillna('', inplace=True)
return df[['mondo_id','label','definition','obsolete','obsoletion_candidate']]
df_report_last_release=load_report_data(report_last_release,'old')
df_report_current_release=load_report_data(report_current_release,'new')
last_ids = df_report_last_release['mondo_id'].unique()
current_ids = df_report_current_release['mondo_id'].unique()
new_terms = [x for x in current_ids if x not in last_ids]
df_report_full= pd.merge(df_report_last_release, df_report_current_release, on=['mondo_id','property'],how='right')
df_report_full.fillna('', inplace=True)
df_report = df_report_full[df_report_full['value_old']!=df_report_full['value_new']].copy()
df_report_new=df_report[df_report['mondo_id'].isin(new_terms)].copy()
del df_report_new['value_old']
df_report_new_wide=df_report_new.pivot(index='mondo_id', columns='property', values='value_new')
df_report_new_wide=pd.DataFrame(df_report_new_wide.to_records())
df_report_new_wide = format_new_table(df_report_new_wide)
df_report_new = df_report_new_wide.copy()
df_report_changed=df_report[~df_report['mondo_id'].isin(new_terms)].copy()
df_report_new.to_csv(output_new, sep="\t",index=False)
df_report_changed.to_csv(output, sep="\t",index=False)
df_report_new.head()
df_report_changed.head(15)
###Output
_____no_output_____ |
classificar_iris/iris_cruzada.ipynb | ###Markdown
###Code
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
base = pd.read_csv('/content/drive/My Drive/Deep Learning/Iris/original.csv')
previsores = base.iloc[:, 0:4].values
classe = base.iloc[:, 4].values
from sklearn.preprocessing import LabelEncoder
labelencoder = LabelEncoder()
classe = labelencoder.fit_transform(classe)
classe_dummy = np_utils.to_categorical(classe)
def criar_rede():
classificador = Sequential()
classificador.add(Dense(units = 4, activation = 'relu', input_dim = 4))
classificador.add(Dense(units = 4, activation = 'relu'))
classificador.add(Dense(units = 3, activation = 'softmax'))
classificador.compile(optimizer = 'adam', loss = 'categorical_crossentropy',
metrics = ['categorical_accuracy'])
return classificador
classificador = KerasClassifier(build_fn = criar_rede,epochs = 1000, batch_size = 10)
resultados = cross_val_score(estimator=classificador,X = previsores,y = classe, cv = 10, scoring = 'accuracy')
media = resultados.mean()
desvio = resultados.std()
print(media)
desvio
resultados
###Output
_____no_output_____ |
main/eit_real.ipynb | ###Markdown
Data Preprocessing - EIT - Machine Learning Copyright (c) 2018, Faststream Technologies Author: Sudhanva Narayana
###Code
import numpy as np
import pandas as pd
df = pd.read_excel('../assets/EIT Clean.xlsx', skiprows=[0, 1, 2])
df.head()
to_be_removed = ['res_min_7_8', 'res_max_7_8', 'res_min_8_1',
'res_max_8_1', 'part', 'distance', 'no_electrodes', 'name', 'metric']
for i in range(1, 9):
    df[str(i)] = (df['res_min_' + str(i)] + df['res_max_' + str(i)]) / 2  # midpoint of the min and max resistance
to_be_removed.append('res_min_' + str(i))
to_be_removed.append('res_max_' + str(i))
for i, j in zip(range(1, 9), range(2, 8)):
    df[str(i) + '_' + str(j)] = (df['res_min_' + str(i) + '_' + str(j)] + df['res_max_' + str(i) + '_' + str(j)]) / 2  # midpoint of the min and max resistance
to_be_removed.append('res_min_' + str(i) + '_' + str(j))
to_be_removed.append('res_max_' + str(i) + '_' + str(j))
df['7_8'] = (df['res_min_7_8'] + df['res_max_7_8']) / 2
df['8_1'] = (df['res_min_8_1'] + df['res_max_8_1']) / 2
df.drop(to_be_removed, axis=1, inplace=True)
df.head()
df.columns
df.to_csv('../assets/eit_clean_final.csv', index=False)
###Output
_____no_output_____ |
colabs/mapping.ipynb | ###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
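###Markdown
For illustration only, every value below is a hypothetical placeholder rather than a default shipped with the recipe; a filled-in set of parameters might look something like this:
###Code
# Hypothetical example values; substitute your own sheet URL, tab name and BigQuery dataset/table names.
EXAMPLE_FIELDS = {
  'auth_read': 'user',
  'sheet': 'https://docs.google.com/spreadsheets/d/EXAMPLE_SHEET_ID/edit',
  'tab': 'Sheet1',
  'in_dataset': 'my_dataset',
  'in_table': 'raw_table',
  'out_dataset': 'my_dataset',
  'out_view': 'mapped_view',
}
print("Example Parameters: %s" % EXAMPLE_FIELDS)
###Output
_____no_output_____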
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet','kind': 'string','order': 1,'default': ''}},
'tab': {'field': {'name': 'tab','kind': 'string','order': 2,'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset','kind': 'string','order': 3,'default': ''}},
'table': {'field': {'name': 'in_table','kind': 'string','order': 4,'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset','kind': 'string','order': 7,'default': ''}},
'view': {'field': {'name': 'out_view','kind': 'string','order': 8,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet','kind': 'string','order': 1,'default': ''}},
'tab': {'field': {'name': 'tab','kind': 'string','order': 2,'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset','kind': 'string','order': 3,'default': ''}},
'table': {'field': {'name': 'in_table','kind': 'string','order': 4,'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset','kind': 'string','order': 7,'default': ''}},
'view': {'field': {'name': 'out_view','kind': 'string','order': 8,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet','kind': 'string','order': 1,'default': ''}},
'tab': {'field': {'name': 'tab','kind': 'string','order': 2,'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset','kind': 'string','order': 3,'default': ''}},
'table': {'field': {'name': 'in_table','kind': 'string','order': 4,'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset','kind': 'string','order': 7,'default': ''}},
'view': {'field': {'name': 'out_view','kind': 'string','order': 8,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
Column MappingUse sheet to define keyword to column mappings. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping Parameters 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet', 'kind': 'string', 'order': 1, 'default': ''}},
'tab': {'field': {'name': 'tab', 'kind': 'string', 'order': 2, 'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset', 'kind': 'string', 'order': 3, 'default': ''}},
'table': {'field': {'name': 'in_table', 'kind': 'string', 'order': 4, 'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset', 'kind': 'string', 'order': 7, 'default': ''}},
'view': {'field': {'name': 'out_view', 'kind': 'string', 'order': 8, 'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet','kind': 'string','order': 1,'default': ''}},
'tab': {'field': {'name': 'tab','kind': 'string','order': 2,'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset','kind': 'string','order': 3,'default': ''}},
'table': {'field': {'name': 'in_table','kind': 'string','order': 4,'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset','kind': 'string','order': 7,'default': ''}},
'view': {'field': {'name': 'out_view','kind': 'string','order': 8,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet','kind': 'string','order': 1,'default': ''}},
'tab': {'field': {'name': 'tab','kind': 'string','order': 2,'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset','kind': 'string','order': 3,'default': ''}},
'table': {'field': {'name': 'in_table','kind': 'string','order': 4,'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset','kind': 'string','order': 7,'default': ''}},
'view': {'field': {'name': 'out_view','kind': 'string','order': 8,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'sheet': '',
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'name': 'sheet','kind': 'string','order': 1,'default': ''}},
'tab': {'field': {'name': 'tab','kind': 'string','order': 2,'default': ''}},
'in': {
'dataset': {'field': {'name': 'in_dataset','kind': 'string','order': 3,'default': ''}},
'table': {'field': {'name': 'in_table','kind': 'string','order': 4,'default': ''}}
},
'out': {
'dataset': {'field': {'name': 'out_dataset','kind': 'string','order': 7,'default': ''}},
'view': {'field': {'name': 'out_view','kind': 'string','order': 8,'default': ''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Column Mapping ParametersUse sheet to define keyword to column mappings. 1. For the sheet, provide the full URL. 1. A tab called Mapping will be created. 1. Follow the instructions in the tab to complete the mapping. 1. The in table should have the columns you want to map. 1. The out view will have the new columns created in the mapping.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'sheet': '',
'auth_read': 'user', # Credentials used for reading data.
'tab': '',
'in_dataset': '',
'in_table': '',
'out_dataset': '',
'out_view': '',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Column MappingThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'mapping': {
'auth': 'user',
'sheet': {'field': {'kind': 'string','name': 'sheet','order': 1,'default': ''}},
'out': {
'dataset': {'field': {'kind': 'string','name': 'out_dataset','order': 7,'default': ''}},
'view': {'field': {'kind': 'string','name': 'out_view','order': 8,'default': ''}}
},
'in': {
'table': {'field': {'kind': 'string','name': 'in_table','order': 4,'default': ''}},
'dataset': {'field': {'kind': 'string','name': 'in_dataset','order': 3,'default': ''}}
},
'tab': {'field': {'kind': 'string','name': 'tab','order': 2,'default': ''}}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____ |
notebooks/ch-gates/multiple-qubits-entangled-states.ipynb | ###Markdown
Multiple Qubits and Entangled States Single qubits are interesting, but individually they offer no computational advantage. We will now look at how we represent multiple qubits, and how these qubits can interact with each other. We have seen how we can represent the state of a qubit using a 2D-vector, now we will see how we can represent the state of multiple qubits. Contents1. [Representing Multi-Qubit States](represent) 1.1 [Exercises](ex1)2. [Single Qubit Gates on Multi-Qubit Statevectors](single-qubit-gates) 2.1 [Exercises](ex2)3. [Multi-Qubit Gates](multi-qubit-gates) 3.1 [The CNOT-gate](cnot) 3.2 [Entangled States](entangled) 3.3 [Visualizing Entangled States](visual) 3.4 [Exercises](ex3) 1. Representing Multi-Qubit States We saw that a single bit has two possible states, and a qubit state has two complex amplitudes. Similarly, two bits have four possible states:`00` `01` `10` `11`And to describe the state of two qubits requires four complex amplitudes. We store these amplitudes in a 4D-vector like so:$$ |a\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix} $$The rules of measurement still work in the same way:$$ p(|00\rangle) = |\langle 00 | a \rangle |^2 = |a_{00}|^2$$And the same implications hold, such as the normalisation condition:$$ |a_{00}|^2 + |a_{01}|^2 + |a_{10}|^2 + |a_{11}|^2 = 1$$If we have two separated qubits, we can describe their collective state using the tensor product:$$ |a\rangle = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \quad |b\rangle = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} $$$$ |ba\rangle = |b\rangle \otimes |a\rangle = \begin{bmatrix} b_0 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \\ b_1 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} b_0 a_0 \\ b_0 a_1 \\ b_1 a_0 \\ b_1 a_1 \end{bmatrix}$$And following the same rules, we can use the tensor product to describe the collective state of any number of qubits. Here is an example with three qubits:$$ |cba\rangle = \begin{bmatrix} c_0 b_0 a_0 \\ c_0 b_0 a_1 \\ c_0 b_1 a_0 \\ c_0 b_1 a_1 \\ c_1 b_0 a_0 \\ c_1 b_0 a_1 \\ c_1 b_1 a_0 \\ c_1 b_1 a_1 \\ \end{bmatrix}$$If we have $n$ qubits, we will need to keep track of $2^n$ complex amplitudes. As we can see, these vectors grow exponentially with the number of qubits. This is the reason quantum computers with large numbers of qubits are so difficult to simulate. A modern laptop can easily simulate a general quantum state of around 20 qubits, but simulating 100 qubits is too difficult for the largest supercomputers.Let's look at an example circuit:
###Code
from qiskit import QuantumCircuit, Aer, assemble
from math import pi
import numpy as np
from qiskit.visualization import plot_histogram, plot_bloch_multivector
qc = QuantumCircuit(3)
# Apply H-gate to each qubit:
for qubit in range(3):
qc.h(qubit)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
Each qubit is in the state $|+\rangle$, so we should see the vector:$$ |{+++}\rangle = \frac{1}{\sqrt{8}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ \end{bmatrix}$$
###Code
# Let's see the result
svsim = Aer.get_backend('statevector_simulator')
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(final_state) instead.
from qiskit_textbook.tools import array_to_latex
array_to_latex(final_state, pretext="\\text{Statevector} = ")
###Output
_____no_output_____
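###Markdown
Before moving on, as an optional cross-check we can assemble the same vector by hand with `np.kron` (this assumes `numpy` is available as `np` from the imports above) and compare it with the simulator's output:
###Code
# Optional hand-check: |+++> built directly from three copies of |+>
plus = np.array([1, 1]) / np.sqrt(2)
manual_state = np.kron(plus, np.kron(plus, plus))
print(np.allclose(manual_state, final_state))  # expect True
###Output
_____no_output_____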
###Markdown
And we have our expected result. 1.2 Quick Exercises: 1. Write down the tensor product of the qubits: a) $|0\rangle|1\rangle$ b) $|0\rangle|+\rangle$ c) $|+\rangle|1\rangle$ d) $|-\rangle|+\rangle$ 2. Write the state: $|\psi\rangle = \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{i}{\sqrt{2}}|01\rangle $ as two separate qubits. 2. Single Qubit Gates on Multi-Qubit Statevectors We have seen that an X-gate is represented by the matrix:$$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$And that it acts on the state $|0\rangle$ as so:$$X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1\end{bmatrix}$$but it may not be clear how an X-gate would act on a qubit in a multi-qubit vector. Fortunately, the rule is quite simple; just as we used the tensor product to calculate multi-qubit statevectors, we use the tensor product to calculate matrices that act on these statevectors. For example, in the circuit below:
###Code
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.draw()
###Output
_____no_output_____
###Markdown
we can represent the simultaneous operations (H & X) using their tensor product:$$X|q_1\rangle \otimes H|q_0\rangle = (X\otimes H)|q_1 q_0\rangle$$The operation looks like this:$$X\otimes H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \\ 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\\end{bmatrix}$$Which we can then apply to our 4D statevector $|q_1 q_0\rangle$. This can become quite messy, you will often see the clearer notation:$$X\otimes H = \begin{bmatrix} 0 & H \\ H & 0\\\end{bmatrix}$$Instead of calculating this by hand, we can use Qiskit’s `unitary_simulator` to calculate this for us. The unitary simulator multiplies all the gates in our circuit together to compile a single unitary matrix that performs the whole quantum circuit:
###Code
usim = Aer.get_backend('unitary_simulator')
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
###Output
_____no_output_____
###Markdown
and view the results:
###Code
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(unitary) instead.
from qiskit_textbook.tools import array_to_latex
array_to_latex(unitary, pretext="\\text{Circuit = }\n")
###Output
_____no_output_____
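###Markdown
As an optional aside, the same matrix can be assembled by hand with `np.kron`, using the qubit ordering described above (qubit 1 supplies the left factor); the comparison below is a sketch that assumes `unitary` is returned as a plain array:
###Code
# Optional hand-check: X (on qubit 1) tensored with H (on qubit 0)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# expect True; if your Qiskit version returns an Operator object, compare against unitary.data instead
print(np.allclose(np.kron(X, H), unitary))
###Output
_____no_output_____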
###Markdown
If we want to apply a gate to only one qubit at a time (such as in the circuit below), we describe this using tensor product with the identity matrix, e.g.:$$ X \otimes I $$
###Code
qc = QuantumCircuit(2)
qc.x(1)
qc.draw()
# Simulate the unitary
usim = Aer.get_backend('unitary_simulator')
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
# Display the results:
array_to_latex(unitary, pretext="\\text{Circuit = } ")
###Output
_____no_output_____
###Markdown
We can see Qiskit has performed the tensor product:$$X \otimes I =\begin{bmatrix} 0 & I \\ I & 0\\\end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\\end{bmatrix}$$ 2.1 Quick Exercises: 1. Calculate the single qubit unitary ($U$) created by the sequence of gates: $U = XZH$. Use Qiskit's unitary simulator to check your results. 2. Try changing the gates in the circuit above. Calculate their tensor product, and then check your answer using the unitary simulator. **Note:** Different books, software packages and websites order their qubits differently. This means the tensor product of the same circuit can look very different. Try to bear this in mind when consulting other sources. 3. Multi-Qubit Gates Now that we know how to represent the state of multiple qubits, we are ready to learn how qubits interact with each other. An important two-qubit gate is the CNOT-gate. 3.1 The CNOT-Gate You have come across this gate before in _[The Atoms of Computation](../ch-states/the-atoms-of-computation)._ This gate is a conditional gate that performs an X-gate on the second qubit (target), if the state of the first qubit (control) is $|1\rangle$. The gate is drawn on a circuit like this, with `q0` as the control and `q1` as the target:
###Code
qc = QuantumCircuit(2)
# Apply CNOT
qc.cx(0,1)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
When our qubits are not in superposition of $|0\rangle$ or $|1\rangle$ (behaving as classical bits), this gate is very simple and intuitive to understand. We can use the classical truth table:| Input (t,c) | Output (t,c) ||:-----------:|:------------:|| 00 | 00 || 01 | 11 || 10 | 10 || 11 | 01 |And acting on our 4D-statevector, it has one of the two matrices:$$\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix}, \quad\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix}$$depending on which qubit is the control and which is the target. Different books, simulators and papers order their qubits differently. In our case, the left matrix corresponds to the CNOT in the circuit above. This matrix swaps the amplitudes of $|01\rangle$ and $|11\rangle$ in our statevector:$$ |a\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix}, \quad \text{CNOT}|a\rangle = \begin{bmatrix} a_{00} \\ a_{11} \\ a_{10} \\ a_{01} \end{bmatrix} \begin{matrix} \\ \leftarrow \\ \\ \leftarrow \end{matrix}$$We have seen how this acts on classical states, but let’s now see how it acts on a qubit in superposition. We will put one qubit in the state $|+\rangle$:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
qc.draw()
# Let's see the result:
svsim = Aer.get_backend('statevector_simulator')
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# Print the statevector neatly:
array_to_latex(final_state, pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
As expected, this produces the state $|0\rangle \otimes |{+}\rangle = |0{+}\rangle$:$$|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)$$And let’s see what happens when we apply the CNOT gate:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
# Apply a CNOT:
qc.cx(0,1)
qc.draw()
# Let's get the result:
qobj = assemble(qc)
result = svsim.run(qobj).result()
# Print the statevector neatly:
final_state = result.get_statevector()
array_to_latex(final_state, pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
We see we have the state:$$\text{CNOT}|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This state is very interesting to us, because it is _entangled._ This leads us neatly on to the next section. 3.2 Entangled States We saw in the previous section we could create the state:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This is known as a _Bell_ state. We can see that this state has 50% probability of being measured in the state $|00\rangle$, and 50% chance of being measured in the state $|11\rangle$. Most interestingly, it has a **0%** chance of being measured in the states $|01\rangle$ or $|10\rangle$. We can see this in Qiskit:
###Code
plot_histogram(result.get_counts())
###Output
_____no_output_____
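###Markdown
As an extra, optional check we can also sample the same circuit with explicit measurements on a shot-based backend; only the outcomes `00` and `11` should ever appear (this sketch assumes the Aer `qasm_simulator` backend is available):
###Code
# Optional sampling check of the Bell state
bell = QuantumCircuit(2, 2)
bell.h(0)
bell.cx(0, 1)
bell.measure([0, 1], [0, 1])
qasm_sim = Aer.get_backend('qasm_simulator')
counts = qasm_sim.run(assemble(bell, shots=1024)).result().get_counts()
plot_histogram(counts)  # only '00' and '11' should appear
###Output
_____no_output_____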
###Markdown
This combined state cannot be written as two separate qubit states, which has interesting implications. Although our qubits are in superposition, measuring one will tell us the state of the other and collapse its superposition. For example, if we measured the top qubit and got the state $|1\rangle$, the collective state of our qubits changes like so:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \quad \xrightarrow[]{\text{measure}} \quad |11\rangle$$Even if we separated these qubits light-years away, measuring one qubit collapses the superposition and appears to have an immediate effect on the other. This is the [‘spooky action at a distance’](https://en.wikipedia.org/wiki/Quantum_nonlocality) that upset so many physicists in the early 20th century.It’s important to note that the measurement result is random, and the measurement statistics of one qubit are **not** affected by any operation on the other qubit. Because of this, there is **no way** to use shared quantum states to communicate. This is known as the no-communication theorem.[1] 3.3 Visualizing Entangled StatesWe have seen that this state cannot be written as two separate qubit states, this also means we lose information when we try to plot our state on separate Bloch spheres:
###Code
plot_bloch_multivector(final_state)
###Output
_____no_output_____
###Markdown
Given how we defined the Bloch sphere in the earlier chapters, it may not be clear how Qiskit even calculates the Bloch vectors with entangled qubits like this. In the single-qubit case, the position of the Bloch vector along an axis nicely corresponds to the expectation value of measuring in that basis. If we take this as _the_ rule of plotting Bloch vectors, we arrive at this conclusion above. This shows us there is _no_ single-qubit measurement basis for which a specific measurement is guaranteed. This contrasts with our single qubit states, in which we could always pick a single-qubit basis. Looking at the individual qubits in this way, we miss the important effect of correlation between the qubits. We cannot distinguish between different entangled states. For example, the two states:$$\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle) \quad \text{and} \quad \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$will both look the same on these separate Bloch spheres, despite being very different states with different measurement outcomes. How else could we visualize this statevector? This statevector is simply a collection of four amplitudes (complex numbers), and there are endless ways we can map this to an image. One such visualization is the _Q-sphere_, where each amplitude is represented by a blob on the surface of a sphere. The size of the blob is proportional to the magnitude of the amplitude, and the colour is proportional to the phase of the amplitude. The amplitudes for $|00\rangle$ and $|11\rangle$ are equal, and all other amplitudes are 0:
###Code
from qiskit.visualization import plot_state_qsphere
plot_state_qsphere(final_state)
###Output
_____no_output_____
###Markdown
Here we can clearly see the correlation between the qubits. The Q-sphere's shape has no significance, it is simply a nice way of arranging our blobs; the number of `0`s in the state is proportional to the states position on the Z-axis, so here we can see the amplitude of $|00\rangle$ is at the top pole of the sphere, and the amplitude of $|11\rangle$ is at the bottom pole of the sphere. 3.4 Exercise: 1. Create a quantum circuit that produces the Bell state: $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$. Use the statevector simulator to verify your result. 2. The circuit you created in question 1 transforms the state $|00\rangle$ to $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$, calculate the unitary of this circuit using Qiskit's simulator. Verify this unitary does in fact perform the correct transformation.3. Think about other ways you could represent a statevector visually. Can you design an interesting visualization from which you can read the magnitude and phase of each amplitude? 4. References[1] Asher Peres, Daniel R. Terno, _Quantum Information and Relativity Theory,_ 2004, https://arxiv.org/abs/quant-ph/0212023
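For Exercise 1 above, one possible approach (a sketch using only the gates introduced in this chapter, and not necessarily the intended solution) is to build the usual Bell pair and then flip one of the qubits:
###Code
# Sketch for Exercise 1 (the circuit and variable names here are illustrative)
bell_01_10 = QuantumCircuit(2)
bell_01_10.h(0)        # (|00> + |01>)/sqrt(2)
bell_01_10.cx(0, 1)    # (|00> + |11>)/sqrt(2)
bell_01_10.x(1)        # (|10> + |01>)/sqrt(2)
qobj = assemble(bell_01_10)
array_to_latex(svsim.run(qobj).result().get_statevector(), pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
The resulting statevector should have amplitudes of $\tfrac{1}{\sqrt{2}}$ on $|01\rangle$ and $|10\rangle$ and zero everywhere else.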
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Multiple Qubits and Entangled States Single qubits are interesting, but individually they offer no computational advantage. We will now look at how we represent multiple qubits, and how these qubits can interact with each other. We have seen how we can represent the state of a qubit using a 2D-vector, now we will see how we can represent the state of multiple qubits. Contents1. [Representing Multi-Qubit States](represent) 1.1 [Exercises](ex1)2. [Single Qubit Gates on Multi-Qubit Statevectors](single-qubit-gates) 2.1 [Exercises](ex2)3. [Multi-Qubit Gates](multi-qubit-gates) 3.1 [The CNOT-gate](cnot) 3.2 [Entangled States](entangled) 3.3 [Visualizing Entangled States](visual) 3.4 [Exercises](ex3) 1. Representing Multi-Qubit States We saw that a single bit has two possible states, and a qubit state has two complex amplitudes. Similarly, two bits have four possible states:`00` `01` `10` `11`And to describe the state of two qubits requires four complex amplitudes. We store these amplitudes in a 4D-vector like so:$$ |a\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix} $$The rules of measurement still work in the same way:$$ p(|00\rangle) = |\langle 00 | a \rangle |^2 = |a_{00}|^2$$And the same implications hold, such as the normalisation condition:$$ |a_{00}|^2 + |a_{01}|^2 + |a_{10}|^2 + |a_{11}|^2 = 1$$If we have two separated qubits, we can describe their collective state using the kronecker product:$$ |a\rangle = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \quad |b\rangle = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} $$$$ |ba\rangle = |b\rangle \otimes |a\rangle = \begin{bmatrix} b_0 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \\ b_1 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} b_0 a_0 \\ b_0 a_1 \\ b_1 a_0 \\ b_1 a_1 \end{bmatrix}$$And following the same rules, we can use the kronecker product to describe the collective state of any number of qubits. Here is an example with three qubits:$$ |cba\rangle = \begin{bmatrix} c_0 b_0 a_0 \\ c_0 b_0 a_1 \\ c_0 b_1 a_0 \\ c_0 b_1 a_1 \\ c_1 b_0 a_0 \\ c_1 b_0 a_1 \\ c_1 b_1 a_0 \\ c_1 b_1 a_1 \\ \end{bmatrix}$$If we have $n$ qubits, we will need to keep track of $2^n$ complex amplitudes. As we can see, these vectors grow exponentially with the number of qubits. This is the reason quantum computers with large numbers of qubits are so difficult to simulate. A modern laptop can easily simulate a general quantum state of around 20 qubits, but simulating 100 qubits is too difficult for the largest supercomputers.Let's look at an example circuit:
###Code
from qiskit import QuantumCircuit, Aer, assemble
import numpy as np
from qiskit.visualization import plot_histogram, plot_bloch_multivector
qc = QuantumCircuit(3)
# Apply H-gate to each qubit:
for qubit in range(3):
qc.h(qubit)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
Each qubit is in the state $|+\rangle$, so we should see the vector:$$ |{+++}\rangle = \frac{1}{\sqrt{8}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ \end{bmatrix}$$
###Code
# Let's see the result
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(final_state) instead.
from qiskit.visualization import array_to_latex
array_to_latex(final_state, prefix="\\text{Statevector} = ")
###Output
_____no_output_____
###Markdown
And we have our expected result. 1.2 Quick Exercises: 1. Write down the kronecker product of the qubits: a) $|0\rangle|1\rangle$ b) $|0\rangle|+\rangle$ c) $|+\rangle|1\rangle$ d) $|-\rangle|+\rangle$ 2. Write the state: $|\psi\rangle = \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{i}{\sqrt{2}}|01\rangle $ as two separate qubits. 2. Single Qubit Gates on Multi-Qubit Statevectors We have seen that an X-gate is represented by the matrix:$$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$And that it acts on the state $|0\rangle$ as so:$$X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1\end{bmatrix}$$but it may not be clear how an X-gate would act on a qubit in a multi-qubit vector. Fortunately, the rule is quite simple; just as we used the kronecker product to calculate multi-qubit statevectors, we use the tensor product to calculate matrices that act on these statevectors. For example, in the circuit below:
###Code
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.draw()
###Output
_____no_output_____
###Markdown
we can represent the simultaneous operations (H & X) using their kronecker product:$$X|q_1\rangle \otimes H|q_0\rangle = (X\otimes H)|q_1 q_0\rangle$$The operation looks like this:$$X\otimes H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$$$= \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \\ 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\end{bmatrix} $$$$= \frac{1}{\sqrt{2}}\begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\\end{bmatrix}$$Which we can then apply to our 4D statevector $|q_1 q_0\rangle$. This can become quite messy, you will often see the clearer notation:$$X\otimes H = \begin{bmatrix} 0 & H \\ H & 0\\\end{bmatrix}$$Instead of calculating this by hand, we can use Qiskit’s `aer_simulator` to calculate this for us. The Aer simulator multiplies all the gates in our circuit together to compile a single unitary matrix that performs the whole quantum circuit:
###Code
usim = Aer.get_backend('aer_simulator')
qc.save_unitary()
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
###Output
_____no_output_____
###Markdown
and view the results:
###Code
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(unitary) instead.
from qiskit.visualization import array_to_latex
array_to_latex(unitary, prefix="\\text{Circuit = }\n")
###Output
_____no_output_____
###Markdown
If we want to apply a gate to only one qubit at a time (such as in the circuit below), we describe this using kronecker product with the identity matrix, e.g.:$$ X \otimes I $$
###Code
qc = QuantumCircuit(2)
qc.x(1)
qc.draw()
# Simulate the unitary
usim = Aer.get_backend('aer_simulator')
qc.save_unitary()
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
# Display the results:
array_to_latex(unitary, prefix="\\text{Circuit = } ")
###Output
_____no_output_____
###Markdown
We can see Qiskit has performed the kronecker product:$$X \otimes I =\begin{bmatrix} 0 & I \\ I & 0\\\end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\\end{bmatrix}$$ 2.1 Quick Exercises: 1. Calculate the single qubit unitary ($U$) created by the sequence of gates: $U = XZH$. Use Qiskit's Aer simulator to check your results. 2. Try changing the gates in the circuit above. Calculate their kronecker product, and then check your answer using the Aer simulator. **Note:** Different books, software packages and websites order their qubits differently. This means the kronecker product of the same circuit can look very different. Try to bear this in mind when consulting other sources. 3. Multi-Qubit Gates Now that we know how to represent the state of multiple qubits, we are ready to learn how qubits interact with each other. An important two-qubit gate is the CNOT-gate. 3.1 The CNOT-Gate You have come across this gate before in _[The Atoms of Computation](../ch-states/the-atoms-of-computation)._ This gate is a conditional gate that performs an X-gate on the second qubit (target), if the state of the first qubit (control) is $|1\rangle$. The gate is drawn on a circuit like this, with `q0` as the control and `q1` as the target:
###Code
qc = QuantumCircuit(2)
# Apply CNOT
qc.cx(0,1)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
When our qubits are not in superposition of $|0\rangle$ or $|1\rangle$ (behaving as classical bits), this gate is very simple and intuitive to understand. We can use the classical truth table:| Input (t,c) | Output (t,c) ||:-----------:|:------------:|| 00 | 00 || 01 | 11 || 10 | 10 || 11 | 01 |And acting on our 4D-statevector, it has one of the two matrices:$$\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix}, \quad\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix}$$depending on which qubit is the control and which is the target. Different books, simulators and papers order their qubits differently. In our case, the left matrix corresponds to the CNOT in the circuit above. This matrix swaps the amplitudes of $|01\rangle$ and $|11\rangle$ in our statevector:$$ |a\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix}, \quad \text{CNOT}|a\rangle = \begin{bmatrix} a_{00} \\ a_{11} \\ a_{10} \\ a_{01} \end{bmatrix} \begin{matrix} \\ \leftarrow \\ \\ \leftarrow \end{matrix}$$We have seen how this acts on classical states, but let’s now see how it acts on a qubit in superposition. We will put one qubit in the state $|+\rangle$:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
qc.draw()
# Let's see the result:
svsim = Aer.get_backend('aer_simulator')
qc.save_statevector()
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# Print the statevector neatly:
array_to_latex(final_state, prefix="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
As expected, this produces the state $|0\rangle \otimes |{+}\rangle = |0{+}\rangle$:$$|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)$$And let’s see what happens when we apply the CNOT gate:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
# Apply a CNOT:
qc.cx(0,1)
qc.draw()
# Let's get the result:
qc.save_statevector()
qobj = assemble(qc)
result = svsim.run(qobj).result()
# Print the statevector neatly:
final_state = result.get_statevector()
array_to_latex(final_state, prefix="\\text{Statevector = }")
###Output
_____no_output_____
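###Markdown
As a purely numerical aside, multiplying the CNOT matrix written out above by the vector for $|0{+}\rangle$ reproduces the same amplitude swap (a sketch assuming `numpy` is imported as `np`, as it is earlier in this notebook):
###Code
# Optional hand-check of the CNOT action with plain numpy
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])
zero_plus = np.array([1, 1, 0, 0]) / np.sqrt(2)  # |0+> = (|00> + |01>)/sqrt(2)
print(CNOT @ zero_plus)  # expect [0.707, 0, 0, 0.707], i.e. the Bell state
###Output
_____no_output_____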
###Markdown
We see we have the state:$$\text{CNOT}|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This state is very interesting to us, because it is _entangled._ This leads us neatly on to the next section. 3.2 Entangled States We saw in the previous section we could create the state:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This is known as a _Bell_ state. We can see that this state has 50% probability of being measured in the state $|00\rangle$, and 50% chance of being measured in the state $|11\rangle$. Most interestingly, it has a **0%** chance of being measured in the states $|01\rangle$ or $|10\rangle$. We can see this in Qiskit:
###Code
plot_histogram(result.get_counts())
###Output
_____no_output_____
###Markdown
This combined state cannot be written as two separate qubit states, which has interesting implications. Although our qubits are in superposition, measuring one will tell us the state of the other and collapse its superposition. For example, if we measured the top qubit and got the state $|1\rangle$, the collective state of our qubits changes like so:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \quad \xrightarrow[]{\text{measure}} \quad |11\rangle$$Even if we separated these qubits light-years away, measuring one qubit collapses the superposition and appears to have an immediate effect on the other. This is the [‘spooky action at a distance’](https://en.wikipedia.org/wiki/Quantum_nonlocality) that upset so many physicists in the early 20th century.It’s important to note that the measurement result is random, and the measurement statistics of one qubit are **not** affected by any operation on the other qubit. Because of this, there is **no way** to use shared quantum states to communicate. This is known as the no-communication theorem.[1] 3.3 Visualizing Entangled StatesWe have seen that this state cannot be written as two separate qubit states, this also means we lose information when we try to plot our state on separate Bloch spheres:
###Code
plot_bloch_multivector(final_state)
###Output
_____no_output_____
###Markdown
Given how we defined the Bloch sphere in the earlier chapters, it may not be clear how Qiskit even calculates the Bloch vectors with entangled qubits like this. In the single-qubit case, the position of the Bloch vector along an axis nicely corresponds to the expectation value of measuring in that basis. If we take this as _the_ rule of plotting Bloch vectors, we arrive at this conclusion above. This shows us there is _no_ single-qubit measurement basis for which a specific measurement is guaranteed. This contrasts with our single qubit states, in which we could always pick a single-qubit basis. Looking at the individual qubits in this way, we miss the important effect of correlation between the qubits. We cannot distinguish between different entangled states. For example, the two states:$$\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle) \quad \text{and} \quad \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$will both look the same on these separate Bloch spheres, despite being very different states with different measurement outcomes.How else could we visualize this statevector? This statevector is simply a collection of four amplitudes (complex numbers), and there are endless ways we can map this to an image. One such visualization is the _Q-sphere,_ here each amplitude is represented by a blob on the surface of a sphere. The size of the blob is proportional to the magnitude of the amplitude, and the colour is proportional to the phase of the amplitude. The amplitudes for $|00\rangle$ and $|11\rangle$ are equal, and all other amplitudes are 0:
###Code
from qiskit.visualization import plot_state_qsphere
plot_state_qsphere(final_state)
###Output
_____no_output_____
###Markdown
Here we can clearly see the correlation between the qubits. The Q-sphere's shape has no significance, it is simply a nice way of arranging our blobs; the number of `0`s in the state is proportional to the state's position on the Z-axis, so here we can see the amplitude of $|00\rangle$ is at the top pole of the sphere, and the amplitude of $|11\rangle$ is at the bottom pole of the sphere. 3.4 Exercises: 1. Create a quantum circuit that produces the Bell state: $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$. Use the statevector simulator to verify your result. 2. The circuit you created in question 1 transforms the state $|00\rangle$ to $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$, calculate the unitary of this circuit using Qiskit's simulator. Verify this unitary does in fact perform the correct transformation. 3. Think about other ways you could represent a statevector visually. Can you design an interesting visualization from which you can read the magnitude and phase of each amplitude? 4. References[1] Asher Peres, Daniel R. Terno, _Quantum Information and Relativity Theory,_ 2004, https://arxiv.org/abs/quant-ph/0212023
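One of several circuits that produces the Bell state asked for in exercise 1 (a sketch only, using the imports and simulator defined earlier in this notebook; it is by no means the only valid answer):

```python
qc = QuantumCircuit(2)
qc.x(1)      # |00>  ->  |10>
qc.h(0)      # ->  (|10> + |11>)/sqrt(2)
qc.cx(0,1)   # ->  (|10> + |01>)/sqrt(2)
qobj = assemble(qc)
final = svsim.run(qobj).result().get_statevector()
array_to_latex(final, pretext="\\text{Statevector = }")
```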
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Multiple Qubits and Entangled States Single qubits are interesting, but individually they offer no computational advantage. We will now look at how we represent multiple qubits, and how these qubits can interact with each other. We have seen how we can represent the state of a qubit using a 2D-vector, now we will see how we can represent the state of multiple qubits. Contents1. [Representing Multi-Qubit States](represent) 1.1 [Exercises](ex1)2. [Single Qubit Gates on Multi-Qubit Statevectors](single-qubit-gates) 2.1 [Exercises](ex2)3. [Multi-Qubit Gates](multi-qubit-gates) 3.1 [The CNOT-gate](cnot) 3.2 [Entangled States](entangled) 3.3 [Visualizing Entangled States](visual) 3.4 [Exercises](ex3) 1. Representing Multi-Qubit States We saw that a single bit has two possible states, and a qubit state has two complex amplitudes. Similarly, two bits have four possible states:`00` `01` `10` `11`And to describe the state of two qubits requires four complex amplitudes. We store these amplitudes in a 4D-vector like so:$$ |a\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix} $$The rules of measurement still work in the same way:$$ p(|00\rangle) = |\langle 00 | a \rangle |^2 = |a_{00}|^2$$And the same implications hold, such as the normalisation condition:$$ |a_{00}|^2 + |a_{01}|^2 + |a_{10}|^2 + |a_{11}|^2 = 1$$If we have two separated qubits, we can describe their collective state using the tensor product:$$ |a\rangle = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \quad |b\rangle = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} $$$$ |ba\rangle = |b\rangle \otimes |a\rangle = \begin{bmatrix} b_0 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \\ b_1 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} b_0 a_0 \\ b_0 a_1 \\ b_1 a_0 \\ b_1 a_1 \end{bmatrix}$$And following the same rules, we can use the tensor product to describe the collective state of any number of qubits. Here is an example with three qubits:$$ |cba\rangle = \begin{bmatrix} c_0 b_0 a_0 \\ c_0 b_0 a_1 \\ c_0 b_1 a_0 \\ c_0 b_1 a_1 \\ c_1 b_0 a_0 \\ c_1 b_0 a_1 \\ c_1 b_1 a_0 \\ c_1 b_1 a_1 \\ \end{bmatrix}$$If we have $n$ qubits, we will need to keep track of $2^n$ complex amplitudes. As we can see, these vectors grow exponentially with the number of qubits. This is the reason quantum computers with large numbers of qubits are so difficult to simulate. A modern laptop can easily simulate a general quantum state of around 20 qubits, but simulating 100 qubits is too difficult for the largest supercomputers.Let's look at an example circuit:
###Code
from qiskit import QuantumCircuit, Aer, assemble
from math import pi
import numpy as np
from qiskit.visualization import plot_histogram, plot_bloch_multivector
qc = QuantumCircuit(3)
# Apply H-gate to each qubit:
for qubit in range(3):
qc.h(qubit)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
Each qubit is in the state $|+\rangle$, so we should see the vector:$$ |{+++}\rangle = \frac{1}{\sqrt{8}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ \end{bmatrix}$$
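As a quick sanity check (a sketch that goes beyond the original text), the same vector can be built by hand with NumPy's `kron`, which is exactly the tensor product used above:

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)               # single-qubit |+>
# |+++> = |+> (x) |+> (x) |+>, built from repeated Kronecker products
plus_plus_plus = np.kron(plus, np.kron(plus, plus))
print(plus_plus_plus)                              # eight entries, all equal to 1/sqrt(8)
```

This should agree with the statevector returned by the simulator in the next cell.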
###Code
# Let's see the result
svsim = Aer.get_backend('statevector_simulator')
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(final_state) instead.
from qiskit_textbook.tools import array_to_latex
array_to_latex(final_state, pretext="\\text{Statevector} = ")
###Output
_____no_output_____
###Markdown
And we have our expected result. 1.2 Quick Exercises: 1. Write down the tensor product of the qubits: a) $|0\rangle|1\rangle$ b) $|0\rangle|+\rangle$ c) $|+\rangle|1\rangle$ d) $|-\rangle|+\rangle$ 2. Write the state: $|\psi\rangle = \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{i}{\sqrt{2}}|01\rangle $ as two separate qubits. 2. Single Qubit Gates on Multi-Qubit Statevectors We have seen that an X-gate is represented by the matrix:$$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$And that it acts on the state $|0\rangle$ as so:$$X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1\end{bmatrix}$$but it may not be clear how an X-gate would act on a qubit in a multi-qubit vector. Fortunately, the rule is quite simple; just as we used the tensor product to calculate multi-qubit statevectors, we use the tensor product to calculate matrices that act on these statevectors. For example, in the circuit below:
###Code
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.draw()
###Output
_____no_output_____
###Markdown
we can represent the simultaneous operations (H & X) using their tensor product:$$X|q_1\rangle \otimes H|q_0\rangle = (X\otimes H)|q_1 q_0\rangle$$The operation looks like this:$$X\otimes H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \\ 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\\end{bmatrix}$$Which we can then apply to our 4D statevector $|q_1 q_0\rangle$. This can become quite messy, you will often see the clearer notation:$$X\otimes H = \begin{bmatrix} 0 & H \\ H & 0\\\end{bmatrix}$$Instead of calculating this by hand, we can use Qiskit’s `unitary_simulator` to calculate this for us. The unitary simulator multiplies all the gates in our circuit together to compile a single unitary matrix that performs the whole quantum circuit:
###Code
usim = Aer.get_backend('unitary_simulator')
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
###Output
_____no_output_____
###Markdown
and view the results:
###Code
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(unitary) instead.
from qiskit_textbook.tools import array_to_latex
array_to_latex(unitary, pretext="\\text{Circuit = }\n")
###Output
_____no_output_____
###Markdown
If we want to apply a gate to only one qubit at a time (such as in the circuit below), we describe this using tensor product with the identity matrix, e.g.:$$ X \otimes I $$
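Before running the simulator, here is a quick check by hand (a sketch, not part of the original notebook) that assembles the same matrix with NumPy:

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
I = np.eye(2)
# Qiskit puts q1 on the left of the tensor product, so X on q1 (and nothing on q0) is X (x) I
print(np.kron(X, I))
```

The unitary computed by the simulator below should match this matrix.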
###Code
qc = QuantumCircuit(2)
qc.x(1)
qc.draw()
# Simulate the unitary
usim = Aer.get_backend('unitary_simulator')
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
# Display the results:
array_to_latex(unitary, pretext="\\text{Circuit = } ")
###Output
_____no_output_____
###Markdown
We can see Qiskit has performed the tensor product:$$X \otimes I =\begin{bmatrix} 0 & I \\ I & 0\\\end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\\end{bmatrix}$$ 2.1 Quick Exercises: 1. Calculate the single qubit unitary ($U$) created by the sequence of gates: $U = XZH$. Use Qiskit's unitary simulator to check your results.2. Try changing the gates in the circuit above. Calculate their tensor product, and then check your answer using the unitary simulator.**Note:** Different books, softwares and websites order their qubits differently. This means the tensor product of the same circuit can look very different. Try to bear this in mind when consulting other sources. 3. Multi-Qubit Gates Now we know how to represent the state of multiple qubits, we are now ready to learn how qubits interact with each other. An important two-qubit gate is the CNOT-gate. 3.1 The CNOT-Gate You have come across this gate before in _[The Atoms of Computation](../ch-states/atoms-computation.html)._ This gate is a conditional gate that performs an X-gate on the second qubit (target), if the state of the first qubit (control) is $|1\rangle$. The gate is drawn on a circuit like this, with `q0` as the control and `q1` as the target:
###Code
qc = QuantumCircuit(2)
# Apply CNOT
qc.cx(0,1)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
When our qubits are not in superposition of $|0\rangle$ or $|1\rangle$ (behaving as classical bits), this gate is very simple and intuitive to understand. We can use the classical truth table:| Input (t,c) | Output (t,c) ||:-----------:|:------------:|| 00 | 00 || 01 | 11 || 10 | 10 || 11 | 01 |And acting on our 4D-statevector, it has one of the two matrices:$$\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix}, \quad\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix}$$depending on which qubit is the control and which is the target. Different books, simulators and papers order their qubits differently. In our case, the left matrix corresponds to the CNOT in the circuit above. This matrix swaps the amplitudes of $|01\rangle$ and $|11\rangle$ in our statevector:$$ |a\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix}, \quad \text{CNOT}|a\rangle = \begin{bmatrix} a_{00} \\ a_{11} \\ a_{10} \\ a_{01} \end{bmatrix} \begin{matrix} \\ \leftarrow \\ \\ \leftarrow \end{matrix}$$We have seen how this acts on classical states, but let’s now see how it acts on a qubit in superposition. We will put one qubit in the state $|+\rangle$:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
qc.draw()
# Let's see the result:
svsim = Aer.get_backend('statevector_simulator')
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# Print the statevector neatly:
array_to_latex(final_state, pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
As expected, this produces the state $|0\rangle \otimes |{+}\rangle = |0{+}\rangle$:$$|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)$$And let’s see what happens when we apply the CNOT gate:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
# Apply a CNOT:
qc.cx(0,1)
qc.draw()
# Let's get the result:
qobj = assemble(qc)
result = svsim.run(qobj).result()
# Print the statevector neatly:
final_state = result.get_statevector()
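# Optional cross-check (a sketch added here, not part of the original cell):
# apply the CNOT matrix from the text to |0+> = (|00> + |01>)/sqrt(2) with plain
# numpy and compare it with the simulator's statevector.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]])                    # control = q0, target = q1
zero_plus = np.array([1, 1, 0, 0]) / np.sqrt(2)
print(np.allclose(CNOT @ zero_plus, np.asarray(final_state)))   # expect True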
array_to_latex(final_state, pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
We see we have the state:$$\text{CNOT}|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This state is very interesting to us, because it is _entangled._ This leads us neatly on to the next section. 3.2 Entangled States We saw in the previous section we could create the state:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This is known as a _Bell_ state. We can see that this state has 50% probability of being measured in the state $|00\rangle$, and 50% chance of being measured in the state $|11\rangle$. Most interestingly, it has a **0%** chance of being measured in the states $|01\rangle$ or $|10\rangle$. We can see this in Qiskit:
###Code
plot_histogram(result.get_counts())
###Output
_____no_output_____
###Markdown
This combined state cannot be written as two separate qubit states, which has interesting implications. Although our qubits are in superposition, measuring one will tell us the state of the other and collapse its superposition. For example, if we measured the top qubit and got the state $|1\rangle$, the collective state of our qubits changes like so:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \quad \xrightarrow[]{\text{measure}} \quad |11\rangle$$Even if we separated these qubits light-years away, measuring one qubit collapses the superposition and appears to have an immediate effect on the other. This is the [‘spooky action at a distance’](https://en.wikipedia.org/wiki/Quantum_nonlocality) that upset so many physicists in the early 20th century.It’s important to note that the measurement result is random, and the measurement statistics of one qubit are **not** affected by any operation on the other qubit. Because of this, there is **no way** to use shared quantum states to communicate. This is known as the no-communication theorem.[1] 3.3 Visualizing Entangled StatesWe have seen that this state cannot be written as two separate qubit states, this also means we lose information when we try to plot our state on separate Bloch spheres:
###Code
plot_bloch_multivector(final_state)
###Output
_____no_output_____
###Markdown
Given how we defined the Bloch sphere in the earlier chapters, it may not be clear how Qiskit even calculates the Bloch vectors with entangled qubits like this. In the single-qubit case, the position of the Bloch vector along an axis nicely corresponds to the expectation value of measuring in that basis. If we take this as _the_ rule of plotting Bloch vectors, we arrive at this conclusion above. This shows us there is _no_ single-qubit measurement basis for which a specific measurement is guaranteed. This contrasts with our single qubit states, in which we could always pick a single-qubit basis. Looking at the individual qubits in this way, we miss the important effect of correlation between the qubits. We cannot distinguish between different entangled states. For example, the two states:$$\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle) \quad \text{and} \quad \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$will both look the same on these separate Bloch spheres, despite being very different states with different measurement outcomes.How else could we visualize this statevector? This statevector is simply a collection of four amplitudes (complex numbers), and there are endless ways we can map this to an image. One such visualization is the _Q-sphere,_ here each amplitude is represented by a blob on the surface of a sphere. The size of the blob is proportional to the magnitude of the amplitude, and the colour is proportional to the phase of the amplitude. The amplitudes for $|00\rangle$ and $|11\rangle$ are equal, and all other amplitudes are 0:
###Code
from qiskit.visualization import plot_state_qsphere
plot_state_qsphere(final_state)
###Output
_____no_output_____
###Markdown
Here we can clearly see the correlation between the qubits. The Q-sphere's shape has no significance, it is simply a nice way of arranging our blobs; the number of `0`s in the state is proportional to the state's position on the Z-axis, so here we can see the amplitude of $|00\rangle$ is at the top pole of the sphere, and the amplitude of $|11\rangle$ is at the bottom pole of the sphere. 3.4 Exercises: 1. Create a quantum circuit that produces the Bell state: $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$. Use the statevector simulator to verify your result. 2. The circuit you created in question 1 transforms the state $|00\rangle$ to $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$, calculate the unitary of this circuit using Qiskit's simulator. Verify this unitary does in fact perform the correct transformation. 3. Think about other ways you could represent a statevector visually. Can you design an interesting visualization from which you can read the magnitude and phase of each amplitude? 4. References[1] Asher Peres, Daniel R. Terno, _Quantum Information and Relativity Theory,_ 2004, https://arxiv.org/abs/quant-ph/0212023
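Returning to the Q-sphere picture above, colour encodes phase. A small sketch (using the same simulator pattern as the rest of this notebook, and not part of the original text) that flips the sign of the $|11\rangle$ amplitude with a Z-gate: the blob sizes stay the same, but one blob changes colour.

```python
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0,1)
qc.z(1)   # (|00> + |11>)/sqrt(2)  ->  (|00> - |11>)/sqrt(2)
qobj = assemble(qc)
state = svsim.run(qobj).result().get_statevector()
plot_state_qsphere(state)
```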
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Multiple Qubits and Entangled States Single qubits are interesting, but individually they offer no computational advantage. We will now look at how we represent multiple qubits, and how these qubits can interact with each other. We have seen how we can represent the state of a qubit using a 2D-vector, now we will see how we can represent the state of multiple qubits. Contents1. [Representing Multi-Qubit States](represent) 1.1 [Exercises](ex1)2. [Single Qubit Gates on Multi-Qubit Statevectors](single-qubit-gates) 2.1 [Exercises](ex2)3. [Multi-Qubit Gates](multi-qubit-gates) 3.1 [The CNOT-gate](cnot) 3.2 [Entangled States](entangled) 3.3 [Visualizing Entangled States](visual) 3.4 [Exercises](ex3) 1. Representing Multi-Qubit States We saw that a single bit has two possible states, and a qubit state has two complex amplitudes. Similarly, two bits have four possible states:`00` `01` `10` `11`And to describe the state of two qubits requires four complex amplitudes. We store these amplitudes in a 4D-vector like so:$$ |a\rangle = a_{00}|00\rangle + a_{01}|01\rangle + a_{10}|10\rangle + a_{11}|11\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix} $$The rules of measurement still work in the same way:$$ p(|00\rangle) = |\langle 00 | a \rangle |^2 = |a_{00}|^2$$And the same implications hold, such as the normalisation condition:$$ |a_{00}|^2 + |a_{01}|^2 + |a_{10}|^2 + |a_{11}|^2 = 1$$If we have two separated qubits, we can describe their collective state using the tensor product:$$ |a\rangle = \begin{bmatrix} a_0 \\ a_1 \end{bmatrix}, \quad |b\rangle = \begin{bmatrix} b_0 \\ b_1 \end{bmatrix} $$$$ |ba\rangle = |b\rangle \otimes |a\rangle = \begin{bmatrix} b_0 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \\ b_1 \times \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} b_0 a_0 \\ b_0 a_1 \\ b_1 a_0 \\ b_1 a_1 \end{bmatrix}$$And following the same rules, we can use the tensor product to describe the collective state of any number of qubits. Here is an example with three qubits:$$ |cba\rangle = \begin{bmatrix} c_0 b_0 a_0 \\ c_0 b_0 a_1 \\ c_0 b_1 a_0 \\ c_0 b_1 a_1 \\ c_1 b_0 a_0 \\ c_1 b_0 a_1 \\ c_1 b_1 a_0 \\ c_1 b_1 a_1 \\ \end{bmatrix}$$If we have $n$ qubits, we will need to keep track of $2^n$ complex amplitudes. As we can see, these vectors grow exponentially with the number of qubits. This is the reason quantum computers with large numbers of qubits are so difficult to simulate. A modern laptop can easily simulate a general quantum state of around 20 qubits, but simulating 100 qubits is too difficult for the largest supercomputers.Let's look at an example circuit:
###Code
from qiskit import QuantumCircuit, Aer, assemble
from math import pi
import numpy as np
from qiskit.visualization import plot_histogram, plot_bloch_multivector
qc = QuantumCircuit(3)
# Apply H-gate to each qubit:
for qubit in range(3):
qc.h(qubit)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
Each qubit is in the state $|+\rangle$, so we should see the vector:$$ |{+++}\rangle = \frac{1}{\sqrt{8}}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ \end{bmatrix}$$
###Code
# Let's see the result
svsim = Aer.get_backend('statevector_simulator')
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(final_state) instead.
from qiskit_textbook.tools import array_to_latex
array_to_latex(final_state, pretext="\\text{Statevector} = ")
###Output
_____no_output_____
###Markdown
And we have our expected result. 1.2 Quick Exercises: 1. Write down the tensor product of the qubits: a) $|0\rangle|1\rangle$ b) $|0\rangle|+\rangle$ c) $|+\rangle|1\rangle$ d) $|-\rangle|+\rangle$ 2. Write the state: $|\psi\rangle = \tfrac{1}{\sqrt{2}}|00\rangle + \tfrac{i}{\sqrt{2}}|01\rangle $ as two separate qubits. 2. Single Qubit Gates on Multi-Qubit Statevectors We have seen that an X-gate is represented by the matrix:$$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$And that it acts on the state $|0\rangle$ as so:$$X|0\rangle = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1\end{bmatrix}$$but it may not be clear how an X-gate would act on a qubit in a multi-qubit vector. Fortunately, the rule is quite simple; just as we used the tensor product to calculate multi-qubit statevectors, we use the tensor product to calculate matrices that act on these statevectors. For example, in the circuit below:
###Code
qc = QuantumCircuit(2)
qc.h(0)
qc.x(1)
qc.draw()
###Output
_____no_output_____
###Markdown
we can represent the simultaneous operations (H & X) using their tensor product:$$X|q_1\rangle \otimes H|q_0\rangle = (X\otimes H)|q_1 q_0\rangle$$The operation looks like this:$$X\otimes H = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \otimes \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \\ 1 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} & 0 \times \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & -1 \\ 1 & 1 & 0 & 0 \\ 1 & -1 & 0 & 0 \\\end{bmatrix}$$Which we can then apply to our 4D statevector $|q_1 q_0\rangle$. This can become quite messy, you will often see the clearer notation:$$X\otimes H = \begin{bmatrix} 0 & H \\ H & 0\\\end{bmatrix}$$Instead of calculating this by hand, we can use Qiskit’s `unitary_simulator` to calculate this for us. The unitary simulator multiplies all the gates in our circuit together to compile a single unitary matrix that performs the whole quantum circuit:
###Code
usim = Aer.get_backend('unitary_simulator')
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
###Output
_____no_output_____
###Markdown
and view the results:
###Code
# In Jupyter Notebooks we can display this nicely using Latex.
# If not using Jupyter Notebooks you may need to remove the
# array_to_latex function and use print(unitary) instead.
from qiskit_textbook.tools import array_to_latex
array_to_latex(unitary, pretext="\\text{Circuit = }\n")
###Output
_____no_output_____
###Markdown
If we want to apply a gate to only one qubit at a time (such as in the circuit below), we describe this using tensor product with the identity matrix, e.g.:$$ X \otimes I $$
###Code
qc = QuantumCircuit(2)
qc.x(1)
qc.draw()
# Simulate the unitary
usim = Aer.get_backend('unitary_simulator')
qobj = assemble(qc)
unitary = usim.run(qobj).result().get_unitary()
# Display the results:
array_to_latex(unitary, pretext="\\text{Circuit = } ")
###Output
_____no_output_____
###Markdown
We can see Qiskit has performed the tensor product:$$X \otimes I =\begin{bmatrix} 0 & I \\ I & 0\\\end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\\end{bmatrix}$$ 2.1 Quick Exercises: 1. Calculate the single qubit unitary ($U$) created by the sequence of gates: $U = XZH$. Use Qiskit's unitary simulator to check your results.2. Try changing the gates in the circuit above. Calculate their tensor product, and then check your answer using the unitary simulator.**Note:** Different books, softwares and websites order their qubits differently. This means the tensor product of the same circuit can look very different. Try to bear this in mind when consulting other sources. 3. Multi-Qubit Gates Now we know how to represent the state of multiple qubits, we are now ready to learn how qubits interact with each other. An important two-qubit gate is the CNOT-gate. 3.1 The CNOT-Gate You have come across this gate before in _[The Atoms of Computation](../ch-states/atoms-computation.html)._ This gate is a conditional gate that performs an X-gate on the second qubit (target), if the state of the first qubit (control) is $|1\rangle$. The gate is drawn on a circuit like this, with `q0` as the control and `q1` as the target:
###Code
qc = QuantumCircuit(2)
# Apply CNOT
qc.cx(0,1)
# See the circuit:
qc.draw()
###Output
_____no_output_____
###Markdown
When our qubits are not in superposition of $|0\rangle$ or $|1\rangle$ (behaving as classical bits), this gate is very simple and intuitive to understand. We can use the classical truth table:| Input (t,c) | Output (t,c) ||:-----------:|:------------:|| 00 | 00 || 01 | 11 || 10 | 10 || 11 | 01 |And acting on our 4D-statevector, it has one of the two matrices:$$\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix}, \quad\text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix}$$depending on which qubit is the control and which is the target. Different books, simulators and papers order their qubits differently. In our case, the left matrix corresponds to the CNOT in the circuit above. This matrix swaps the amplitudes of $|01\rangle$ and $|11\rangle$ in our statevector:$$ |a\rangle = \begin{bmatrix} a_{00} \\ a_{01} \\ a_{10} \\ a_{11} \end{bmatrix}, \quad \text{CNOT}|a\rangle = \begin{bmatrix} a_{00} \\ a_{11} \\ a_{10} \\ a_{01} \end{bmatrix} \begin{matrix} \\ \leftarrow \\ \\ \leftarrow \end{matrix}$$We have seen how this acts on classical states, but let’s now see how it acts on a qubit in superposition. We will put one qubit in the state $|+\rangle$:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
qc.draw()
# Let's see the result:
svsim = Aer.get_backend('statevector_simulator')
qobj = assemble(qc)
final_state = svsim.run(qobj).result().get_statevector()
# Print the statevector neatly:
array_to_latex(final_state, pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
As expected, this produces the state $|0\rangle \otimes |{+}\rangle = |0{+}\rangle$:$$|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)$$And let’s see what happens when we apply the CNOT gate:
###Code
qc = QuantumCircuit(2)
# Apply H-gate to the first:
qc.h(0)
# Apply a CNOT:
qc.cx(0,1)
qc.draw()
# Let's get the result:
qobj = assemble(qc)
result = svsim.run(qobj).result()
# Print the statevector neatly:
final_state = result.get_statevector()
array_to_latex(final_state, pretext="\\text{Statevector = }")
###Output
_____no_output_____
###Markdown
We see we have the state:$$\text{CNOT}|0{+}\rangle = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This state is very interesting to us, because it is _entangled._ This leads us neatly on to the next section. 3.2 Entangled States We saw in the previous section we could create the state:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$ This is known as a _Bell_ state. We can see that this state has 50% probability of being measured in the state $|00\rangle$, and 50% chance of being measured in the state $|11\rangle$. Most interestingly, it has a **0%** chance of being measured in the states $|01\rangle$ or $|10\rangle$. We can see this in Qiskit:
###Code
plot_histogram(result.get_counts())
###Output
_____no_output_____
###Markdown
This combined state cannot be written as two separate qubit states, which has interesting implications. Although our qubits are in superposition, measuring one will tell us the state of the other and collapse its superposition. For example, if we measured the top qubit and got the state $|1\rangle$, the collective state of our qubits changes like so:$$\tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \quad \xrightarrow[]{\text{measure}} \quad |11\rangle$$Even if we separated these qubits light-years away, measuring one qubit collapses the superposition and appears to have an immediate effect on the other. This is the [‘spooky action at a distance’](https://en.wikipedia.org/wiki/Quantum_nonlocality) that upset so many physicists in the early 20th century.It’s important to note that the measurement result is random, and the measurement statistics of one qubit are **not** affected by any operation on the other qubit. Because of this, there is **no way** to use shared quantum states to communicate. This is known as the no-communication theorem.[1] 3.3 Visualizing Entangled StatesWe have seen that this state cannot be written as two separate qubit states, this also means we lose information when we try to plot our state on separate Bloch spheres:
###Code
plot_bloch_multivector(final_state)
###Output
_____no_output_____
###Markdown
Given how we defined the Bloch sphere in the earlier chapters, it may not be clear how Qiskit even calculates the Bloch vectors with entangled qubits like this. In the single-qubit case, the position of the Bloch vector along an axis nicely corresponds to the expectation value of measuring in that basis. If we take this as _the_ rule of plotting Bloch vectors, we arrive at this conclusion above. This shows us there is _no_ single-qubit measurement basis for which a specific measurement is guaranteed. This contrasts with our single qubit states, in which we could always pick a single-qubit basis. Looking at the individual qubits in this way, we miss the important effect of correlation between the qubits. We cannot distinguish between different entangled states. For example, the two states:$$\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle) \quad \text{and} \quad \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$$will both look the same on these separate Bloch spheres, despite being very different states with different measurement outcomes.How else could we visualize this statevector? This statevector is simply a collection of four amplitudes (complex numbers), and there are endless ways we can map this to an image. One such visualization is the _Q-sphere,_ here each amplitude is represented by a blob on the surface of a sphere. The size of the blob is proportional to the magnitude of the amplitude, and the colour is proportional to the phase of the amplitude. The amplitudes for $|00\rangle$ and $|11\rangle$ are equal, and all other amplitudes are 0:
###Code
from qiskit.visualization import plot_state_qsphere
plot_state_qsphere(final_state)
###Output
_____no_output_____
###Markdown
Here we can clearly see the correlation between the qubits. The Q-sphere's shape has no significance, it is simply a nice way of arranging our blobs; the number of `0`s in the state is proportional to the state's position on the Z-axis, so here we can see the amplitude of $|00\rangle$ is at the top pole of the sphere, and the amplitude of $|11\rangle$ is at the bottom pole of the sphere. 3.4 Exercises: 1. Create a quantum circuit that produces the Bell state: $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$. Use the statevector simulator to verify your result. 2. The circuit you created in question 1 transforms the state $|00\rangle$ to $\tfrac{1}{\sqrt{2}}(|01\rangle + |10\rangle)$, calculate the unitary of this circuit using Qiskit's simulator. Verify this unitary does in fact perform the correct transformation. 3. Think about other ways you could represent a statevector visually. Can you design an interesting visualization from which you can read the magnitude and phase of each amplitude? 4. References[1] Asher Peres, Daniel R. Terno, _Quantum Information and Relativity Theory,_ 2004, https://arxiv.org/abs/quant-ph/0212023
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____ |
src/chemistry/overview.ipynb | ###Markdown
Chemistry OverviewA wide range of tools exist that support workflows in Chemistry, from looking up the structure and properties of a wide variety of elements and compounds, to visualising their structure using interactive HTML widgets.Using automation to help us generate the lookup of compound structures from their names allows us to create narratives where correctness is guaranteed when moving from one consideration, such as the formula for a particular compound given its common name, to the visualisation of that structure, or to a consideration of its physical, structural or chemical properties.:::{admonition} Hiding Code:class: tipThe following example includes the code inline to show how the automation proceeds. In a finished work, the code could be hidden but revealable, for example in collapsed code cells, or have all mention of the code removed from the final output document.::: Example - Describing a CompoundAs an example of providing a generated description of a compound simply from its name, let's consider *ethanol*. (We could just as easily have picked another compound, such as *methane* or *nitric acid*.)Let's define a reference to the compound:
###Code
#Provide the common name of a compound
compound_name = "ethanol"
###Output
_____no_output_____
###Markdown
At the current time, whilst R based `bookdown` workflows *do* support inline embedding of code variables in markdown text, interactive Jupyter notebook markdown cells don't support such a feature (although there is ongoing work to provide this sort of support) other than by extension.However, it is possible to embed variables into markdown text in Jupyter Book workflows using [`jupyter-glue`](https://jupyterbook.org/content/executable/output-insert.html).
###Code
from myst_nb import glue
# Create a reference to a value we can use in our markdown text
glue("compound", compound_name, display=False)
###Output
_____no_output_____
###Markdown
Having declared the compound we want to investigate in code, we can refer to it directly inline in our text using a ``{glue:}`compound` `` reference: {glue:text}`compound`.We can also automatically look-up various properties associated with the compound, such as its chemical formula or a universal compound identifier.
###Code
import pubchempy as pcp
_compound = pcp.get_compounds(compound_name, 'name')[0]
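# A small sketch beyond the original cell: the pubchempy Compound object exposes
# further properties (attribute names as per the pubchempy documentation) that
# could be glued into the narrative in the same way as the formula is below.
print(_compound.molecular_weight, _compound.iupac_name, _compound.canonical_smiles)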
###Output
_____no_output_____
###Markdown
The formula can be rendered in an appropriate typographical form from a LaTeX representation of the formula.
###Code
from IPython.display import Latex
Latex('$\ce{'+_compound.molecular_formula+'}$')
###Output
_____no_output_____
###Markdown
$$\require{mhchem}$$It is also possible to create `glue` references to things like the compound LaTeX equation.Using the `mhchem` *MathJax* package, we can easily add support for inline rendering of chemical equations, just as we can render mathematical equations:
###Code
_compound_latex = '$\ce{'+_compound.molecular_formula+'}$'
# Save a reference to the Latex equivalent of the compound formula
glue("compoundLatex", Latex(_compound_latex), display=False)
###Output
_____no_output_____
###Markdown
This means that we can render the chemical equation for our chosen compound ({glue:text}`compound`) in a markdown content block: ```{glue:math} compoundLatex:label: eq-sym```We can also render a free standing HTML+JS 3D interactive version of the molecule into the page from the previously retrieved universal compound ID:
###Code
import py3Dmol
# Lookup a molecule using its CID (PubChem Compound Identification) code
p=py3Dmol.view(query = f'cid:{_compound.cid}')
# Set the render style
p.setStyle({'stick': {'radius': .1}, 'sphere': {'scale': 0.25}})
p.show()
###Output
_____no_output_____ |
Visualization with Seaborn.ipynb | ###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], normed=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', size=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], normed=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', size=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], density=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', height=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import matplotlib.pyplot as plt
# crucial: seaborn has to be 'started'
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], normed=True, alpha=0.5)
# with seaborn we make the density plots
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
# attempt to plot relationships between the petal and sepal dimensions
# of "setosa"-type irises
iris = sns.load_dataset("iris")
iris.head()
# this crosses all the numeric categories given the species 'setosa'
sns.pairplot(iris, hue='species', size=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], density=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', height=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], normed=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', size=3.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import numpy as np
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
plt.hist(data[col], normed=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', size=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____
###Markdown
Seaborn examples, taken from https://jakevdp.github.io/PythonDataScienceHandbook/04.14-visualization-with-seaborn.html
###Code
import numpy as np
import pandas as pd
import seaborn as sns
# Turning on notebook plots -- just for use in jupyter notebooks.
import matplotlib
matplotlib.use('nbagg')
import matplotlib.pyplot as plt
sns.set()
data = np.random.multivariate_normal([0, 0], [[5, 2], [2, 2]], size=2000)
data = pd.DataFrame(data, columns=['x', 'y'])
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
axes.hist(data[col], normed=True, alpha=0.5)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
for col in 'xy':
axes = sns.kdeplot(data[col], shade=True)
#fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(9, 4))
with sns.axes_style('white'):
sns.jointplot("x", "y", data, kind='hex')
iris = sns.load_dataset("iris")
iris.head()
sns.pairplot(iris, hue='species', size=2.5);
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
with sns.axes_style(style='ticks'):
g = sns.factorplot("day", "total_bill", "sex", data=tips, kind="box")
g.set_axis_labels("Day", "Total Bill");
sns.jointplot("total_bill", "tip", data=tips, kind='reg');
url = 'https://raw.githubusercontent.com/jakevdp/marathon-data/master/marathon-data.csv'
data = pd.read_csv(url)
data.head()
###Output
_____no_output_____ |
Complete Code/attempt4/attempt4_1.ipynb | ###Markdown
AskReddit Troll Question Detection Challenge Imports
###Code
import numpy as np
import pandas as pd
import sklearn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
import re
import nltk # for tokenizing the paragraphs in sentences and sentences in words
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
train_df = pd.read_csv("train.csv")
# train_df.head()
# df = train_df[(train_df == 1).any(axis=1)]
# print(df['question_text'].tolist())
###Output
_____no_output_____
###Markdown
Preprocessing Dropping the qid
###Code
train_df.drop(columns=["qid"],inplace=True)
# train_df.head()
###Output
_____no_output_____
###Markdown
Data Balance Check
###Code
import matplotlib.pyplot as plt
# Plotting the distribution for dataset.
ax = train_df.groupby('target').count().plot(kind='bar', title='Distribution of data',legend=False)
ax.set_xticklabels(['0','1'], rotation=0)
###Output
_____no_output_____
###Markdown
Hence, we need to balance the data somehow.- As the data is in string form, we cannot balance it right now.- We cannot duplicate the data here, as that would affect the vectorisation of the base data (we tried, but that didn't work well).- Now we will first vectorize the data and then use data-balancing techniques.
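A minimal sketch of that plan (assuming `imbalanced-learn` is installed; the column names are the ones used in this notebook), shown before the exploratory attempts in the next cell:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from imblearn.over_sampling import SMOTE

vectorizer = TfidfVectorizer(max_features=5000)
X = vectorizer.fit_transform(train_df['question_text'])    # sparse TF-IDF matrix
y = train_df['target']

# oversample the minority (troll) class only after vectorization
X_bal, y_bal = SMOTE(random_state=23).fit_resample(X, y)
```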
###Code
# from imblearn.over_sampling import SMOTE
# sm = SMOTE(random_state=23, sampling_strategy=1.0)
# X_train_sm, y_train_sm = sm.fit_resample(train_df['question_text'], train_df['target'])
# print(len(X_train_sm), len(y_train_sm))
# Above cannot be used here as they are in string format
# -----------------------------------------------------------------------------------------------------------
# minority_class = train_df[train_df['target']==1]
# majority_class = train_df[train_df['target']==0]
# for i in range(14):
# train_df = train_df.append(minority_class, ignore_index=True)
# print(train_df.shape)
# train_df=train_df.sample(frac=1).reset_index(drop=True)
# print(train_df.shape)
# print(train_df.shape)
# print(minority_class.shape)
# print(majority_class.shape)
# print(minority_class[0:100])
# ax = train_df.groupby('target').count().plot(kind='bar', title='Distribution of data',legend=False)
# ax.set_xticklabels(['0','1'], rotation=0)
###Output
_____no_output_____
###Markdown
Cleaning the data- Like removing !?., etc.- converting sentences to lower case
###Code
sentences = train_df['question_text'].tolist()
N = len(sentences)
sentences = sentences[0:N]
i=0
for sentence in sentences:
temp = re.sub('[^a-zA-Z]', ' ', sentence)
temp = temp.lower()
new_sentence = temp.split()
new_sentence = ' '.join(new_sentence)
sentences[i] = new_sentence
# print(new_sentence)
i+=1
###Output
_____no_output_____
###Markdown
Lemmatization- We need to perform Stemming and Lemmatization on the sentences. Lemmatization is preferred as of now (converting words to their meaningful base forms).It became obvious that lemmatization was not working for our data; it was affecting true positives. So we will just remove stop words for now.
###Code
tokenized_sentences = []
for sentence in sentences:
words = nltk.word_tokenize(sentence)
# removing stop words and using list composition
words = [word for word in words if word not in set(stopwords.words('english'))]
# joining words using spaces
tokenized_sentences.append(' '.join(words))
sentences = tokenized_sentences
# print(sentences)
###Output
_____no_output_____
###Markdown
Saving The PreProcessed Data
###Code
Y1 = train_df['target'].to_numpy().astype(np.float64)
Y1 = Y1[:N]
data = [["question_text","target"]]
for i in range(N):
data.append([sentences[i],Y1[i]])
import csv
with open('processed_train_data.csv','w',newline='') as fp:
a = csv.writer(fp, delimiter=',')
a.writerows(data)
###Output
_____no_output_____ |
1-3D-visualization/Problem-1.ipynb | ###Markdown
Problem-1Visualize a PDB structure, set styles, and add labels.
###Code
import py3Dmol
###Output
_____no_output_____
###Markdown
TODO-1Instantiate py3Dmol viewer with PDB structure 1NCA (Neuraminidase-FAB complex)
###Code
... your code here ...
###Output
_____no_output_____
###Markdown
TODO-2Apply the following styles to this structure:* chain N (Neuraminidase): orange cartoon* chain H (Heavy chain): blue sphere* chain L (Light chain): lightblue sphere
###Code
... your code here ...
###Output
_____no_output_____
###Markdown
TODO-3: Add text labels to the three chains
###Code
... your code here ...
###Output
_____no_output_____
###Markdown
Bonus: Set the style for sugar residues MAN, BMA, and NAG to stick and color by a greenCarbon colorscheme.
###Code
... your code here ...
###Output
_____no_output_____
###Markdown
Problem-1Visualize a PDB structure, set styles, and add labels.
###Code
import py3Dmol
###Output
_____no_output_____
###Markdown
TODO-1Instantiate py3Dmol viewer with PDB structure 1NCA (Neuraminidase-FAB complex)
###Code
view = py3Dmol.view(query='pdb:1NCA')
view.show()
###Output
_____no_output_____
###Markdown
TODO-2Apply the following styles to this structure:* chain N (Neuraminidase): orange cartoon* chain H (Heavy chain): blue sphere* chain L (Light chain): lightblue sphere
###Code
view.setStyle({'chain':'N'},{'cartoon': {'color': 'orange'}})
view.setStyle({'chain':'H'},{'sphere': {'color': 'blue'}})
view.setStyle({'chain':'L'},{'sphere': {'color': 'lightblue'}})
view.show()
###Output
_____no_output_____
###Markdown
TODO-3: Add text labels to the three chains
###Code
view.addLabel('Neuraminidase', {'chain': 'N'})
view.addLabel('Heavy chain', {'chain': 'H'})
view.addLabel('Light chain', {'chain': 'L'})
view.show()
###Output
_____no_output_____
###Markdown
Bonus: Set the style for sugar residues MAN, BMA, and NAG to stick and color by a greenCarbon colorscheme.
###Code
view.setStyle({'resn': ['MAN', 'BMA', 'NAG']},{'stick': {'colorscheme': 'greenCarbon'}})
view.show()
###Output
_____no_output_____ |
notebooks/Image Classification/03 - Model Deployment as Web Service.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. Deploy an image classification model in Azure Container Instance (ACI)This tutorial is **part two of a two-part tutorial series**. In the [previous tutorial](img-classification-part1-training.ipynb), you trained machine learning models and then registered a model in your workspace on the cloud. Now, you're ready to deploy the model as a web service in [Azure Container Instances](https://docs.microsoft.com/azure/container-instances/) (ACI). A web service is an image, in this case a Docker image, that encapsulates the scoring logic and the model itself. ACI is a great solution for testing and understanding the workflow. For scalable production deployments, consider using Azure Kubernetes Service. For more information, see [how to deploy and where](https://docs.microsoft.com/azure/machine-learning/service/how-to-deploy-and-where). Set up the environment
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import azureml.core
# display the core SDK version number
print("Azure ML SDK Version: ", azureml.core.VERSION)
from azureml.core import Workspace
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, sep='\t')
###Output
TestMLWS northeurope TestMLWS
###Markdown
Register a local model
###Code
from azureml.core.model import Model
model = Model.register(model_path="sklearn_mnist_model.pkl",
model_name="sklearn_mnist_local",
tags={"data": "mnist", "model": "classification"},
description="Mnist handwriting recognition",
workspace=ws)
###Output
Registering model sklearn_mnist_local
###Markdown
Retrieve the trained model from your Machine Learning WorkspaceYou registered a model in your workspace in the previous tutorial. Now, load this workspace and download the model to your local directory.
###Code
import os
from azureml.core.model import Model
model=Model(ws, 'sklearn_mnist')
model.download(target_dir=os.getcwd(), exist_ok=True)
# verify the downloaded model file
file_path = os.path.join(os.getcwd(), "sklearn_mnist_model.pkl")
os.stat(file_path)
print(model.name, model.description, model.version, sep = '\t')
###Output
sklearn_mnist None 6
###Markdown
Deploy as web serviceOnce you've tested the model and are satisfied with the results, deploy the model as a web service hosted in ACI. To build the correct environment for ACI, provide the following:* A scoring script to show how to use the model* An environment file to show what packages need to be installed* A configuration file to build the ACI* The model you trained before Create scoring scriptCreate the scoring script, called score.py, used by the web service call to show how to use the model.You must include two required functions into the scoring script:* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started. * The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported.
###Code
%%writefile score.py
import json
import numpy as np
import os
import pickle
from sklearn.externals import joblib
from sklearn.linear_model import LogisticRegression
from azureml.core.model import Model
def init():
global model
# retrieve the path to the model file using the model name
model_path = Model.get_model_path('sklearn_mnist')
model = joblib.load(model_path)
def run(raw_data):
data = np.array(json.loads(raw_data)['data'])
# make prediction
y_hat = model.predict(data)
# you can return any data type as long as it is JSON-serializable
return y_hat.tolist()
###Output
Writing score.py
###Markdown
Create environment file: Next, create an environment file, called myenv.yml, that specifies all of the script's package dependencies. This file is used to ensure that all of those dependencies are installed in the Docker image. This model needs `scikit-learn` and `azureml-sdk`.
###Code
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_conda_package("scikit-learn")
with open("myenv.yml","w") as f:
f.write(myenv.serialize_to_string())
###Output
_____no_output_____
###Markdown
Review the content of the `myenv.yml` file.
###Code
with open("myenv.yml","r") as f:
print(f.read())
###Output
# Conda environment specification. The dependencies defined in this file will
# be automatically provisioned for runs with userManagedDependencies=False.
# Details about the Conda environment file format:
# https://conda.io/docs/user-guide/tasks/manage-environments.html#create-env-file-manually
name: project_environment
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- pip:
# Required packages for AzureML execution, history, and data preparation.
- azureml-defaults
- scikit-learn
channels:
- conda-forge
###Markdown
Create configuration file: Create a deployment configuration file and specify the number of CPUs and gigabytes of RAM needed for your ACI container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you feel you need more later, you would have to recreate the image and redeploy the service.
###Code
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={"data": "MNIST", "method" : "sklearn"},
description='Predict MNIST with sklearn')
###Output
_____no_output_____
###Markdown
Deploy in ACIEstimated time to complete: **about 7-8 minutes**Configure the image and deploy. The following code goes through these steps:1. Build an image using: * The scoring file (`score.py`) * The environment file (`myenv.yml`) * The model file1. Register that image under the workspace. 1. Send the image to the ACI container.1. Start up a container in ACI using the image.1. Get the web service HTTP endpoint.
###Code
%%time
from azureml.core.webservice import Webservice
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(runtime= "python",
entry_script="score.py",
conda_file="myenv.yml")
service = Model.deploy(workspace=ws,
name='sklearn-mnist-svc',
models=[model],
inference_config=inference_config,
deployment_config=aciconfig)
service.wait_for_deployment(show_output=True)
###Output
Running.......................
SucceededACI service creation operation finished, operation "Succeeded"
CPU times: user 271 ms, sys: 59 ms, total: 330 ms
Wall time: 2min 3s
###Markdown
Get the scoring web service's HTTP endpoint, which accepts REST client calls. This endpoint can be shared with anyone who wants to test the web service or integrate it into an application.
###Code
print(service.scoring_uri)
###Output
http://8abb772f-a950-4d9e-9e48-b6ab1936842d.northeurope.azurecontainer.io/score
###Markdown
Test deployed serviceEarlier you scored all the test data with the local version of the model. Now, you can test the deployed model with a random sample of 30 images from the test data. The following code goes through these steps:1. Send the data as a JSON array to the web service hosted in ACI. 1. Use the SDK's `run` API to invoke the service. You can also make raw calls using any HTTP tool such as curl.1. Print the returned predictions and plot them along with the input images. Red font and inverse image (white on black) is used to highlight the misclassified samples. Since the model accuracy is high, you might have to run the following code a few times before you can see a misclassified sample.
###Code
from utils import load_data
import os
data_folder = os.path.join(os.getcwd(), 'data')
# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster
X_test = load_data(os.path.join(data_folder, 'test-images.gz'), False) / 255.0
y_test = load_data(os.path.join(data_folder, 'test-labels.gz'), True).reshape(-1)
import json
# find 30 random samples from test set
n = 30
sample_indices = np.random.permutation(X_test.shape[0])[0:n]
test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
test_samples = bytes(test_samples, encoding='utf8')
# predict using the deployed model
result = service.run(input_data=test_samples)
# compare actual value vs. the predicted values:
i = 0
plt.figure(figsize = (20, 1))
for s in sample_indices:
plt.subplot(1, n, i + 1)
plt.axhline('')
plt.axvline('')
# use different color for misclassified sample
font_color = 'red' if y_test[s] != result[i] else 'black'
clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
plt.text(x=10, y =-10, s=result[i], fontsize=18, color=font_color)
plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
i = i + 1
plt.show()
###Output
_____no_output_____
###Markdown
You can also send raw HTTP request to test the web service.
###Code
import requests
# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test)-1)
input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
headers = {'Content-Type':'application/json'}
# for AKS deployment you'd need to add the service key in the header as well
# api_key = service.get_key()
# headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key)}
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
#print("input data:", input_data)
print("label:", y_test[random_index])
print("prediction:", resp.text)
###Output
POST to url http://8abb772f-a950-4d9e-9e48-b6ab1936842d.northeurope.azurecontainer.io/score
label: 6
prediction: [6]
###Markdown
Clean up resources: To keep the resource group and workspace for other tutorials and exploration, you can delete only the ACI deployment using this API call:
###Code
service.delete()
###Output
_____no_output_____ |
sparksql/2018C1_1_GCPD.ipynb | ###Markdown
**THE ASSIGNMENT FOR THIS EXERCISE ASKS FOR IT TO BE SOLVED WITH PANDAS** **THE SPARKSQL SOLUTION IS INCLUDED ANYWAY**> https://piazza.com/class_profile/get_resource/jkr2voxi1yw4wt/jkr2vqu7n114zx The GCPD (Gotham City Police Dept) collects information about the police cases that take place in Gotham City. This information is stored in a dataframe with the following format: (fecha, id_caso, descripcion, estado_caso, categoria, latitud, longitud). The possible states of a case are 1: open case, 2: solved case, 3: closed without resolution. Dates are in the YYYY-MM-DD format. In addition, Commissioner Gordon keeps a detailed record of the cases for which the bat-signal was activated to ask for help from the vigilante, Batman. This information is stored in a dataframe with the following format: (id_caso, respuesta), where the respuesta field indicates whether the signal got a positive (1) or negative (0) response from him. With this information, the department in charge of the GCPD's official statistics wants to analyse the following: - Resolution rate of the police force's cases per case category (considering those cases in which Batman did not take part). ---
###Code
# Set-up y vista rápida de las dos bases de datos truchas
import pyspark
spark = pyspark.sql.SparkSession.builder.appName("Batman").getOrCreate()
df_gcpd = spark.read.csv('../data/2018C1_GCPD.csv', header=True)
df_gcpd.createOrReplaceTempView('GCPD')
df_gordon = spark.read.csv('../data/2018C1_gordon.csv', header=True)
df_gordon.createOrReplaceTempView('GORDON')
query = "SELECT GCPD.categoria, (SUM(case when GCPD.estado_caso=2 then 1 else 0 end) / COUNT(GCPD.estado_caso)) as tasa_resolucion \
from GCPD left join GORDON on GCPD.id_caso = GORDON.id_caso \
where GORDON.respuesta=0 or GORDON.respuesta is null \
group by GCPD.categoria"
spark.sql(query).show()
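
# Hedged sketch of the pandas version the assignment actually asks for (it reuses the same
# CSV files and column names as the SparkSQL solution above; illustrative only):
import pandas as pd
pd_gcpd = pd.read_csv('../data/2018C1_GCPD.csv')
pd_gordon = pd.read_csv('../data/2018C1_gordon.csv')
merged = pd_gcpd.merge(pd_gordon, on='id_caso', how='left')
# keep only the cases in which Batman did not take part (no signal, or a negative response)
no_batman = merged[merged['respuesta'].isna() | (merged['respuesta'] == 0)]
# resolution rate per category: share of cases with estado_caso == 2 (solved)
tasa_resolucion = no_batman.groupby('categoria')['estado_caso'].apply(lambda s: (s == 2).mean())
print(tasa_resolucion)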
###Output
_____no_output_____ |
Implementations/neuralNetwork/.ipynb_checkpoints/Neural Network - Regression - Redacted-checkpoint.ipynb | ###Markdown
Neural NetworksThe main purpose of this notebook is to help you understand how the process of backpropagation helps us to train a neural network by tuning the weights to maximise predictive accuracy. Readers should be familiar with the general concept of neural networks before attempting to fill in the notebook. For a more formal explanation of backpropagation, Bishop's [*Pattern Recognition and Machine Learning*](http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf) covers it in detail in section 5.3. I found it to be useful to sit down with a pen and paper to draw the network diagrams and map how the inputs and error move forwards and backwards through the network respectively! Import Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
Create Dataset: Use the generative process of a linear model - i.e. a weighted sum of the features plus Gaussian noise
###Code
n = 1000 #Number of observations in the training set
p = 5 #Number of parameters, including intercept
beta = np.random.uniform(-10, 10, p) #Randomly initialise true parameters
for i, b in enumerate(beta):
print(f'\u03B2{i}: {round(b, 3)}')
X = np.random.uniform(0,10,(n,(p-1))) #Randomly sample features X1-X4
X0 = np.array([1]*n).reshape((n,1)) #X0 is our intercept so always equal to 1
X = np.concatenate([X0,X], axis = 1) #Join intercept to other variables to form feature matrix
Y = np.matmul(X,beta) + np.random.normal(0,10,n) #Linear combination of the features plus a normal error term
#Concatenate to create dataframe
dataFeatures = pd.DataFrame(X)
dataFeatures.columns = [f'X{i}' for i in range(p)] #Name feature columns
dataTarget = pd.DataFrame(Y)
dataTarget.columns = ['Y'] #Name target
data = pd.concat([dataFeatures, dataTarget], axis = 1)
###Output
_____no_output_____
###Markdown
Quickly visualise the dataset
###Code
print(f'Number of Rows: {data.shape[0]}')
print(f'Number of Columns: {data.shape[1]}')
data.head()
###Output
_____no_output_____
###Markdown
Create a neural network: We'll use a single hidden layer and tanh activation function
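###Markdown
A hedged reference for filling in the blanks below, assuming the usual mean-squared-error setup for a tanh hidden layer feeding a linear output (see Bishop, section 5.3): forward pass $a = XW^{(1)}$, $z = \tanh(a)$, $\hat{y} = [z,\,1]\,w^{(2)}$, where the appended column of ones carries the hidden-layer bias; backward pass $\delta_{out} = \hat{y} - y$ and $\delta_{hid,ij} = (1 - z_{ij}^2)\,\delta_{out,i}\,w^{(2)}_j$ over the non-bias hidden units; gradients $\nabla W^{(1)} = \tfrac{1}{n}X^T\delta_{hid}$ and $\nabla w^{(2)} = \tfrac{1}{n}[z,\,1]^T\delta_{out}$. These are standard results stated in generic notation, not the exact attribute names used in the class below.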
###Code
class NeuralNetwork:
def __init__(self, data, target, features, hiddenSize, trainTestRatio = 0.9):
self.target = target
self.features = features
#Split up data into a training and testing set
self.train, self.test = train_test_split(data, test_size=1-trainTestRatio)
self.input = np.array(self.train[self.features])
self.hiddenSize = hiddenSize
self.weightsInputToHidden = np.random.normal(size = (self.input.shape[1],hiddenSize))
self.weightsHiddenToOutput = np.random.normal(size = (hiddenSize + 1 ,)) #+1 is for the bias term
self.y = np.array(self.train[self.target])
self.output = np.zeros(self.y.shape)
#Standardise training set
self.scaler = StandardScaler()
self.scaler.fit(self.input)
self.input = self.scaler.transform(self.input)
#Pre-allocate weight derivatives
self.dWeightsInputToHidden = np.ones(self.weightsInputToHidden.shape)
self.dWeightsHiddenToOutput = np.ones(self.weightsHiddenToOutput.shape)
def feedforward(self):
#Compute hidden activations, a, and transform them with tanh activation
self.a = #...
self.z = #...
#Add bias term onto z for the next layer of the network
#Code goes here...
self.z = self.zWithBias
#Compute Output
def backpropagation(self):
normFactor = 1/self.input.shape[0] #Normalising factor for the derivatives
#Compute Deltas
#self.deltaOutput and self.deltaHidden
#Compute Weight derivatives:
self.dWeightsInputToHidden = #...Make sure dimensions match up with weight matrix
self.dWeightsHiddenToOutput =#...
def trainNetwork(self, lr = 0.001, numEpochs = 100):
#Train by feeding the data through the network and then backpropagating error a set number (numEpochs) of times
#Apply gradient descent to update the weights
#Stop training early if the gradients vanish
ep = 0
while ep < numEpochs and (np.linalg.norm(self.dWeightsInputToHidden) + np.linalg.norm(self.dWeightsHiddenToOutput)) > 0.5:
#feedforward and backpropagate
#Update weights
#update ep
print('Training completed')
def predict(self, x):
#Works in the same way as feedforward:
pass
dataInput = np.array(data[['X0', 'X1', 'X2', 'X3', 'X4']])
dataOutput = np.array(data['Y'])
myNN = NeuralNetwork(data, 'Y', ['X0', 'X1', 'X2', 'X3', 'X4'], 3)
myNN.feedforward()
myNN.trainNetwork(lr= 0.001, numEpochs=200000)
###Output
_____no_output_____
###Markdown
Let's see how our model performs: Let's predict the labels of the held-out test set and plot them against the true values
###Code
predTest = myNN.predict(myNN.test[myNN.features])
###Output
_____no_output_____
###Markdown
If the points gather around the line y = x then our model is performing as desired
###Code
plt.scatter(myNN.test[myNN.target], predTest)
plt.plot(np.arange(-100,100), np.arange(-100,100))
plt.xlabel('True Label')
plt.ylabel('Predicted Label (Neural Network)')
plt.show()
###Output
_____no_output_____ |
02_bring_your_own_container/tensorflow_bring_your_own.ipynb | ###Markdown
Building your own TensorFlow containerThis notebook works well with `TensorFlow 2.3 Python 3.7 CPU Optimized` kernel on SageMaker Studio.---With Amazon SageMaker, you can package your own algorithms that can then be trained and deployed in the SageMaker environment. This notebook guides you through an example using TensorFlow that shows you how to build a Docker container for SageMaker and use it for training and inference.By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. 1. [Building your own TensorFlow container](Building-your-own-tensorflow-container) 1. [When should I build my own algorithm container?](When-should-I-build-my-own-algorithm-container?) 1. [Permissions](Permissions) 1. [The example](The-example) 1. [The presentation](The-presentation)1. [Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker](Part-1:-Packaging-and-Uploading-your-Algorithm-for-use-with-Amazon-SageMaker) 1. [An overview of Docker](An-overview-of-Docker) 1. [How Amazon SageMaker runs your Docker container](How-Amazon-SageMaker-runs-your-Docker-container) 1. [Running your container during training](Running-your-container-during-training) 1. [The input](The-input) 1. [The output](The-output) 1. [Running your container during hosting](Running-your-container-during-hosting) 1. [The parts of the sample container](The-parts-of-the-sample-container) 1. [Setup](Setup) 1. [The `Dockerfile`](The-Dockerfile) 1. [Building and registering the container](Building-and-registering-the-container)1. [Part 2: Training and Hosting your Algorithm in Amazon SageMaker](Part-2:-Training-and-Hosting-your-Algorithm-in-Amazon-SageMaker) 1. [Set up the environment](Set-up-the-environment) 1. [Create the session](Create-the-session) 1. [Upload the data for training](Upload-the-data-for-training) 1. [Training On SageMaker](Training-on-SageMaker) 1. [Optional cleanup](Optional-cleanup) 1. [Reference](Reference)_or_ I'm impatient, just [let me see the code](The-Dockerfile)! When should I build my own algorithm container?You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework such as Apache MXNet or TensorFlow that has direct support in SageMaker, you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework. This set of supported frameworks is regularly added to, so you should check the current list to determine whether your algorithm is written in one of these common machine learning environments.Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex, or you need special additions to the framework, building your own container may be the right choice.Some reasons to build an already supported framework container are:1. A specific version isn't supported.2. Configure and install your dependencies and environment.3. Use a different training/hosting solution than provided.This walkthrough shows that it is quite straightforward to build your own container. So you can still use SageMaker even if your use case is not covered by the deep learning containers that we've built for you. PermissionsRunning this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. 
This is because it creates new repositories on Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this, the new permissions will be available immediately. The exampleIn this example we show how to package a custom TensorFlow container with a Python example which works with the CIFAR-10 dataset and uses TensorFlow Serving for inference. However, different inference solutions other than TensorFlow Serving can be used by modifying the docker container.In this example, we use a single image to support training and hosting. This simplifies the procedure because we only need to manage one image for both tasks. Sometimes you may want separate images for training and hosting because they have different requirements. In this case, separate the parts discussed below into separate `Dockerfiles` and build two images. Choosing whether to use a single image or two images is a matter of what is most convenient for you to develop and manage.If you're only using Amazon SageMaker for training or hosting, but not both, only the functionality used needs to be built into your container.[CIFAR-10]: http://www.cs.toronto.edu/~kriz/cifar.html The presentationThis presentation is divided into two parts: _building_ the container and _using_ the container. Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker An overview of DockerIf you're familiar with Docker already, you can skip ahead to the next section.For many data scientists, Docker containers are a new technology. But they are not difficult and can significantly simply the deployment of your software packages. Docker provides a simple way to package arbitrary code into an _image_ that is totally self-contained. Once you have an image, you can use Docker to run a _container_ based on that image. Running a container is just like running a program on the machine except that the container creates a fully self-contained environment for the program to run. Containers are isolated from each other and from the host environment, so the way your program is set up is the way it runs, no matter where you run it.Docker is more powerful than environment managers like `conda` or `virtualenv` because (a) it is completely language independent and (b) it comprises your whole operating environment, including startup commands, and environment variable.A Docker container is like a virtual machine, but it is much lighter weight. For example, a program running in a container can start in less than a second and many containers can run simultaneously on the same physical or virtual machine instance.Docker uses a simple file called a `Dockerfile` to specify how the image is assembled. An example is provided below. You can build your Docker images based on Docker images built by yourself or by others, which can simplify things quite a bit.Docker has become very popular in programming and `devops` communities due to its flexibility and its well-defined specification of how code can be run in its containers. It is the underpinning of many services built in the past few years, such as [Amazon ECS].Amazon SageMaker uses Docker to allow users to train and deploy arbitrary algorithms.In Amazon SageMaker, Docker containers are invoked in a one way for training and another, slightly different, way for hosting. 
The following sections outline how to build containers for the SageMaker environment.Some helpful links:* [Docker home page](http://www.docker.com)* [Getting started with Docker](https://docs.docker.com/get-started/)* [`Dockerfile` reference](https://docs.docker.com/engine/reference/builder/)* [`docker run` reference](https://docs.docker.com/engine/reference/run/)[Amazon ECS]: https://aws.amazon.com/ecs/ How Amazon SageMaker runs your Docker containerBecause you can run the same image in training or hosting, Amazon SageMaker runs your container with the argument `train` or `serve`. How your container processes this argument depends on the container.* In this example, we don't define a `ENTRYPOINT` in the `Dockerfile`, so Docker runs the command [`train` at training time](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html) and [`serve` at serving time](https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-inference-code.html). In this example, we define these as executable Python scripts, but they could be any program that we want to start in that environment.* If you specify a program as a `ENTRYPOINT` in the `Dockerfile`, that program will be run at startup and its first argument will be `train` or `serve`. The program can then look at that argument and decide what to do.* If you are building separate containers for training and hosting (or building only for one or the other), you can define a program as a `ENTRYPOINT` in the `Dockerfile` and ignore (or verify) the first argument passed in. Running your container during trainingWhen Amazon SageMaker runs training, your `train` script is run, as in a regular Python program. A number of files are laid out for your use, under the `/opt/ml` directory:``` /opt/ml |-- input | |-- config | | |-- hyperparameters.json | | -- resourceConfig.json | -- data | -- | -- |-- model | -- -- output -- failure``` The input* `/opt/ml/input/config` contains information to control how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. These values are always strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training.* `/opt/ml/input/data//` (for File mode) contains the input data for that channel. The channels are created based on the call to `CreateTrainingJob`, but it's generally important that channels match algorithm expectations. The files for each channel are copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure.* `/opt/ml/input/data/_` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch. The output* `/opt/ml/model/` is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker packages any files in this directory into a compressed tar archive file. This file is made available at the S3 location returned to the `DescribeTrainingJob` result.* `/opt/ml/output` is a directory where the algorithm can write a file `failure` that describes why the job failed. The contents of this file are returned to the `FailureReason` field of the `DescribeTrainingJob` result. 
For jobs that succeed, there is no reason to write this file as it is ignored. Running your container during hostingHosting has a very different model than training because hosting is responding to inference requests that come in via HTTP. In this example, we use [TensorFlow Serving](https://www.tensorflow.org/serving/), however the hosting solution can be customized. One example is the [Python serving stack within the `scikit learn` example](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb).Amazon SageMaker uses two URLs in the container:* `/ping` receives `GET` requests from the infrastructure. Your program returns 200 if the container is up and accepting requests.* `/invocations` is the endpoint that receives client inference `POST` requests. The format of the request and the response is up to the algorithm. If the client supplied `ContentType` and `Accept` headers, these are passed in as well. The container has the model files in the same place that they were written to during training: /opt/ml `-- model `-- The parts of the sample containerThe `container` directory has all the components you need to package the sample algorithm for `Amazon SageMager`:``` . |-- Dockerfile |-- build_and_push.sh `-- cifar10 |-- cifar10.py |-- resnet_model.py |-- nginx.conf |-- serve `-- train```Let's discuss each of these in turn:* __`Dockerfile`__ describes how to build your Docker container image. More details are provided below.* __`build_and_push.sh`__ is a script that uses the `Dockerfile` to build your container images and then pushes it to ECR. We invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms.* __`cifar10`__ is the directory which contains the files that are installed in the container.In this simple application, we install only five files in the container. You may only need that many, but if you have many supporting routines, you may wish to install more. These five files show the standard structure of our Python containers, although you are free to choose a different tool set and therefore could have a different layout. If you're writing in a different programming language, you will have a different layout depending on the frameworks and tools you choose.The files that we put in the container are:* __`cifar10.py`__ is the program that implements our training algorithm.* __`resnet_model.py`__ is the program that contains our `Resnet` model.* __`nginx.conf`__ is the configuration file for the nginx front-end. Generally, you should be able to take this file as-is.* __`serve`__ is the program started when the container is started for hosting. It simply launches nginx and loads your exported model with TensorFlow Serving.* __`train`__ is the program that is invoked when the container is run for training. Our implementation of this script invokes `cifar10.py` with our `hyperparameter` values retrieved from /opt/ml/input/config/hyperparameters.json. The goal for doing this is to avoid having to modify our training algorithm program.In summary, the two files you probably want to change for your application are `train` and `serve`. Setup
###Code
import sys
import IPython
!{sys.executable} -m pip install sagemaker-studio-image-build ipywidgets opencv-python
IPython.Application.instance().kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
The `Dockerfile`The `Dockerfile` describes the image that we want to build. You can think of it as describing the complete operating system installation of the system that you want to run. A Docker container running is quite a bit lighter than a full operating system, however, because it takes advantage of Linux on the host machine for the basic operations.For the Python science stack, we start from an official TensorFlow docker image and run the normal tools to install TensorFlow Serving. Then we add the code that implements our specific algorithm to the container and set up the right environment for it to run under.Let's look at the `Dockerfile` for this example.
###Code
!cat container/Dockerfile
###Output
_____no_output_____
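###Markdown
The notebook never prints the `train` script itself, but the description above pins down its job: read the hyperparameters SageMaker writes to `/opt/ml/input/config/hyperparameters.json`, hand them to the real training code (`cifar10.py`), and report any error through `/opt/ml/output/failure`. The sketch below is a hedged illustration of that contract only; it is not the shipped script, and the `--key=value` flag convention used to pass hyperparameters to `cifar10.py` is an assumption.
###Code
# Hedged sketch of a minimal `train` entry point (illustrative, not the repository's script)
import json
import os
import subprocess
import sys
import traceback

prefix = '/opt/ml/'
param_path = os.path.join(prefix, 'input/config/hyperparameters.json')
failure_path = os.path.join(prefix, 'output/failure')

def train():
    try:
        with open(param_path, 'r') as f:
            hyperparameters = json.load(f)  # SageMaker passes all values as strings
        # Hand the values to the actual training program; the flag format here is assumed.
        cmd = [sys.executable, 'cifar10.py'] + ['--{}={}'.format(k, v) for k, v in hyperparameters.items()]
        subprocess.check_call(cmd)
    except Exception as e:
        # Anything written to this file is surfaced as the job's FailureReason.
        with open(failure_path, 'w') as f:
            f.write('Exception during training: {}\n{}'.format(e, traceback.format_exc()))
        sys.exit(255)

if __name__ == '__main__':
    train()
###Output
_____no_output_____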
###Markdown
Building and registering the containerThe following shell code shows how to build the container image using `docker build` and push the container image to ECR using `docker push`. This code is also available as the shell script `container/build-and-push.sh`, which you can run as `build-and-push.sh sagemaker-tf-cifar10-example` to build the image `sagemaker-tf-cifar10-example`. If you are using Amazon SageMaker Studio, please kindly choose option 1 to build the docker image, otherwise, go for Option 2. * For Option 1, please refer to ***Installing*** section in [sagemaker-studio-image-build](https://github.com/aws-samples/sagemaker-studio-image-build-cli) for the SageMaker Execution Role configuration.* For Option 2, the code looks for an ECR repository in the account you're using and the current default region (if you're using a SageMaker notebook instance, this is the region where the notebook instance was created). If the repository doesn't exist, the script will create it. Running container build in SageMaker Studio
###Code
%%sh
# The name of our algorithm
repository_name=sagemaker-tf-cifar10-example:latest
cd container
chmod +x cifar10/train
chmod +x cifar10/serve
sm-docker build . --file ./Dockerfile --repository $repository_name
###Output
_____no_output_____
###Markdown
Download the CIFAR-10 datasetOur training algorithm is expecting our training data to be in the file format of [`TFRecords`](https://www.tensorflow.org/guide/datasets), which is a simple record-oriented binary format that many TensorFlow applications use for training data.Below is a Python script adapted from the [official TensorFlow CIFAR-10 example](https://github.com/tensorflow/models/tree/master/tutorials/image/cifar10_estimator), which downloads the CIFAR-10 dataset and converts them into `TFRecords`.
###Code
! python utils/generate_cifar10_tfrecords.py --data-dir=/tmp/cifar-10-data
# There should be three tfrecords. (eval, train, validation)
! ls /tmp/cifar-10-data
###Output
_____no_output_____
###Markdown
Part 2: Training and Hosting your Algorithm in Amazon SageMakerOnce you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above. Set up the environmentHere we specify the bucket to use and the role that is used for working with SageMaker.
###Code
# S3 prefix
prefix = "DEMO-tensorflow-cifar10"
###Output
_____no_output_____
###Markdown
To represent our training, we use the Estimator class, which needs to be configured in five steps: 1. role - our AWS execution role (an IAM role). 2. instance_count - number of instances to use for training. 3. instance_type - type of instance to use for training (for training locally, we specify `local`, which is supported in SageMaker Notebook Instances only). 4. image_uri - our custom TensorFlow Docker image we created. 5. hyperparameters - the hyperparameters we want to pass. Let's start with setting up our IAM role.
###Code
from sagemaker import get_execution_role
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Create the sessionThe session remembers our connection parameters to SageMaker. We use it to perform all of our SageMaker operations.
###Code
import sagemaker as sage
sess = sage.Session()
###Output
_____no_output_____
###Markdown
Upload the data for trainingWe will use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.
###Code
WORK_DIRECTORY = "/tmp/cifar-10-data"
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Training on SageMakerTraining a model on SageMaker with the Python SDK manages the API calls to SageMaker platform. Please select instance_type referring to [supported EC2 instance types](https://aws.amazon.com/sagemaker/pricing/instance-types/).In addition, we must now specify the ECR image URL, which we just pushed above.Let's first fetch our ECR image `url` that corresponds to the image we just built and pushed.
###Code
import boto3
client = boto3.client("sts")
account = client.get_caller_identity()["Account"]
my_session = boto3.session.Session()
region = my_session.region_name
algorithm_name = "sagemaker-tf-cifar10-example"
ecr_image = "{}.dkr.ecr.{}.amazonaws.com/{}:latest".format(account, region, algorithm_name)
print(ecr_image)
from sagemaker.estimator import Estimator
hyperparameters = {"train-steps": 100}
instance_type = "ml.m5.xlarge"
estimator = Estimator(
role=role,
instance_count=1,
instance_type=instance_type,
image_uri=ecr_image,
hyperparameters=hyperparameters,
)
estimator.fit(data_location)
# deploying model via SageMaker Endpoint service
predictor = estimator.deploy(
initial_instance_count=1,
instance_type=instance_type
)
import cv2
import numpy as np
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer
class_names = [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck"
]
image = cv2.imread("data/cat.png", 1)
# resize, as our model is expecting images in 32x32.
image = cv2.resize(image, (32, 32))
data = {"instances": np.asarray(image).astype(float).tolist()}
predictor.serializer = JSONSerializer()
predictor.deserializer = JSONDeserializer()
pred = predictor.predict(data)
print(pred)
print(f"Class: {class_names[pred['predictions'][0]['classes']]}")
###Output
_____no_output_____
###Markdown
Optional cleanupWhen you're done with the endpoint, you should clean it up.All the training jobs, models and endpoints we created can be viewed through the SageMaker console of your AWS account.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
doc/groovy/STIL.ipynb | ###Markdown
STIL Integration[STIL](http://www.star.bristol.ac.uk/~mbt/stil/), the Starlink Tables Infrastructure Library, is a Java API for working with astronomical data, including VOTable, FITS, SQL, ASCII, CSV, CDF, and GBIN formats. This notebook shows how to load STIL, and configure BeakerX to display STIL StarTables with the BeakerX interactive table widget.
###Code
%classpath add mvn commons-io commons-io 2.6
import org.apache.commons.io.FileUtils
stilUrl = "http://www.star.bristol.ac.uk/~mbt/stil/stil.jar"
stilFile = System.getProperty("java.io.tmpdir") + "/stilFiles/stil.jar"
FileUtils.copyURLToFile(new URL(stilUrl), new File(stilFile));
%classpath add dynamic stilFile
import uk.ac.starlink.table.StarTable
import uk.ac.starlink.table.Tables
import jupyter.Displayer
import jupyter.Displayers
Displayers.register(StarTable.class, new Displayer<StarTable>() {
def getColumnNames(t){
names = []
nCol = t.getColumnCount();
for ( int icol = 0; icol < nCol; icol++ ) {
names.add(t.getColumnInfo(icol).getName())
}
names
}
@Override
public Map<String, String> display(StarTable table) {
columnNames = getColumnNames(table)
columnInfos = Tables.getColumnInfos(table)
MAXCHAR = 64
new TableDisplay(
(int)table.getRowCount(),
(int)table.getColumnCount(),
columnNames,
new TableDisplay.Element() {
@Override
public String get(int columnIndex, int rowIndex) {
Object cell = table.getCell(rowIndex, columnIndex);
return columnInfos[columnIndex].formatValue(cell, MAXCHAR)
}
}
).display();
return OutputCell.DISPLAYER_HIDDEN;
}
});
import org.apache.commons.io.FileUtils
messierUrl = "http://andromeda.star.bristol.ac.uk/data/messier.csv"
messierFile = System.getProperty("java.io.tmpdir") + "/stilFiles/messier.csv"
FileUtils.copyURLToFile(new URL(messierUrl), new File(messierFile));
"Done"
import uk.ac.starlink.table.StarTable
import uk.ac.starlink.table.StarTableFactory
import uk.ac.starlink.table.Tables
starTable = new StarTableFactory().makeStarTable( messierFile, "csv" );
starTable = Tables.randomTable(starTable)
###Output
_____no_output_____ |
graphs_trees/tree_level_lists/tree_level_lists_challenge.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Create a list for each level of a binary tree.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is this a binary search tree? * Yes* Should each level be a list of nodes? * Yes* Can we assume we already have a Node class with an insert method? * Yes* Can we assume this fits memory? * Yes Test Cases* 5, 3, 8, 2, 4, 1, 7, 6, 9, 10, 11 -> [[5], [3, 8], [2, 4, 7, 9], [1, 6, 10], [11]]Note: Each number in the result is actually a node containing the number AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_level_lists/tree_level_lists_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
%run ../bst/bst.py
%load ../bst/bst.py
class BstLevelLists(Bst):
def create_level_lists(self):
        levelLists = []
        stax = [self.root]
        # append each level before collecting its children so that no trailing
        # empty level is added once the leaves are reached
        while stax:
            levelLists.append(stax)
            stax = [c for n in stax for c in [n.left, n.right] if c]
        return levelLists
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
%run ../utils/results.py
# %load test_tree_level_lists.py
import unittest
class TestTreeLevelLists(unittest.TestCase):
def test_tree_level_lists(self):
bst = BstLevelLists(Node(5))
bst.insert(3)
bst.insert(8)
bst.insert(2)
bst.insert(4)
bst.insert(1)
bst.insert(7)
bst.insert(6)
bst.insert(9)
bst.insert(10)
bst.insert(11)
levels = bst.create_level_lists()
results_list = []
for level in levels:
results = Results()
for node in level:
results.add_result(node)
results_list.append(results)
self.assertEqual(str(results_list[0]), '[5]')
self.assertEqual(str(results_list[1]), '[3, 8]')
self.assertEqual(str(results_list[2]), '[2, 4, 7, 9]')
self.assertEqual(str(results_list[3]), '[1, 6, 10]')
self.assertEqual(str(results_list[4]), '[11]')
print('Success: test_tree_level_lists')
def main():
test = TestTreeLevelLists()
test.test_tree_level_lists()
if __name__ == '__main__':
main()
###Output
Success: test_tree_level_lists
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Create a list for each level of a binary tree.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is this a binary search tree? * Yes* Should each level be a list of nodes? * Yes* Can we assume we already have a Node class with an insert method? * Yes* Can we assume this fits memory? * Yes Test Cases* 5, 3, 8, 2, 4, 1, 7, 6, 9, 10, 11 -> [[5], [3, 8], [2, 4, 7, 9], [1, 6, 10], [11]]Note: Each number in the result is actually a node containing the number AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_level_lists/tree_level_lists_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
# %load ../bst/bst.py
class Node(object):
def __init__(self, data):
self.data = data
self.left = None
self.right = None
self.parent = None
def __repr__(self):
return str(self.data)
class Bst(object):
def __init__(self, root=None):
self.root = root
def insert(self, data):
if data is None:
raise TypeError('data cannot be None')
if self.root is None:
self.root = Node(data)
return self.root
else:
return self._insert(self.root, data)
def _insert(self, node, data):
if node is None:
return Node(data)
if data <= node.data:
if node.left is None:
node.left = self._insert(node.left, data)
node.left.parent = node
return node.left
else:
return self._insert(node.left, data)
else:
if node.right is None:
node.right = self._insert(node.right, data)
node.right.parent = node
return node.right
else:
return self._insert(node.right, data)
from collections import deque
class NodeWithLevel(object):
def __init__(self, node, level):
self.node = node
self.level = level
class BstLevelLists(Bst):
def create_level_lists(self):
results = []
current = [self.root]
parents = []
# meaning while len(current) > 0
while current:
results.append(current)
parents = current
current = []
for node in parents:
if node.left is not None:
current.append(node.left)
if node.right is not None:
current.append(node.right)
return results
# create level lists with helper class
def create_level_lists2(self):
queue = deque()
queue.append(NodeWithLevel(self.root, 0))
currentLevel = 0
levelLists = []
levelList = []
while len(queue) > 0:
nodeWithLevel = queue.popleft()
node = nodeWithLevel.node
level = nodeWithLevel.level
if currentLevel != level:
levelLists.append(levelList)
levelList = []
currentLevel = level
levelList.append(node)
if node.left is not None:
queue.append(NodeWithLevel(node.left, level + 1))
if node.right is not None:
queue.append(NodeWithLevel(node.right, level + 1))
levelLists.append(levelList)
return levelLists
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
%run ../utils/results.py
# %load test_tree_level_lists.py
from nose.tools import assert_equal
class TestTreeLevelLists(object):
def test_tree_level_lists(self):
bst = BstLevelLists(Node(5))
bst.insert(3)
bst.insert(8)
bst.insert(2)
bst.insert(4)
bst.insert(1)
bst.insert(7)
bst.insert(6)
bst.insert(9)
bst.insert(10)
bst.insert(11)
levels = bst.create_level_lists()
results_list = []
for level in levels:
results = Results()
for node in level:
results.add_result(node)
results_list.append(results)
assert_equal(str(results_list[0]), '[5]')
assert_equal(str(results_list[1]), '[3, 8]')
assert_equal(str(results_list[2]), '[2, 4, 7, 9]')
assert_equal(str(results_list[3]), '[1, 6, 10]')
assert_equal(str(results_list[4]), '[11]')
print('Success: test_tree_level_lists')
def main():
test = TestTreeLevelLists()
test.test_tree_level_lists()
if __name__ == '__main__':
main()
###Output
Success: test_tree_level_lists
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Create a list for each level of a binary tree.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is this a binary search tree? * Yes* Should each level be a list of nodes? * Yes* Can we assume we already have a Node class with an insert method? * Yes* Can we assume this fits memory? * Yes Test Cases* 5, 3, 8, 2, 4, 1, 7, 6, 9, 10, 11 -> [[5], [3, 8], [2, 4, 7, 9], [1, 6, 10], [11]]Note: Each number in the result is actually a node containing the number AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_level_lists/tree_level_lists_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
%run ../bst/bst.py
%load ../bst/bst.py
class BstLevelLists(Bst):
def create_level_lists(self):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
%run ../utils/results.py
# %load test_tree_level_lists.py
from nose.tools import assert_equal
class TestTreeLevelLists(object):
def test_tree_level_lists(self):
bst = BstLevelLists(Node(5))
bst.insert(3)
bst.insert(8)
bst.insert(2)
bst.insert(4)
bst.insert(1)
bst.insert(7)
bst.insert(6)
bst.insert(9)
bst.insert(10)
bst.insert(11)
levels = bst.create_level_lists()
results_list = []
for level in levels:
results = Results()
for node in level:
results.add_result(node)
results_list.append(results)
assert_equal(str(results_list[0]), '[5]')
assert_equal(str(results_list[1]), '[3, 8]')
assert_equal(str(results_list[2]), '[2, 4, 7, 9]')
assert_equal(str(results_list[3]), '[1, 6, 10]')
assert_equal(str(results_list[4]), '[11]')
print('Success: test_tree_level_lists')
def main():
test = TestTreeLevelLists()
test.test_tree_level_lists()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Create a list for each level of a binary tree.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is this a binary search tree? * Yes* Should each level be a list of nodes? * Yes* Can we assume we already have a Node class with an insert method? * Yes* Can we assume this fits memory? * Yes Test Cases* 5, 3, 8, 2, 4, 1, 7, 6, 9, 10, 11 -> [[5], [3, 8], [2, 4, 7, 9], [1, 6, 10], [11]]Note: Each number in the result is actually a node containing the number AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_level_lists/tree_level_lists_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
%run ../bst/bst.py
%load ../bst/bst.py
class BstLevelLists(Bst):
def create_level_lists(self):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
%run ../utils/results.py
# %load test_tree_level_lists.py
import unittest
class TestTreeLevelLists(unittest.TestCase):
def test_tree_level_lists(self):
bst = BstLevelLists(Node(5))
bst.insert(3)
bst.insert(8)
bst.insert(2)
bst.insert(4)
bst.insert(1)
bst.insert(7)
bst.insert(6)
bst.insert(9)
bst.insert(10)
bst.insert(11)
levels = bst.create_level_lists()
results_list = []
for level in levels:
results = Results()
for node in level:
results.add_result(node)
results_list.append(results)
self.assertEqual(str(results_list[0]), '[5]')
self.assertEqual(str(results_list[1]), '[3, 8]')
self.assertEqual(str(results_list[2]), '[2, 4, 7, 9]')
self.assertEqual(str(results_list[3]), '[1, 6, 10]')
self.assertEqual(str(results_list[4]), '[11]')
print('Success: test_tree_level_lists')
def main():
test = TestTreeLevelLists()
test.test_tree_level_lists()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Create a linked list for each level of a binary tree.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is this a binary search tree? * Yes* Can we assume we already have a Node class with an insert method? * Yes Test Cases* 5, 3, 8, 2, 4, 1, 7, 6, 9, 10, 11 -> [[5], [3, 8], [2, 4, 7, 9], [1, 6, 10], [11]] AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/graphs_trees/tree_level_lists/tree_level_lists_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
%run ../bst/bst.py
%load ../bst/bst.py
def create_level_lists(root):
# TODO: Implement me
pass
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
%run ../utils/results.py
# %load test_tree_level_lists.py
from nose.tools import assert_equal
class TestTreeLevelLists(object):
def test_tree_level_lists(self):
node = Node(5)
insert(node, 3)
insert(node, 8)
insert(node, 2)
insert(node, 4)
insert(node, 1)
insert(node, 7)
insert(node, 6)
insert(node, 9)
insert(node, 10)
insert(node, 11)
levels = create_level_lists(node)
results_list = []
for level in levels:
results = Results()
for node in level:
results.add_result(node)
results_list.append(results)
assert_equal(str(results_list[0]), '[5]')
assert_equal(str(results_list[1]), '[3, 8]')
assert_equal(str(results_list[2]), '[2, 4, 7, 9]')
assert_equal(str(results_list[3]), '[1, 6, 10]')
assert_equal(str(results_list[4]), '[11]')
print('Success: test_tree_level_lists')
def main():
test = TestTreeLevelLists()
test.test_tree_level_lists()
if __name__ == '__main__':
main()
###Output
_____no_output_____ |
BC4_crypto_forecasting/scripts/AVAX_notebook.ipynb | ###Markdown
--> Forecasting - AVAX Master Degree Program in Data Science and Advanced Analytics Business Cases with Data Science Project: > Group AA Done by:> - Beatriz Martins Selidónio Gomes, m20210545> - Catarina Inês Lopes Garcez, m20210547 > - Diogo André Domingues Pires, m20201076 > - Rodrigo Faísca Guedes, m20210587 --- Table of Content Import and Data Integration - [Import the needed Libraries](third-bullet) Data Exploration and Understanding - [Initial Analysis (EDA - Exploratory Data Analysis)](fifth-bullet) - [Variables Distribution](seventh-bullet) Data Preparation - [Data Transformation](eighth-bullet) Modelling - [Building LSTM Model](twentysecond-bullet) - [Get Best Parameters for LSTM](twentythird-bullet) - [Run the LSTM Model and Get Predictions](twentyfourth-bullet) - [Recursive Predictions](twentysixth-bullet) --- Import and Data Integration Import the needed Libraries [Back to TOC](toc)
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data Exploration and Understanding Initial Analysis (EDA - Exploratory Data Analysis) [Back to TOC](toc)
###Code
df = pd.read_csv('../data/data_aux/df_AVAX.csv')
df
###Output
_____no_output_____
###Markdown
Data Types
###Code
# Get to know the number of instances and Features, the DataTypes and if there are missing values in each Feature
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1826 entries, 0 to 1825
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 1826 non-null object
1 AVAX-USD_ADJCLOSE 583 non-null float64
2 AVAX-USD_CLOSE 583 non-null float64
3 AVAX-USD_HIGH 583 non-null float64
4 AVAX-USD_LOW 583 non-null float64
5 AVAX-USD_OPEN 583 non-null float64
6 AVAX-USD_VOLUME 583 non-null float64
dtypes: float64(6), object(1)
memory usage: 100.0+ KB
###Markdown
Missing Values
###Code
# Count the number of missing values for each Feature
df.isna().sum().to_frame().rename(columns={0: 'Count Missing Values'})
###Output
_____no_output_____
###Markdown
Descriptive Statistics
###Code
# Descriptive Statistics Table
df.describe().T
# settings to display all columns
pd.set_option("display.max_columns", None)
# display a random sample of 10 rows from the dataframe
df.sample(n=10)
#CHECK ROWS THAT HAVE ANY MISSING VALUE IN ONE OF THE COLUMNS
is_NaN = df.isnull()
row_has_NaN = is_NaN.any(axis=1)
rows_with_NaN = df[row_has_NaN]
rows_with_NaN
#FILTER OUT ROWS THAT ARE MISSING INFORMATION
df = df[~row_has_NaN]
df.reset_index(inplace=True, drop=True)
df
###Output
_____no_output_____
###Markdown
Data Preparation Data Transformation [Back to TOC](toc) __`Duplicates`__
###Code
# Checking if exist duplicated observations
print(f'\033[1m' + "Number of duplicates: " + '\033[0m', df.duplicated().sum())
###Output
[1mNumber of duplicates: [0m 0
###Markdown
__`Convert Date to correct format`__
###Code
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df
###Output
_____no_output_____
###Markdown
__`Get percentual difference between open and close values and low and high values`__
###Code
df['pctDiff_CloseOpen'] = abs((df[df.columns[2]]-df[df.columns[5]])/df[df.columns[2]])*100
df['pctDiff_HighLow'] = abs((df[df.columns[3]]-df[df.columns[4]])/df[df.columns[4]])*100
df.head()
def plot_coinValue(df):
#Get coin name
coin_name = df.columns[2].split('-')[0]
#Get date and coin value
x = df['Date']
    y = df[df.columns[2]] # AVAX-USD_CLOSE
#Get the volume of trades
v = df[df.columns[-3]]/1e9
#Get percentual diferences
y2 = df[df.columns[-1]] # pctDiff_HighLow
y1= df[df.columns[-2]] # pctDiff_CloseOpen
fig, axs = plt.subplots(3, 1, figsize=(12,14))
axs[0].plot(x, y)
axs[2].plot(x, v)
# plotting the line 1 points
axs[1].plot(x, y1, label = "Close/Open")
# plotting the line 2 points
axs[1].plot(x, y2, label = "High/Low")
axs[1].legend()
axs[0].title.set_text('Time Evolution of '+ coin_name)
axs[0].set(xlabel="", ylabel="Close Value in USD$")
axs[2].title.set_text('Volume of trades of '+ coin_name)
axs[2].set(xlabel="", ylabel="Total number of trades in billions")
axs[1].title.set_text('Daily Market percentual differences of '+ coin_name)
axs[1].set(xlabel="", ylabel="Percentage (%)")
plt.savefig('../analysis/'+coin_name +'_stats'+'.png')
return coin_name
coin_name = plot_coinValue(df)
#FILTER DATASET
df = df.loc[df['Date']>= '2021-09-01']
df
###Output
_____no_output_____
###Markdown
Modelling Building LSTM Model [Back to TOC](toc) Strategy: Create a DF (windowed_df) where the middle columns will correspond to the close values of X days before the target date and the final column will correspond to the close value of the target date. Use these values for prediction and play with the value of X
###Code
def get_windowed_df(X, df):
start_Date = df['Date'] + pd.Timedelta(days=X)
perm = np.zeros((1,X+1))
#Get labels for DataFrame
j=1
labels=[]
while j <= X:
label = 'closeValue_' + str(j) + 'daysBefore'
labels.append(label)
j+=1
labels.append('closeValue')
for i in range(X,df.shape[0]):
temp = np.zeros((1,X+1))
#Date for i-th day
#temp[0,0] = df.iloc[i]['Date']
#Close values for k days before
for k in range(X):
temp[0,k] = df.iloc[i-k-1,2]
#Close value for i-th date
temp[0,-1] = df.iloc[i,2]
#Add values to the permanent frame
perm = np.vstack((perm,temp))
#Get the array in dataframe form
windowed_df = pd.DataFrame(perm[1:,:], columns = labels)
return windowed_df
#Get the dataframe and append the dates
windowed_df = get_windowed_df(15, df)
windowed_df['Date'] = df.iloc[15:]['Date'].reset_index(drop=True)
windowed_df
#Get the X,y and dates into a numpy array to apply on a model
def windowed_df_to_date_X_y(windowed_dataframe):
df_as_np = windowed_dataframe.to_numpy()
dates = df_as_np[:, -1]
middle_matrix = df_as_np[:, 0:-2]
X = middle_matrix.reshape((len(dates), middle_matrix.shape[1], 1))
Y = df_as_np[:, -2]
return dates, X.astype(np.float32), Y.astype(np.float32)
dates, X, y = windowed_df_to_date_X_y(windowed_df)
dates.shape, X.shape, y.shape
#Partition for train, validation and test
q_80 = int(len(dates) * .8)
q_90 = int(len(dates) * .9)
dates_train, X_train, y_train = dates[:q_80], X[:q_80], y[:q_80]
dates_val, X_val, y_val = dates[q_80:q_90], X[q_80:q_90], y[q_80:q_90]
dates_test, X_test, y_test = dates[q_90:], X[q_90:], y[q_90:]
fig,axs = plt.subplots(1, 1, figsize=(12,5))
#Plot the partitions
axs.plot(dates_train, y_train)
axs.plot(dates_val, y_val)
axs.plot(dates_test, y_test)
axs.legend(['Train', 'Validation', 'Test'])
fig.savefig('../analysis/'+coin_name +'_partition'+'.png')
###Output
_____no_output_____
###Markdown
Get Best Parameters for LSTM [Back to TOC](toc)
###Code
#!pip install tensorflow
#import os
#os.environ['PYTHONHASHSEED']= '0'
#import numpy as np
#np.random.seed(1)
#import random as rn
#rn.seed(1)
#import tensorflow as tf
#tf.random.set_seed(1)
#
#from tensorflow.keras.models import Sequential
#from tensorflow.keras.optimizers import Adam
#from tensorflow.keras import layers
#from sklearn.metrics import mean_squared_error
#
## Function to create LSTM model and compute the MSE value for the given parameters
#def check_model(X_train, y_train, X_val, y_val, X_test, y_test, learning_rate,epoch,batch):
#
# # create model
# model = Sequential([layers.Input((15, 1)),
# layers.LSTM(64),
# layers.Dense(32, activation='relu'),
# layers.Dense(32, activation='relu'),
# layers.Dense(1)])
# # Compile model
# model.compile(loss='mse', optimizer=Adam(learning_rate=learning_rate), metrics=['mean_absolute_error'])
#
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=epoch, shuffle=False, batch_size=batch, verbose=2)
#
# test_predictions = model.predict(X_test).flatten()
#
# LSTM_mse = mean_squared_error(y_test, test_predictions)
#
# return LSTM_mse
#
##Function that iterates the different parameters and gets the ones corresponding to the lowest MSE score.
#def search_parameters(batch_size, epochs, learn_rate, X_train, y_train, X_val, y_val, X_test, y_test):
#
# best_score = float('inf')
#
# for b in batch_size:
# for e in epochs:
# for l in learn_rate:
# print('Batch Size: ' + str(b))
# print('Number of Epochs: ' + str(e))
# print('Value of Learning Rate: ' + str(l))
# try:
# mse = check_model(X_train, y_train, X_val, y_val, X_test, y_test,l,e,b)
# print('MSE=%.3f' % (mse))
# if mse < best_score:
# best_score = mse
# top_params = [b, e, l]
# except:
# continue
#
# print('Best MSE=%.3f' % (best_score))
# print('Optimal Batch Size: ' + str(top_params[0]))
# print('Optimal Number of Epochs: ' + str(top_params[1]))
# print('Optimal Value of Learning Rate: ' + str(top_params[2]))
#
#
## define parameters
#batch_size = [10, 100, 1000]
#epochs = [50, 100]
#learn_rate = np.linspace(0.001,0.1, num=10)
#
#warnings.filterwarnings("ignore")
#search_parameters(batch_size, epochs, learn_rate, X_train, y_train, X_val, y_val, X_test, y_test)
###Output
_____no_output_____
###Markdown
Run the LSTM Model and Get Predictions [Back to TOC](toc)
###Code
#BEST SOLUTION OF THE MODEL
# MSE=48.801
# Batch Size: 10
# Number of Epochs: 100
# Value of Learning Rate: 0.012
model = Sequential([layers.Input((15, 1)),
layers.LSTM(64),
layers.Dense(32, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(1)])
model.compile(loss='mse',
optimizer=Adam(learning_rate=0.012),
metrics=['mean_absolute_error'])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100, shuffle=False, batch_size=10, verbose=2)
#PREDICT THE VALUES USING THE MODEL
train_predictions = model.predict(X_train).flatten()
val_predictions = model.predict(X_val).flatten()
test_predictions = model.predict(X_test).flatten()
fig,axs = plt.subplots(3, 1, figsize=(14,14))
axs[0].plot(dates_train, train_predictions)
axs[0].plot(dates_train, y_train)
axs[0].legend(['Training Predictions', 'Training Observations'])
axs[1].plot(dates_val, val_predictions)
axs[1].plot(dates_val, y_val)
axs[1].legend(['Validation Predictions', 'Validation Observations'])
axs[2].plot(dates_test, test_predictions)
axs[2].plot(dates_test, y_test)
axs[2].legend(['Testing Predictions', 'Testing Observations'])
plt.savefig('../analysis/LTSM_recursive/'+coin_name +'_modelPredictions'+'.png')
###Output
_____no_output_____
###Markdown
Recursive Predictions [Back to TOC](toc)
###Code
from copy import deepcopy
#Get prediction for future dates recursively based on the previous existing information. Then update the window of days upon
#which the predictions are made
recursive_predictions = []
recursive_dates = np.concatenate([dates_test])
last_window = deepcopy(X_train[-1])
for target_date in recursive_dates:
next_prediction = model.predict(np.array([last_window])).flatten()
recursive_predictions.append(next_prediction)
last_window = np.insert(last_window,0,next_prediction)[:-1]
fig,axs = plt.subplots(2, 1, figsize=(14,10))
axs[0].plot(dates_train, train_predictions)
axs[0].plot(dates_train, y_train)
axs[0].plot(dates_val, val_predictions)
axs[0].plot(dates_val, y_val)
axs[0].plot(dates_test, test_predictions)
axs[0].plot(dates_test, y_test)
axs[0].plot(recursive_dates, recursive_predictions)
axs[0].legend(['Training Predictions',
'Training Observations',
'Validation Predictions',
'Validation Observations',
'Testing Predictions',
'Testing Observations',
'Recursive Predictions'])
axs[1].plot(dates_test, y_test)
axs[1].plot(recursive_dates, recursive_predictions)
axs[1].legend(['Testing Observations',
'Recursive Predictions'])
plt.savefig('../analysis/LTSM_recursive/'+ coin_name +'_recursivePredictions'+'.png')
###Output
_____no_output_____ |
KDDCUP99_18.ipynb | ###Markdown
KDD Cup 1999 Data http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
###Code
import sklearn
import pandas as pd
from sklearn import preprocessing
from sklearn.utils import resample
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
import time
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.externals import joblib
from sklearn.utils import resample
print('The scikit-learn version is {}.'.format(sklearn.__version__))
col_names = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root","num_file_creations",
"num_shells","num_access_files","num_outbound_cmds","is_host_login","is_guest_login","count",
"srv_count","serror_rate","srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate","label"]
data = pd.read_csv("data/corrected", header=None, names = col_names)
data.shape
###Output
_____no_output_____
###Markdown
Preprocessing: Categorization
###Code
data.label.value_counts()
data['label2'] = data.label.where(data.label.str.contains('normal'),'attack')
data.label2.value_counts()
data['label3'] = data.label.copy()
data.loc[data.label.str.contains('back|land|neptune|pod|smurf|teardrop|mailbomb|apache2|processtable|udpstorm'),'label3'] = 'DoS'
data.loc[data.label.str.contains('buffer_overflow|loadmodule|perl|rootkit|ps|xterm|sqlattack'),'label3'] = 'U2R'
data.loc[data.label.str.contains('ftp_write|guess_passwd|imap|multihop|phf|spy|warezclient|warezmaster|snmpgetattack|snmpguess|httptunnel|sendmail|named|xlock|xsnoop|worm'),'label3'] = 'R2L'
data.loc[data.label.str.contains('ipsweep|nmap|portsweep|satan|mscan|saint'),'label3'] = 'Probe'
data.label3.value_counts()
#joblib.dump(data,'dump/20171118/corrected.pkl')
###Output
_____no_output_____
###Markdown
Sampling
###Code
#data = resample(data,n_samples=10000,random_state=0)
#data.shape
###Output
_____no_output_____
###Markdown
Numeric encoding
###Code
le_protocol_type = preprocessing.LabelEncoder()
le_protocol_type.fit(data.protocol_type)
data.protocol_type=le_protocol_type.transform(data.protocol_type)
le_service = preprocessing.LabelEncoder()
le_service.fit(data.service)
data.service = le_service.transform(data.service)
le_flag = preprocessing.LabelEncoder()
le_flag.fit(data.flag)
data.flag = le_flag.transform(data.flag)
data.describe()
data.shape
###Output
_____no_output_____
###Markdown
Separating the labels
###Code
y_test_1 = data.label.copy()
y_test_2 = data.label2.copy()
y_test_3 = data.label3.copy()
x_test= data.drop(['label','label2','label3'],axis=1)
x_test.shape
y_test_1.shape
y_test_2.shape
y_test_3.shape
###Output
_____no_output_____
###Markdown
Standardization
###Code
ss = preprocessing.StandardScaler()
ss.fit(x_test)
x_test = ss.transform(x_test)
col_names2 = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root","num_file_creations",
"num_shells","num_access_files","num_outbound_cmds","is_host_login","is_guest_login","count",
"srv_count","serror_rate","srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate"]
pd.DataFrame(x_test,columns=col_names2).describe()
###Output
_____no_output_____
###Markdown
Training
###Code
clf = joblib.load('dump/20171118/MLPClassifier10per.pkl')
t1=time.perf_counter()
pred = clf.predict(x_test)
t2=time.perf_counter()
print(t2-t1, "seconds")
print(classification_report(y_test_3, pred))
print(confusion_matrix(y_test_3, pred))
#joblib.dump(data,'dump/20171118/MLPClassifier10per.pkl')
###Output
_____no_output_____ |
script/prophet_model.ipynb | ###Markdown
Library
###Code
import pandas as pd
import numpy as np
import re
import pickle
import os
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from fbprophet import Prophet
from joblib import Parallel, delayed
import multiprocessing
def temp_func(func, name, group):
return func(group), name
def applyParallel(dfGrouped, func):
retLst, top_index = zip(
*Parallel(n_jobs=multiprocessing.cpu_count()-1)(delayed(temp_func)(
func, name, group) for name, group in dfGrouped))
return pd.concat(retLst, keys=top_index)
###Output
_____no_output_____
###Markdown
Scoring functions
###Code
def smape(y_true, y_pred):
"""
Scoring function
"""
denominator = (np.abs(y_true) + np.abs(y_pred)) / 2.0
diff = np.abs(y_true - y_pred) / denominator
diff[denominator == 0] = 0.0
return 100 * np.mean(diff)
def smape_serie(x):
"""
Scoring function on serie
"""
return smape(y_pred=x.Visits, y_true=x.value)
###Output
_____no_output_____
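###Markdown
A quick sanity check of the scoring function on two made-up points (illustrative only, not part of the original evaluation):
###Code
# Each absolute error of 10 is divided by the mean of the two values, then averaged and scaled to percent.
print(smape(y_true=np.array([100.0, 200.0]), y_pred=np.array([110.0, 190.0])))  # roughly 7.3
###Output
_____no_output_____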
###Markdown
Helping functions
###Code
def create_train():
if os.path.isfile("../data/work/train.pickle"):
data = pd.read_pickle("../data/work/train.pickle")
else:
data = pd.read_csv('../data/input/train_2.csv')
cols = data.columns[data.columns.str.contains("-")].tolist()
data["Page"] = data["Page"].astype(str)
data = data.set_index("Page").T
data.index = pd.to_datetime(data.index, format="%Y-%m-%d")
data.to_pickle("../data/work/train.pickle")
return data
def create_test():
if os.path.isfile("../data/work/test.pickle"):
df_test = pd.read_pickle("../data/work/test.pickle")
else:
df_test = pd.read_csv("../data/input/key_2.csv")
df_test['date'] = df_test.Page.apply(lambda a: a[-10:])
df_test['Page'] = df_test.Page.apply(lambda a: a[:-11])
df_test['date'] = pd.to_datetime(df_test['date'], format="%Y-%m-%d")
df_test.to_pickle("../data/work/test.pickle")
return df_test
###Output
_____no_output_____
###Markdown
Read data
###Code
data = create_train()
print(data.info())
data.head()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 793 entries, 2015-07-01 to 2017-08-31
Columns: 145063 entries, 2NE1_zh.wikipedia.org_all-access_spider to Francisco_el_matemático_(serie_de_televisión_de_2017)_es.wikipedia.org_all-access_spider
dtypes: float64(145063)
memory usage: 877.7 MB
None
###Markdown
Train / Test
###Code
## Split in train / test to evaluate scoring
train = data.iloc[:-60]
test = data.iloc[-60:]
print(train.shape)
print(test.shape)
print(data.shape)
###Output
(733, 145063)
(60, 145063)
(793, 145063)
###Markdown
Prophet
###Code
def prophet_forecast(df):
return Prophet(
yearly_seasonality=False,
daily_seasonality=False,
weekly_seasonality="auto",
seasonality_prior_scale=5,
changepoint_prior_scale=0.5).fit(df.dropna()).predict(df_predict)[[
"ds", "yhat"
]]
###Output
_____no_output_____
###Markdown
Test
###Code
df_predict = pd.DataFrame({"ds": test.index})
df_predict.head()
# Evaluate on a random sample of pages so that the cells below, which use train_sample and
# test_sample, can run end to end.
page_sample = train.columns[np.random.randint(0, len(train.columns), 10)]
train_sample = train[page_sample].reset_index().rename(
    columns={"index": "ds"}).melt(id_vars="ds").rename(columns={"value":
                                                                "y"}).dropna()
test_sample = test[page_sample]
train_sample.head()
forecast = applyParallel(train_sample.groupby("Page"),
prophet_forecast).reset_index().rename(
columns={"level_0": "Page"}).drop(
"level_1", axis=1)
forecast.head()
forecast = pd.merge(
test_sample.reset_index().rename(columns={"index": "ds"}).melt(
id_vars="ds"),
forecast,
on=["ds", "Page"],
how="inner")
forecast.head()
print("SMAPE is : ")
print(smape(y_pred=forecast["value"], y_true=forecast["yhat"]))
###Output
SMAPE is :
81.98447484169223
|
10_Motif-I/motif_1_inclass.ipynb | ###Markdown
Motif discovery and regulatory analysis - I Table of Contents1. Consensus sequences2. Probability and positional weight matrices3. Information content / entropy4. Motif finding approaches 1. Consensus sequencesAs you saw in the prelab lecture, there are many ways to represent motifs. In this assignment, we are going to have some more practice with these different representations and the kinds of interesting information contained in each one.One simple way to represent motifs which is easy for people to actually look at is the exact consensus sequence representation. In this representation, a motif is encoded as the most common base at each position. Say you have the following examples of a given motif:1. ACAGGAA2. TGCGGAA3. TGAGGAT4. AGTGGAA5. AACGGAA6. ACAGGAT By finding the most common base at each position, what is the exact consensus sequence for this motif? Although there is a single most common letter at each position in this example, you probably noticed that many of these positions seem to be somewhat flexible, where there is another nucleotide that comes up almost as frequently as the most common base. It is quite common for motifs such as transcription factor binding motifs to include some level of flexibility or degeneracy, and so we also have a human-readable way to encode this, called the degenerate consensus sequence representation. There are two common ways to encode this. One is related to the concept of regular expressions that we have seen a few times now, where the set of symbols that are possible at each position is contained in brackets, i.e. [AT] means that position can contain either an A or a T. Using this representation, what is the degenerate consensus sequence for this motif? In this case, we have two positions that seem to be able to contain three different nucleotides. For the sake of clarity, a common convention is to only include a base as a degenerate possibility if more than 25% of the input sequences include that base. In this example, that means that a base that is only present in one of the sequences should not be counted. Rewrite your degenerate representation using this convention: The other way to represent degenerate consensus sequences is to use specific characters (defined by IUPAC) to represent these sets of possibilities: SymbolDescriptionBases representedNumber of bases representedAAdenineA1CCytosineC1GGuanineG1TThymineT1UUracilU1WWeak hydrogen bondingA,T2SStrong hydrogen bondingG,C2MaMinoA,C2KKetoG,T2RpuRineA,G2YpYrimidineC,T2Bnot A (B comes after A)C,G,T3Dnot C (D comes after C)A,G,T3Hnot G (H comes after G)A,C,T3Vnot T (V comes after T)A,C,G3N or -any Nucleotide (not a gap)A,C,G,T4 Using this approach, write the representation of the motif with all the possible degenerate positions (don't filter out bases that only appear once in a position): Now write the representation of the motif with the cleaner definition of degenerate positions (do filter out bases that appear only once in a position): 2. Probability and positional weight matricesSo far in this lab, we have seen motif representations that are meant to be easily human-readable and interpretable. However, one issue with these representations is that they throw away quantitative information about the probability of each base at each position, and so we cannot use them for any more mathematical approaches to motif interpretation. 
One very common alternative representation that retains this information is the probability weight matrix (PWM), which is a matrix with 4 rows, one for each nucleotide, and a number of columns corresponding to the length of the motif. For example, the PWM representation of the six motifs from above (ACAGGAA, TGCGGAA, TGAGGAT, AGTGGAA, AACGGAA, ACAGGAT) is:NucleotidePos. 1 Probability (Observed Counts)Pos. 2 Probability (Observed Counts)Pos. 3 Probability (Observed Counts)Pos. 4 Probability (Observed Counts)Pos. 5 Probability (Observed Counts)Pos. 6 Probability (Observed Counts)Pos. 7 Probability (Observed Counts)A0.66 (4)0.166 (1)0.5 (3)0.0 (0)0.0 (0)1.0 (6)0.66 (4)C0.0 (0)0.33 (2)0.33 (2)0.0 (0)0.0 (0)0.0 (0)0.0 (0)G0.0 (0)0.5 (3)0.0 (0)1.0 (6)1.0 (6)0.0 (0)0.0 (0)T0.33 (2)0.0 (0)0.166 (1)0.0 (0)0.0 (0)0.0 (0)0.33 (2) Using this table, we can use a simple approach of finding how well a given putative motif sequence matches what we think the real motif is by just comparing it to this table and multiplying the probability at each base. For example, if we want to quantify how well the motif 'AGAGGAA' (which was our exact consensus sequence) matches, we just go through and multiply 0.66 \* 0.5 \* 0.5 \* 1.0 \* 1.0 \* 1.0 \* 0.66 = .1089. One major issue with using this approach is the fact that some of these cells contain '0.0' as their probability. Consider the motif 'CGAGGAA', which only differs from our exact consensus sequence by a single base pair. If we try to use the same quantification approach, we will compute 0.0 \* 0.5 \* 0.5 \* 1.0 \* 1.0 \* 1.0 \* 0.66 = 0.0. In other words, the fact that we had one position containing a nucleotide that was not observed in our reference set means that the probability of that motif, under this PWM, is 0. To avoid this issue, we can add a 'pseudocount' of 1 at every position for every nucleotide, yielding the following PWM:NucleotidePos. 1 Probability (Obs + Pseudocounts)Pos. 2 Probability (Obs + Pseudocounts)Pos. 3 Probability (Obs + Pseudocounts)Pos. 4 Probability (Obs + Pseudocounts)Pos. 5 Probability (Obs + Pseudocounts)Pos. 6 Probability (Obs + Pseudocounts)Pos. 7 Probability(Obs + Pseudocounts)A0.5 (5)0.2 (2)0.4 (4)0.1 (1)0.1 (1)0.7 (7)0.5 (5)C0.1 (1)0.3 (3)0.3 (3)0.1 (1)0.1 (1)0.1 (1)0.1 (1)G0.1 (1)0.4 (4)0.1 (1)0.7 (7)0.7 (7)0.1 (1)0.1 (1)T0.3 (3)0.1 (1)0.2 (2)0.1 (1)0.1 (1)0.1 (1)0.3 (3) Now if we try to compute the probability of observing 'CGAGGAA', we get 0.1 \* 0.4 \* 0.4 \* 0.7 \* 0.7 \* 0.7 \* 0.5 = 0.0027. What is the probability of observing a motif very unlike what we have seen, say 'CTCTTTG'? Generating positional weight matricesA further refinement to this idea is to correct these probabilities for the background distribution of bases in the genome you are interested in. Doing this, we can define positional weight matrices. To do this, after we have obtained the matrix of probabilities including pseudocounts (i.e. the table directly above this one), we divide each entry in each row by the background probability of observing the nucleotide corresponding to that row. In the naive case, we just use p(i) = 0.25 for each nucleotide i. This assumes an equal probability of observing any given nucleotide. Finally, a common transformation is to take the natural logarithm (ln, or log base e) of each of these background-corrected quantities (note that these are no longer probabilities). This is done so that in order to compute the score for a given sequence, the entries in each row can be added instead of multiplied together. 
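If it helps to see this as code, here is one possible way (a sketch, not part of the assignment) to build the pseudocount matrix from the six example sequences and score a candidate motif; the numbers match the tables above up to rounding.
###Code
# Sketch: probability matrix with pseudocounts, and a simple product-of-probabilities score.
seqs = ['ACAGGAA', 'TGCGGAA', 'TGAGGAT', 'AGTGGAA', 'AACGGAA', 'ACAGGAT']
bases = 'ACGT'
length = len(seqs[0])
# Count each base at each position, adding a pseudocount of 1, then normalize by 6 + 4.
counts = {b: [1 + sum(s[j] == b for s in seqs) for j in range(length)] for b in bases}
pwm = {b: [counts[b][j] / (len(seqs) + 4) for j in range(length)] for b in bases}

def score(seq):
    prob = 1.0
    for j, b in enumerate(seq):
        prob *= pwm[b][j]
    return prob

print(round(score('CGAGGAA'), 4))  # about 0.0027, as computed above
###Output
_____no_output_____
###Markdown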
In our example above, applying these transformations using the naive nucleotide distribution yields the following table:NucleotidePos. 1 Log-oddsPos. 2 Log-oddsPos. 3 Log-oddsPos. 4 Log-oddsPos. 5 Log-oddsPos. 6 Log-oddsPos. 7 Log-oddsA0.693-0.2230.470-0.916-0.9161.0300.693C-0.9160.1820.182-0.916-0.916-0.916-0.916G-0.9160.470-0.9161.0301.030-0.916-0.916T0.182-0.916-0.223-0.916-0.916-0.9160.182Now, the corrected probability of any given sequence can be computed by simply adding the entries corresponding to that sequence. If the score is greater than 0, the sequence is more likely to be a functional than a 'random' sequence, and if the score is less than 0, the reverse is true. This is why the column titles refer to the 'log-odds': this model represents the 'odds' or likelihood that a given sequence matches the motif. Compute the score for the exact consensus sequence 'AGAGGAA': It is worth noting that the human genome does not follow the naive distribution of an equal probability of observing each nucleotide. Instead, the distribution is roughly p(A) = 0.3, p(C) = 0.2, p(G) = 0.2, and p(T) = 0.3. Using this, we can recompute our positional weight matrix:NucleotidePos. 1 Log-oddsPos. 2 Log-oddsPos. 3 Log-oddsPos. 4 Log-oddsPos. 5 Log-oddsPos. 6 Log-oddsPos. 7 Log-oddsA0.510-0.4050.288-1.099-1.0990.8470.511C-0.6930.405-0.693-0.693-0.693-0.693-0.693G-0.6930.693-0.6931.2531.253-0.693-0.693T0.000-1.099-0.405-1.099-1.099-1.0990.000Now what is the score for the exact consensus sequence 'AGAGGAA'? 3. Information content and entropy One aspect of these PWMs that we have not yet addressed is the concept of how well they actually capture the motif, or how informative they actually are. In other words, we want to know how well a motif, as represented by a PWM, can discriminate between a real signal and background noise. To do so, we can take advantage of a very useful and powerful concept called the information content (IC) of a motif. This is a way of directly quantifying how informative a signal is, and applications of this concept can be found in a wide range of fields from computer encryption to machine learning to physics. In this case, we define the information content of each column $j$ in the PWM (i.e. each position in the motif) as $IC_j = 2 + \sum_{x=A,C,G,T} p_x log_2(p_x)$, where $p_x$ is the entry for nucleotide $x$ in that column. This means that a value of 2.0 is the most informative and a value of 0 is the least informative. Consider the following simple PWM:NucleotidePos. 1 ProbabilityPos. 2 ProbabilityPos. 3 ProbabilityA1.000.250.4C0.000.250.4G0.000.250.1T0.000.250.1 The IC for each column can be calculated: $IC_1 = 2 + 1.0 * log_2(1.0) + 0.0 + 0.0 + 0.0 = 2$$IC_2 = 2 + 0.25 * log_2(0.25) + 0.25 * log_2(0.25) + 0.25 * log_2(0.25) + 0.25 * log_2(0.25) = 2 + 0.25 * (-2) + 0.25 * (-2) + 0.25 * (-2) + 0.25 * (-2) = 0$$IC_3 = 2 + 0.4 * log_2(0.4) + 0.4 * log_2(0.4) + 0.1 * log_2(0.1) + 0.1 * log_2(0.1) = 2 + 0.4 * (-1.32) + 0.4 * (-1.32) + 0.1 * (-3.32) + 0.1 * (-3.32) = 0.27$So we see that the first position is maximally informative (intuitively, we know that it will always be an A), while the second position is minimally informative (each base has an exactly equal chance of occuring), and the third position is weakly informative (it is more likely to be an A or a C than a G or a T).Then, the IC for a motif can be calculated as the sum of the information contents of each column, so this motif would have an IC of 2.27. 
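The log-odds and information content calculations can be written the same way; this sketch (again not part of the assignment) reuses the `pwm`, `bases`, and `length` values built above and assumes the naive background of 0.25 per base.
###Code
# Sketch: log-odds against a uniform background, plus per-column information content.
import math

log_odds = {b: [math.log(pwm[b][j] / 0.25) for j in range(length)] for b in bases}

def log_odds_score(seq):
    # With log-odds entries, the score of a sequence is the sum of its per-position entries.
    return sum(log_odds[b][j] for j, b in enumerate(seq))

print(round(log_odds_score('AGAGGAA'), 3))  # roughly 5.4 for the consensus sequence

def information_content(column_probs):
    # Terms with p = 0 are treated as contributing 0, as in the worked examples above.
    return 2 + sum(p * math.log2(p) for p in column_probs if p > 0)

print(information_content([1.0, 0.0, 0.0, 0.0]))      # 2.0, maximally informative
print(information_content([0.25, 0.25, 0.25, 0.25]))  # 0.0, minimally informative
###Output
_____no_output_____
###Markdown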
Similarly to how we wanted to generate positional weight matrices to correct for the background nucleotide distributions, we may also want to account for the background nucleotide probabilities when we look at the information content in a motif. There is a related concept called relative entropy that allows us to do this. Entropy measures the 'randomness' of a signal, and in that sense is the opposite of information. Relative entropy measures this 'randomness' or 'disorderedness' of a given motif relative to the background distribution. In other words, relative entropy measures how different your motif is from what you would expect given the background distribution; thus, if a motif is very informative, it will have a high relative entropy. The equation for relative entropy is given as $RE = \sum_{x=A,C,G,T} p_x log_2(p_x/Q_x)$, where $Q_x$ is the background probability of the nucleotide $x$. Thus, if your PWM exactly matches the background probability Q, the relative entropy of your PWM will be 0 (because $p_x / Q_x = 1$ and $log_2(1) = 0$); otherwise, this quantity can be arbitrarily high or low. Aside: creating motif logos A useful way of representing motifs is using what are known as sequence logos, which we saw in the prelab lecture. These logos scale each nucleotide at each position to represent their information content. An easy way to create these logos is to use the website http://weblogo.berkeley.edu/logo.cgi. We will practice this with the set of 6 sequences we were looking at earlier. The general approach is to upload a set of sequences, either by copy and pasting or by uploading the file. These sequences can be provided in fasta format, as we have done here, or as a plain text list, where each line is the same length, as we have in question 4 on the homework. Here, we will just copy and paste the 6 sequences from this box:
###Code
>seq1
ACAGGAA
>seq2
TGCGGAA
>seq3
TGAGGAT
>seq4
AGTGGAA
>seq5
AACGGAA
>seq6
ACAGGAT
###Output
_____no_output_____
###Markdown
Then, navigate to the website and paste those sequences into the box marked 'multiple sequence aligment'. Then, simply press the 'create logo' button, and you should get a sequence logo! Save this file and upload it into the images/ folder of this assignment. 4. Motif finding approachesAs we saw in the lecture, there are several different computational approaches that can be used to identify enriched motifs in a given set of sequences, including exact counting, iterative approaches like Gibbs' sampling and expectation maximization, and differential enrichment approaches. For this section of the lab, we will just have some practice using the most common tool for motif enrichment in relatively small datasets, MEME, which is based on expectation maximization. We will analyze the file called 'selex_seqs.fasta', in the inclass_data/ folder. This fasta-formatted file contains sequences from a SELEX-like experiment, where sequences were pulled down based on their affinity with some transcription factor. We will use the online MEME tool to do this. You can either download this file to your computer (recommended) or copy and paste it to upload it to MEME, but make sure you get the full file if you do this. Navigate to http://meme-suite.org/tools/meme, and under the input the primary sequences header, select whichever approach you are using to upload the sequences. Under select the site distribution, choose 'one occurrence per sequence', because this file comes from a SELEX-like experiment and so each sequence was experimentally found to bind to some transcription factor. Leave the value of 3 for how many motifs MEME should find, and under advanced options, change the maximum width of the motifs to 20bp to speed up the computation. This will take some time to finish running, so make sure to save the link, or you can provide an email address that they will mail the link to. Make sure to submit this job before starting the homework as some of the questions will be about these results! Homework problems: motif practice Question 1: Consider the following probability weight matrix:NucleotidePos. 1Pos. 2Pos. 3Pos. 4Pos. 5Pos. 6Pos. 7Pos. 8A0.010.10.970.950.50.050.80.4C0.030.050.010.010.10.60.10.08G0.950.050.010.030.10.050.050.02T0.010.80.010.010.30.30.050.5What is the information content of positions 3 and 5 in this matrix? (1 point) Question 2: Using the PWM given above, what is the exact consensus sequence and the degenerate consensus sequence (using either the regular expression or IUPAC characters)? For the degenerate sequence, only count a nucleotide as a degenerate possibility if it has a probability of more than 0.25. (1 point) Question 3 (short answer): Based on this consensus sequence, do you expect the relative entropy of this probability matrix to be higher when compared to the naive nucleotide distribution (equal probability of any nucleotide) or to the human genome background probability (A and T are more common than G and C)? (1 point) For the next two questions, we will be using the following set of sequences:
###Code
TGGGAA
TGGGAA
TGGGAA
TGGGAA
TGGGAA
TGAGAA
TGGGAA
TGGGAA
TGGGAA
TGGGAG
TGAGAA
TGAGAA
TGTGAA
TGGGAA
TGGGAG
TGGGAG
CGGGAA
TGGGAT
###Output
_____no_output_____ |
_utilities.ipynb | ###Markdown
Check Words vs. Enchant Spellcheck
###Code
import os
from nltk.tokenize import word_tokenize
import regex as re
path = ".\corpus"
files = []
words = set()
az = re.compile(r"^[a-zA-Z]+$")
files = [f for f in os.listdir(path) if f.endswith(".txt")]
print(f"Loading {len(files)} files...")
for file in files:
word_count = len(words)
with open(os.path.join(path, file), "r", encoding="utf-8") as f:
text = f.read()
for word in word_tokenize(text):
if not az.search(word):
continue
words.add(word)
print(f"Added {len(words) - word_count} words from {file}.")
print("\n** DONE! **")
print(f"Found {len(words)} unique words.")
import enchant
d = enchant.Dict("en_US")
words = list(words)
words.sort()
print(f"Checking {len(words)} words...")
ok_count = 0
nf_count = 0
for word in words:
    print(f"(?) {word} ...", end="")
    if not d.check(word):
        # Words the dictionary does not recognize are appended to words.txt for review.
        with open("words.txt", "a", encoding="utf-8") as f:
            f.write(f"{word}\n")
        print("Not Found")
        nf_count += 1
        continue
    print("OK")
    ok_count += 1
total = ok_count + nf_count
print("\n** DONE! **")
print(f"Could not find spelling for {nf_count} words out of {total} (corpus {(100 * ok_count / total):.0f}% ok).")
###Output
_____no_output_____
###Markdown
CHOMP v2__Misc. Utilities____by Sean Gilleran__ __Last updated November 30__, __2021__ [https://github.com/seangilleran/chomp2](https://github.com/seangilleran/chomp2) Check Language (With Spacy)
###Code
import os
import en_core_web_sm
from spacy.language import Language
from spacy_langdetect import LanguageDetector
path = "./corpus"
# Load language detector.
@Language.factory("language_detector")
def language_detector(nlp, name):
return LanguageDetector()
nlp = en_core_web_sm.load()
nlp.add_pipe("language_detector", last=True)
for file in [f for f in os.listdir(path) if f.endswith(f".txt")]:
with open(os.path.join(path, file), "r", encoding="utf-8") as f:
text = f.read()
# Check language.
lang = nlp(text)._.language
language = lang["language"]
score = lang["score"]
print(f"{language.capitalize()} ({(score * 100):.0f}%): {file}")
with open("lang_check.csv", "a", encoding="utf-8") as f:
f.write(f"{language},{score},{file}\n")
###Output
_____no_output_____
###Markdown
Check Language (With Tag)
###Code
import json
import os
path = "./meta"
for file in [f for f in os.listdir(path) if f.endswith(".json")]:
with open(os.path.join(path, file), "r", encoding="utf-8") as f:
collection = json.loads(f.read())
for item in collection["items"]:
lang = item["language"]
for file in item["files"]:
with open("lang_check.csv", "a", encoding="utf-8") as f:
f.write(f"{lang},{file['name']},{file['id']}\n")
###Output
_____no_output_____
###Markdown
Convert PDF to TXT
###Code
# TODO
###Output
_____no_output_____ |
courses/data-engineering/demos/tweet_nlp_beam_notebook/TweetPipeline.ipynb | ###Markdown
Apache Beam Notebooks for Streaming NLP on Real-time Tweets In this demo we will walk through setting up a local client to gather tweets using the `tweepy` API. After that we will use the interactive runner in Apache Beam notebooks to build a pipeline to do natural language processing on tweets in real-time. One of the advantages of using the interactive runner is that we can explore the intermediate outputs for our pipeline while building the pipeline! At the end of the notebook we will turn the relevant parts of the notebook into a script so that we can deploy our streaming pipeline on Cloud Dataflow. First, let us look at the script we will be using to gather our tweets and publish them to Pub/Sub.
###Code
# NoExport
!cat tweet-setup.sh
###Output
_____no_output_____
###Markdown
After installing some packages, we will run the `tweets-gatherer.py` script. This script will not be covered explicitly in the demo, but it is recommended to glance through the code and see how the Tweepy API and Pub/Sub client are being used. Note that you need to have a Twitter Developer Account to run this script. The free version of the account will suffice and you can sign up here. We need the the Twitter API Consumer Key/Secret and the Twitter API Access Key/Secret for our client to be able to search and pull tweets in real time. These tweets will be published to a Pub/Sub topic in your project created by the script above.Before moving forward, insert your Twitter Developer API keys, open a terminal (File > New > Terminal) and run the command `bash tweet-setup.sh`. If you already have a Pub/Sub topic named `tweet-nlp-demo` or a BigQuery dataset named `tweet_nlp_demo` then you can ignore the corresponding error messages. Before we begin to build our pipeline, we need to install a couple of Python client libraries. After doing this, you should reset the notebook kernel (Kernel > Restart Kernel) so that the packages are properly picked up. It may take a few minutes to install the packages.
###Code
# NoExport
%pip install google-cloud-translate google-cloud-language
###Output
_____no_output_____
###Markdown
We will start by importing the packages that we need for the notebook. The first code block contains packages that we will need when we submit the pipeline to Dataflow, so we will want to include the code cell in the exported script. **Before running the cell, be sure to change the Project ID to your own**. The rest of the variables (`OUTPUT_DATASET`, `OUTPUT_TABLE_UNAGG`,`OUTPUT_TABLE_AGG`, and `INPUT_TOPIC`) refer to objects created within the lab.
###Code
import argparse, os, json, logging
from datetime import datetime, timedelta
import json
import pandas as pd
import apache_beam as beam
from apache_beam.transforms import trigger
from apache_beam.io.gcp.internal.clients import bigquery
from apache_beam.options.pipeline_options import GoogleCloudOptions, PipelineOptions, SetupOptions, StandardOptions
import google.auth
from google.cloud import language_v1
from google.cloud.language_v1 import enums
from google.cloud import translate_v2 as translate
print('Beam Version:', beam.__version__)
PROJECT_ID = 'your-project-id-here' #TODO: CHANGE PROJECT ID
OUTPUT_DATASET = 'tweet_nlp_demo'
OUTPUT_TABLE_UNAGG = 'processed_tweet_data'
OUTPUT_TABLE_AGG = 'aggregated_tweet_data'
INPUT_TOPIC = "projects/{}/topics/tweet-nlp-demo".format(PROJECT_ID)
###Output
_____no_output_____
###Markdown
However, the next cell contains code to import the interactive runner we will use to explore the pipeline within the notebook. We do not want to include this in the final script so we will annotate it as such.
###Code
# NoExport
from apache_beam.runners.interactive import interactive_runner
import apache_beam.runners.interactive.interactive_beam as ib
###Output
_____no_output_____
###Markdown
Next we define our pipeline options. Since we wish to deal with data in real-time, we will set the streaming option to `True` to ensure that the pipeline runs indefinitely. The behavior differs slightly when we wish to use the interactive runner, but we will address that in just a moment.
###Code
# Setting up the Beam pipeline options.
options = PipelineOptions()
# Sets the pipeline mode to streaming, so we can stream the data from PubSub.
options.view_as(StandardOptions).streaming = True
# Sets the project to the default project in your current Google Cloud environment.
# The project will be used for creating a subscription to the PubSub topic.
_, options.view_as(GoogleCloudOptions).project = google.auth.default()
###Output
_____no_output_____
###Markdown
Now we set up our interactive runner. Note that we're setting a capture duration of 60 seconds. Instead of waiting indefinitely for more data to come in, we will collect 60 seconds worth of data and load it into an in-memory PCollection. That way we can visualize the results one transform at a time while building our pipeline. When we run the pipeline in Dataflow, we will want to run the pipeline indefinitely.
###Code
# NoExport
ib.options.capture_duration = timedelta(seconds=60)
p = beam.Pipeline(interactive_runner.InteractiveRunner(), options=options)
###Output
_____no_output_____
###Markdown
**DO NOT RUN THE NEXT CELL IN THE NOTEBOOK!!!** The next cell defines all of the options for running the pipeline on Dataflow and we do not want to run this in the notebook. The cell is left here (uncommented) so that it will properly be included when we run `nbconvert` after exploring our pipeline.
###Code
from apache_beam.runners import DataflowRunner
options.view_as(StandardOptions).runner = 'DataflowRunner'
google_cloud_options = options.view_as(GoogleCloudOptions)
google_cloud_options.job_name = 'tweet-nlp-pipeline'
google_cloud_options.staging_location = 'gs://{}/binaries'.format(PROJECT_ID)
google_cloud_options.temp_location = 'gs://{}/temp'.format(PROJECT_ID)
google_cloud_options.region = 'us-central1'
p = beam.Pipeline(DataflowRunner(), options=options)
###Output
_____no_output_____
###Markdown
Now we are ready to start building our pipeline! We start by reading in tweets from our Pub/Sub topic using the `ReadFromPubSub` connector. After that we will use the `json.loads` function to parse the incoming JSON blob containing the text of the tweet and its attributes.
###Code
# So that Pandas Dataframes do not truncate data...
pd.set_option('display.max_colwidth', -1)
tweets = p | 'ReadTweet' >> beam.io.gcp.pubsub.ReadFromPubSub(topic=INPUT_TOPIC) | beam.Map(json.loads)
###Output
_____no_output_____
###Markdown
What we did in the previous cell was add two transformations to our pipeline's DAG (Directed Acyclic Graph). We have not processed any data yet! We can use `ib.show` to ingest data from our Pub/Sub topic for 60 seconds (per our `capture_duration` option above) and store the data in an in-memory PCollection; we then apply `json.loads` to the elements of the PCollection and can visualize the results via Pandas. **WARNING:** The incoming tweets are (unfiltered) tweets containing the search term "pizza". Though the search term was chosen to be as uncontroversial as possible, anything could be in these tweets. Of course, this includes possibly very offensive material.
###Code
# NoExport
ib.show(tweets)
###Output
_____no_output_____
###Markdown
Now we can see the JSON blobs sent to Pub/Sub by the Twitter API. However we are only going to want certain properties of the messages for our goal. Let's take the "text", "created_at" and "source" fields for each message and pack them into a dictionary. We will create a custom function `parse_fields` and apply it in our pipeline once again using `beam.Map`.
###Code
def parse_fields(tweet):
trim = {}
trim['text'] = tweet['messages'][0]['data']['text']
trim['created_at'] = tweet['messages'][0]['data']['created_at']
trim['source']=tweet['messages'][0]['data']['source']
return trim
parsed_tweets = tweets | "Parse_Tweet" >> beam.Map(parse_fields)
###Output
_____no_output_____
###Markdown
Let us quickly use `ib.show` again to see the results of our parsing. Note that the output of the previous steps is still in an in-memory PCollection, so we do not have to wait a minute for data to come in through the Pub/Sub IO Connection again.
###Code
# NoExport
ib.show(parsed_tweets)
###Output
_____no_output_____
###Markdown
Note that the dictionaries are parsed by the interactive runner so that when we visualize the data it is presented as a table. Before we move on, we can use the `ib.show_graph` to visualize our pipeline.
###Code
# NoExport
ib.show_graph(p)
###Output
_____no_output_____
###Markdown
We can see the transforms (in boxes) with the cell numbers corresponding to them. In the circles between the tranforms, we can see the names of the corresponding PCollections. Note that between the `ReadTweet` and the `Map(loads)` transforms the name was generated by Beam since we did not assign a name ourselves.Now we are ready to begin applying machine learning to the data. The NLP (Natural Language Processing) API only supports certain languages for sentiment analysis. So, what we will do is first use the Translation API to detect the language. We will create a Python function, `detect_language`, to call the Translation API and add it to our pipeline once again using `beam.Map`.
###Code
def detect_language(tweet):
translate_client = translate.Client()
text = tweet['text']
result = translate_client.detect_language(text)
tweet['language'] = result['language']
tweet['lang_confidence'] = result['confidence']
return tweet
lang_tweets = parsed_tweets | "Detect_Language" >> beam.Map(detect_language)
###Output
_____no_output_____
###Markdown
Let us now detect the language of our tweets. Note that we will also record the confidence in the API's predictions ('lang_confidence') for later reference.
###Code
# NoExport
ib.show(lang_tweets)
###Output
_____no_output_____
###Markdown
Now we are ready to perform sentiment analysis on our tweets. We will invoke the NLP API to analyze the sentiment of tweets involving the term "pizza". Note that the translation of "pizza" is "pizza" in many languages, including French, German, Italian, Portuguese, and Spanish. These are languages that are supported by the NLP API, so we will filter based on the language detected by the Translation API. In the case that we are not working with one of these languages, we will assign a `None` value to the score and magnitude fields. As in the previous steps, we will invoke the API using a function and then call the function in our pipeline using `beam.Map`.
###Code
def analyze_sentiment(tweet):
client = language_v1.LanguageServiceClient()
type_ = enums.Document.Type.PLAIN_TEXT
if tweet['language'] in ['en', 'fr', 'de', 'it', 'pt', 'es']:
language = tweet['language']
document = {"content": tweet['text'], "type": type_, "language": language}
encoding_type = enums.EncodingType.UTF8
response = client.analyze_sentiment(document, encoding_type=encoding_type)
tweet['score'] = response.document_sentiment.score
tweet['magnitude'] = response.document_sentiment.magnitude
else:
tweet['score'] = None
tweet['magnitude'] = None
return tweet
analyzed_tweets = lang_tweets | "Analyze_Tweets" >> beam.Map(analyze_sentiment)
###Output
_____no_output_____
###Markdown
And as before, let us take a look into our processed tweets by using `ib.show`.
###Code
# NoExport
ib.show(analyzed_tweets, include_window_info=True)
###Output
_____no_output_____
###Markdown
We now have all of the information that we need to start performing our aggregations. However, there's one more thing we should address first. The date-timestamp (DTS) that Dataflow uses by default is the Pub/Sub publication time (when using the `ReadFromPubSub` connector). However, we would rather sort the tweets in the context of when they are published to Twitter. Above we can see that the `event_time` field and the `created_at` times are slightly different. We can replace the timestamp with the one in the `created_at` field.
###Code
def custom_timestamp(tweet):
ts = datetime.strptime(tweet["created_at"], "%Y-%m-%dT%H:%M:%S")
return beam.window.TimestampedValue(tweet, ts.timestamp())
analyzed_tweets_w_dts = analyzed_tweets | 'CustomTimestamp' >> beam.Map(custom_timestamp)
# NoExport
ib.show(analyzed_tweets_w_dts, include_window_info=True)
###Output
_____no_output_____
###Markdown
In our example here we will group our data into sliding windows that are 30 seconds long and start every 10 seconds. We do this by using the `beam.WindowInto` transform and specifying the window type, length, and offset using `beam.window.SlidingWindows`.
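To get a feel for what sliding windows of size 30 with period 10 mean, here is a small standalone sketch (illustration only, not used by the pipeline) that lists the windows containing a given timestamp; each element ends up in size/period = 3 windows.
###Code
# NoExport
# Hypothetical helper: which [start, end) sliding windows of the given size/period contain ts?
def sliding_windows_for(ts, size=30, period=10):
    latest_start = int(ts // period) * period
    starts = range(latest_start - size + period, latest_start + 1, period)
    return [(s, s + size) for s in starts if s <= ts < s + size]

print(sliding_windows_for(65))  # [(40, 70), (50, 80), (60, 90)]
###Output
_____no_output_____
###Markdown
In the pipeline itself, the windowing is a single transform: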
###Code
windowed_tweets = analyzed_tweets_w_dts | "Window" >> beam.WindowInto(beam.window.SlidingWindows(30, 10))
###Output
_____no_output_____
###Markdown
What does this actually do to our data in our PCollection? The best thing to do here is go ahead and take a peek into the output of the pipeline up to this point using `ib.show`. We will set the `include_window_info` flag to `True` so that we can peek into how windows are assigned.
###Code
# NoExport
ib.show(windowed_tweets, include_window_info=True)
###Output
_____no_output_____
###Markdown
Did you notice something above? Every tweet is now triplicated, with one entry for each window it belongs to. Another thing to notice is that we have simply *assigned* the windows at this point; the data has not been grouped into windows yet. We want to measure sentiment over time depending on the source of the tweet. To do this, let us create a "key-value" pair for each tweet. Strictly speaking, we do not have a key-value pair construction in Python, but Beam will treat the first value of an ordered pair as a "key" and the second value of the ordered pair as the "value". The key will be the source of the tweet and the value will be a dictionary of the score and magnitude of the tweet. We will be using both of these data points in the next transform. We follow a similar pattern from before: we create a Python function to perform our element-wise computation. However, you may notice something new here. We `yield` instead of `return` at the end of our function. We do this because we want to return a generator instead of a single element. But why? Note that `create_source_key` does not return anything in the case that we did not assign a score above! So we either return nothing or a generator with a single element. We then add the transform to the pipeline using `beam.FlatMap`. `FlatMap` is perfect for any non-1:1 transform such as `create_source_key`; `FlatMap` expects the function being applied to return a generator and it will manage cycling through the generator when the PCollection is passed to the next transform.
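Before applying this to the tweets, here is a tiny standalone illustration (hypothetical, run on the local direct runner and marked NoExport) of how `beam.FlatMap` handles a function that yields zero or one elements.
###Code
# NoExport
# Toy example: FlatMap flattens whatever the function yields, so odd numbers simply disappear.
def keep_evens(x):
    if x % 2 == 0:
        yield x

with beam.Pipeline() as toy_pipeline:
    (toy_pipeline
     | beam.Create([1, 2, 3, 4])
     | beam.FlatMap(keep_evens)
     | beam.Map(print))  # prints 2 and 4
###Output
_____no_output_____
###Markdown
The same pattern applied to the tweets: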
###Code
def create_source_key(tweet):
if tweet['score']:
yield (tweet['source'], {'score': tweet['score'], 'magnitude': tweet['magnitude']})
prepped_tweets = windowed_tweets | "Create_Source_Key" >> beam.FlatMap(create_source_key)
# NoExport
ib.show(prepped_tweets)
###Output
_____no_output_____
###Markdown
Now we are ready to perform our aggregation. We will compute a weighted average of scores per window and per source. We will use the magnitude as our weight for the weighted average. However, there is not a built-in transform for performing this task! We will create our own custom combiner by extending `beam.CombineFn`. We need to define four functions when extending `beam.CombineFn` to create our custom combiner:1. `create_accumulator`: We initialize the information we will be passing from node to node. In our case we have an ordered pair (sum, count) where sum is the running sum of weighted scores.2. `add_input`: When we wish to include a new data point, how is it incorporated? We will add the magnitude times the score to the sum and increment the count by 1.3. `merge_accumulators`: We will be computing the accumulators where they live in the cluster, so what do we do when we need to shuffle data for the final aggregation? This is why we are passing ordered pairs instead of averages: we can simply combine the sums and the counts.4. `extract_output`: This is the function that computes the final output. We compute the final weighted average by dividing the sum by the count. However, we need to anticipate the case that the count is 0 (as initially set). In this case, we will set the score to be `NaN`. Once we have created our custom combiner, we can apply it in our pipeline by calling `beam.CombinePerKey`.
###Code
class WeightedAverageFn(beam.CombineFn):
def create_accumulator(self):
return (0.0, 0)
def add_input(self, sum_count, input):
sum, count = sum_count
return sum + input['score'] * input['magnitude'], count + 1
def merge_accumulators(self, accumulators):
sums, counts = zip(*accumulators)
return sum(sums), sum(counts)
def extract_output(self, sum_count):
sum, count = sum_count
return {'score': sum / count, 'count': count} if count else {'score':float('NaN'), 'count': 0}
aggregated_tweets = prepped_tweets | "Aggregate_Weighted_Score" >> beam.CombinePerKey(WeightedAverageFn())
###Output
_____no_output_____
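###Markdown
To make the four methods concrete, here is a small hand-run of the combiner lifecycle outside of any pipeline (an illustration only, marked NoExport; the input values are made up).
###Code
# NoExport
# Exercise the combiner by hand: accumulate two inputs, merge with an empty accumulator, extract.
fn = WeightedAverageFn()
acc = fn.create_accumulator()
acc = fn.add_input(acc, {'score': 0.5, 'magnitude': 2.0})
acc = fn.add_input(acc, {'score': -0.25, 'magnitude': 1.0})
merged = fn.merge_accumulators([acc, fn.create_accumulator()])
print(fn.extract_output(merged))  # {'score': 0.375, 'count': 2}
###Output
_____no_output_____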
###Markdown
Let us take a quick peek at the output of our aggregations
###Code
# NoExport
ib.show(aggregated_tweets, include_window_info=True)
###Output
_____no_output_____
###Markdown
We're almost there! Let us just clean up our output to put it into a more convenient form for loading into BigQuery.
###Code
def parse_aggregation(agg_tweets):
result = {}
result['source'] = agg_tweets[0]
result['score'] = agg_tweets[1]['score']
result['count'] = agg_tweets[1]['count']
return result
parsed_aggregated_tweets = aggregated_tweets | "Parse_Aggregated_Results" >> beam.Map(parse_aggregation)
# NoExport
ib.show(parsed_aggregated_tweets,include_window_info=True)
###Output
_____no_output_____
###Markdown
We have created all of the transforms for our pipeline and we are ready to start analyzing and processing the entire real-time stream (versus working with a small in-memory PCollection). We will wrap up by defining two transforms to load data into BigQuery. We will load the aggregated tweet data (`parsed_aggregated_tweets`) and the unaggregated, analyzed tweets to a different table (`analyzed_tweets`). Keeping the unaggregated, analyzed tweets will allow us to go back and further analyze the individual tweets if another question arises without having to reprocess. Of course, we are paying to store the tweets in BigQuery, but this is much cheaper than having to reprocess.
###Code
table_spec_unagg = bigquery.TableReference(
projectId = PROJECT_ID,
datasetId = OUTPUT_DATASET,
tableId= OUTPUT_TABLE_UNAGG)
table_schema_unagg ='text:STRING, created_at:TIMESTAMP, source:STRING, language:STRING, lang_confidence:FLOAT64, score:FLOAT64, magnitude:FLOAT64'
bq_output_unagg = analyzed_tweets | 'WriteToBQ_Unagg'>> beam.io.WriteToBigQuery(table_spec_unagg,
schema=table_schema_unagg,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
table_spec_agg = bigquery.TableReference(
projectId = PROJECT_ID,
datasetId = OUTPUT_DATASET,
tableId= OUTPUT_TABLE_AGG)
table_schema_agg ='source:STRING, score:FLOAT64, count:INT64, window_start:TIMESTAMP'
bq_output_agg = parsed_aggregated_tweets | 'WriteToBQ_Agg'>> beam.io.WriteToBigQuery(table_spec_agg,
schema=table_schema_agg,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
###Output
_____no_output_____
###Markdown
Now we can finally go back and look at our completed graph. Note that by applying `bq_output_unagg` to `analyzed_tweets` we have created a branch in the pipeline.
###Code
# NoExport
ib.show_graph(p)
###Output
_____no_output_____
###Markdown
Everything is ready for deploying to Dataflow! We will use the `nbconvert` tool to export this Jupyter Notebook into a Python script, so we can execute the script in other environments without having to install a tool to run notebooks. The cells that were flagged as `NoExport` will not be included in the script. These were cells that used the interactive runner or cells used to work within the notebook environment that we don't need when submitting to Dataflow.The final cell of the notebook includes the `p.run()` call that we need to execute the pipeline on Dataflow. You do not need to run that cell within the notebook.
###Code
# NoExport
!jupyter nbconvert --to script --RegexRemovePreprocessor.patterns="['# NoExport']" TweetPipeline.ipynb
###Output
_____no_output_____
###Markdown
Let us go ahead and submit the job to Dataflow! We will do this by executing the Python script we just created. After you run the cell, be sure to check out the job running in Dataflow and the output in your BigQuery dataset.
###Code
# NoExport
!pip install apache_beam google-cloud-language google-cloud-translate google-apitools
!echo "google-cloud-translate==2.0.1" > requirements.txt
!python3 TweetPipeline.py --save_main_session --requirements_file requirements.txt
# Don't run this cell within the notebook!
logging.getLogger().setLevel(logging.INFO)
p.run()
###Output
_____no_output_____
###Markdown
Apache Beam Notebooks for Streaming NLP on Real-time Tweets In this demo we will walk through setting up a local client to gather tweets using the `tweepy` API. After that we will using the interactive runner in Apache Beam notebooks to build a pipeline to do natural language processing on tweets in real-time. One of the advantages of using the interactive runner is that we can explore the intermediate outputs for our pipeline while building the pipeline!At the end of the notebook we will turn the relevant parts of the notebook into a script where we can deploy our streaming pipeline on Cloud Dataflow.First, let us look at the script we will be using to gather our tweets and publish them to Pub/Sub.
###Code
# NoExport
!cat tweet-setup.sh
###Output
_____no_output_____
###Markdown
After installing some packages, we will run the `tweets-gatherer.py` script. This script will not be covered explicitly in the demo, but it is recommended to glance through the code and see how the Tweepy API and Pub/Sub client are being used. Note that you need to have a Twitter Developer Account to run this script. The free version of the account will suffice and you can sign up here. We need the the Twitter API Consumer Key/Secret and the Twitter API Access Key/Secret for our client to be able to search and pull tweets in real time. These tweets will be published to a Pub/Sub topic in your project created by the script above.Before moving forward, insert your Twitter Developer API keys, open a terminal (File > New > Terminal) and run the command `bash tweet-setup.sh`. If you already have a Pub/Sub topic named `tweet-nlp-demo` or a BigQuery dataset named `tweet_nlp_demo` then you can ignore the corresponding error messages. Before we begin to build our pipeline, we need to install a couple of Python client libraries. After doing this, you should reset the notebook kernel (Kernel > Restart Kernel) so that the packages are properly picked up. It may take a few minutes to install the packages.
###Code
# NoExport
%pip install google-cloud-translate google-cloud-language
###Output
_____no_output_____
###Markdown
We will start by importing the packages that we need for the notebook. The first code block contains packages that we will need when we submit the pipeline to Dataflow, so we will want to include the code cell in the exported script. **Before running the cell, be sure to change the Project ID to your own**. The rest of the variables (`OUTPUT_DATASET`, `OUTPUT_TABLE_UNAGG`,`OUTPUT_TABLE_AGG`, and `INPUT_TOPIC`) refer to objects created within the lab.
###Code
import argparse, os, json, logging
from datetime import datetime, timedelta
import json
import pandas as pd
import apache_beam as beam
from apache_beam.transforms import trigger
from apache_beam.io.gcp.internal.clients import bigquery
from apache_beam.options.pipeline_options import GoogleCloudOptions, PipelineOptions, SetupOptions, StandardOptions
import google.auth
from google.cloud import language_v1
from google.cloud.language_v1 import enums
from google.cloud import translate_v2 as translate
print('Beam Version:', beam.__version__)
PROJECT_ID = 'your-project-id-here' #TODO: CHANGE PROJECT ID
OUTPUT_DATASET = 'tweet_nlp_demo'
OUTPUT_TABLE_UNAGG = 'processed_tweet_data'
OUTPUT_TABLE_AGG = 'aggregated_tweet_data'
INPUT_TOPIC = "projects/{}/topics/tweet-nlp-demo".format(PROJECT_ID)
###Output
_____no_output_____
###Markdown
However, the next cell contains code to import the interactive runner we will use to explore the pipeline within the notebook. We do not want to include this in the final script so we will annotate it as such.
###Code
# NoExport
from apache_beam.runners.interactive import interactive_runner
import apache_beam.runners.interactive.interactive_beam as ib
###Output
_____no_output_____
###Markdown
Next we define our pipeline options. Since we wish to deal with data in real-time, we will set the streaming option to `True` to ensure that the pipeline runs indefinitely. The behavior differs slightly when we wish to use the interactive runner, but we will address that in just a moment.
###Code
# Setting up the Beam pipeline options.
options = PipelineOptions()
# Sets the pipeline mode to streaming, so we can stream the data from PubSub.
options.view_as(StandardOptions).streaming = True
# Sets the project to the default project in your current Google Cloud environment.
# The project will be used for creating a subscription to the PubSub topic.
_, options.view_as(GoogleCloudOptions).project = google.auth.default()
###Output
_____no_output_____
###Markdown
Now we set up our interactive runner. Note that we're setting a capture duration of 60 seconds. Instead of waiting indefinitely for more data to come in, we will collect 60 seconds worth of data and load it into an in-memory PCollection. That way we can visualize the results one transform at a time while building the pipeline. When we run the pipeline in Dataflow, we will want to run the pipeline indefinitely.
###Code
# NoExport
ib.options.capture_duration = timedelta(seconds=60)
p = beam.Pipeline(interactive_runner.InteractiveRunner(), options=options)
###Output
_____no_output_____
###Markdown
**DO NOT RUN THE NEXT CELL IN THE NOTEBOOK!!!** The next cell defines all of the options for running the pipeline on Dataflow and we do not want to run this in the notebook. The cell is left here (uncommented) so that it will properly be included when we run `nbconvert` after exploring our pipeline.
###Code
from apache_beam.runners import DataflowRunner
options.view_as(StandardOptions).runner = 'DataflowRunner'
google_cloud_options = options.view_as(GoogleCloudOptions)
google_cloud_options.job_name = 'tweet-nlp-pipeline'
google_cloud_options.staging_location = 'gs://{}/binaries'.format(PROJECT_ID)
google_cloud_options.temp_location = 'gs://{}/temp'.format(PROJECT_ID)
google_cloud_options.region = 'us-central1'
p = beam.Pipeline(DataflowRunner(), options=options)
###Output
_____no_output_____
###Markdown
Now we are ready to start building our pipeline! We start by reading in tweets from our Pub/Sub topic using the `ReadFromPubSub` connector. After that we will use the `json.loads` function to parse the incoming JSON blob containing the text of the tweet and its attributes.
###Code
# So that Pandas Dataframes do not truncate data...
pd.set_option('display.max_colwidth', -1)
tweets = p | 'ReadTweet' >> beam.io.gcp.pubsub.ReadFromPubSub(topic=INPUT_TOPIC) | beam.Map(json.loads)
###Output
_____no_output_____
###Markdown
What we did in the previous cell was add two transformations to our pipeline's DAG (Directed Acyclic Graph). We have not processed any data yet! We can use `ib.show` to ingest data from our Pub/Sub topic for 60 seconds (per our `capture_duration` option above) and store the data in an in-memory PCollection; we then apply `json.loads` to the elements of the PCollection and can visualize the results via Pandas. **WARNING:** The incoming tweets are (unfiltered) tweets containing the search term "pizza". Though the search term was chosen to be as uncontroversial as possible, anything could be in these tweets. Of course, this includes possibly very offensive material.
###Code
# NoExport
ib.show(tweets)
###Output
_____no_output_____
###Markdown
Now we can see the JSON blobs sent to Pub/Sub by the Twitter API. However we are only going to want certain properties of the messages for our goal. Let's take the "text", "created_at" and "source" fields for each message and pack them into a dictionary. We will create a custom function `parse_fields` and apply it in our pipeline once again using `beam.Map`.
###Code
def parse_fields(tweet):
trim = {}
trim['text'] = tweet['messages'][0]['data']['text']
trim['created_at'] = tweet['messages'][0]['data']['created_at']
trim['source']=tweet['messages'][0]['data']['source']
return trim
parsed_tweets = tweets | "Parse_Tweet" >> beam.Map(parse_fields)
###Output
_____no_output_____
###Markdown
Let us quickly use `ib.show` again to see the results of our parsing. Note that the output of the previous steps is still in an in-memory PCollection, so we do not have to wait a minute for data to come in through the Pub/Sub IO Connection again.
###Code
# NoExport
ib.show(parsed_tweets)
###Output
_____no_output_____
###Markdown
Note that the dictionaries are parsed by the interactive runner so that when we visualize the data it is presented as a table. Before we move on, we can use the `ib.show_graph` to visualize our pipeline.
###Code
# NoExport
ib.show_graph(p)
###Output
_____no_output_____
###Markdown
We can see the transforms (in boxes) with the cell numbers corresponding to them. In the circles between the transforms, we can see the names of the corresponding PCollections. Note that between the `ReadTweet` and the `Map(loads)` transforms the name was generated by Beam since we did not assign a name ourselves. Now we are ready to begin applying machine learning to the data. The NLP (Natural Language Processing) API only supports certain languages for sentiment analysis. So, what we will do is first use the Translation API to detect the language. We will create a Python function, `detect_language`, to call the Translation API and add it to our pipeline once again using `beam.Map`.
###Code
def detect_language(tweet):
translate_client = translate.Client()
text = tweet['text']
result = translate_client.detect_language(text)
tweet['language'] = result['language']
tweet['lang_confidence'] = result['confidence']
return tweet
lang_tweets = parsed_tweets | "Detect_Language" >> beam.Map(detect_language)
###Output
_____no_output_____
###Markdown
Let us now detect the language of our tweets. Note that we will also record the confidence in the API's predictions ('lang_confidence') for later reference.
###Code
# NoExport
ib.show(lang_tweets)
###Output
_____no_output_____
###Markdown
Now we are ready to perform sentiment analysis on our tweets. We will invoke the NLP API to analyze the sentiment of tweets involving the term "pizza". Note that the translation of "pizza" is "pizza" in many languages, including French, German, Italian, Portuguese, and Spanish. These are languages that are supported by the NLP API, so we will filter based on the language detected by the Translation API. In the case that we are not working with one of these languages, we will assign a `None` value to the score and magnitude fields. As in the previous steps, we will invoke the API using a function and then call the function in our pipeline using `beam.Map`.
###Code
def analyze_sentiment(tweet):
client = language_v1.LanguageServiceClient()
type_ = enums.Document.Type.PLAIN_TEXT
if tweet['language'] in ['en', 'fr', 'de', 'it', 'pt', 'es']:
language = tweet['language']
document = {"content": tweet['text'], "type": type_, "language": language}
encoding_type = enums.EncodingType.UTF8
response = client.analyze_sentiment(document, encoding_type=encoding_type)
tweet['score'] = response.document_sentiment.score
tweet['magnitude'] = response.document_sentiment.magnitude
else:
tweet['score'] = None
tweet['magnitude'] = None
return tweet
analyzed_tweets = lang_tweets | "Analyze_Tweets" >> beam.Map(analyze_sentiment)
###Output
_____no_output_____
###Markdown
And as before, let us take a look into our processed tweets by using `ib.show`.
###Code
# NoExport
ib.show(analyzed_tweets, include_window_info=True)
###Output
_____no_output_____
###Markdown
We now have all of the information that we need to start performing our aggregations. However, there's one more thing we should address first. The date-timestamp (DTS) that Dataflow uses by default is the Pub/Sub publication time (when using the `ReadFromPubSub` connector). However, we would rather sort the tweets in the context of when they are published to Twitter. Above we can see that the `event_time` field and the `created_at` times are slightly different. We can replace the timestamp with the one in the `created_at` field.
###Code
def custom_timestamp(tweet):
ts = datetime.strptime(tweet["created_at"], "%Y-%m-%dT%H:%M:%S")
return beam.window.TimestampedValue(tweet, ts.timestamp())
analyzed_tweets_w_dts = analyzed_tweets | 'CustomTimestamp' >> beam.Map(custom_timestamp)
# NoExport
ib.show(analyzed_tweets_w_dts, include_window_info=True)
###Output
_____no_output_____
###Markdown
In our example here we will group our data into sliding windows of length 30 seconds and starting every 10 seconds. We do this by using the `beam.WindowInto` transform and specifying the window type, length, and offset using `beam.window.SlidingWindows`.
###Code
windowed_tweets = analyzed_tweets_w_dts | "Window" >> beam.WindowInto(beam.window.SlidingWindows(30, 10))
###Output
_____no_output_____
###Markdown
What does this actually do to the data in our PCollection? The best thing to do here is go ahead and take a peek into the output of the pipeline up to this point using `ib.show`. We will set the `include_window_info` flag to `True` so that we can peek into how windows are assigned.
###Code
# NoExport
ib.show(windowed_tweets, include_window_info=True)
###Output
_____no_output_____
###Markdown
Did you notice something above? Every tweet is now triplicated, with one entry for each window it belongs to. Another thing to notice is that we have simply *assigned* the windows at this point; the data has not been grouped into windows yet. We want to measure sentiment over time depending on the source of the tweet. To do this, let us create a "key-value" pair for each tweet. Strictly speaking, we do not have a key-value pair construction in Python, but Beam will treat the first value of an ordered pair as a "key" and the second value of the ordered pair as the "value". The key will be the source of the tweet and the value will be a dictionary of the score and magnitude of the tweet. We will be using both of these data points in the next transform. We follow a similar pattern from before: we create a Python function to perform our element-wise computation. However, you may notice something new here. We `yield` instead of `return` at the end of our function. We do this because we want to return a generator instead of a single element. But why? Note that `create_source_key` does not return anything in the case that we did not assign a score above! So we either return nothing or a generator with a single element. We then add the transform to the pipeline using `beam.FlatMap`. `FlatMap` is perfect for any non-1:1 transform such as `create_source_key`; `FlatMap` expects the function being applied to return a generator and it will manage cycling through the generator when the PCollection is passed to the next transform.
###Code
def create_source_key(tweet):
if tweet['score']:
yield (tweet['source'], {'score': tweet['score'], 'magnitude': tweet['magnitude']})
prepped_tweets = windowed_tweets | "Create_Source_Key" >> beam.FlatMap(create_source_key)
# NoExport
ib.show(prepped_tweets)
###Output
_____no_output_____
###Markdown
Now we are ready to perform our aggregation. We will compute a weighted average of scores per window and per source, using the magnitude as the weight. However, there is not a built-in transform for performing this task! We will create our own custom combiner by extending `beam.CombineFn`. We need to define four functions when extending `beam.CombineFn` to create our custom combiner:
1. `create_accumulator`: We initialize the information we will be passing from node to node. In our case we have an ordered pair (sum, count) where sum is the running sum of weighted scores.
2. `add_input`: When we wish to include a new data point, how is it incorporated? We will add the magnitude times the score to the sum and increment the count by 1.
3. `merge_accumulators`: We will be computing the accumulators where they live in the cluster, so what do we do when we need to shuffle data for the final aggregation? This is why we are passing ordered pairs instead of averages: we can simply combine the sums and the counts.
4. `extract_output`: This is the function that computes the final output. We compute the final weighted average by dividing the sum by the count. However, we need to anticipate the case that the count is 0 (as initially set). In this case, we will set the score to be `NaN`.
Once we have created our custom combiner, we can apply it in our pipeline by calling `beam.CombinePerKey`.
###Code
class WeightedAverageFn(beam.CombineFn):
def create_accumulator(self):
return (0.0, 0)
def add_input(self, sum_count, input):
sum, count = sum_count
return sum + input['score'] * input['magnitude'], count + 1
def merge_accumulators(self, accumulators):
sums, counts = zip(*accumulators)
return sum(sums), sum(counts)
def extract_output(self, sum_count):
sum, count = sum_count
return {'score': sum / count, 'count': count} if count else {'score':float('NaN'), 'count': 0}
aggregated_tweets = prepped_tweets | "Aggregate_Weighted_Score" >> beam.CombinePerKey(WeightedAverageFn())
###Output
_____no_output_____
###Markdown
Let us take a quick peek at the output of our aggregations
###Code
# NoExport
ib.show(aggregated_tweets, include_window_info=True)
###Output
_____no_output_____
###Markdown
We're almost there! Let us just clean up our output to put it into a more convenient form for loading into BigQuery.
###Code
def parse_aggregation(agg_tweets):
result = {}
result['source'] = agg_tweets[0]
result['score'] = agg_tweets[1]['score']
result['count'] = agg_tweets[1]['count']
return result
parsed_aggregated_tweets = aggregated_tweets | "Parse_Aggregated_Results" >> beam.Map(parse_aggregation)
# NoExport
ib.show(parsed_aggregated_tweets,include_window_info=True)
###Output
_____no_output_____
###Markdown
We have created all of the transforms for our pipeline and we are ready to start analyzing and processing the entire real-time stream (versus working with a small in-memory PCollection). We will wrap up by defining two transforms to load data into BigQuery. We will load the aggregated tweet data (`parsed_aggregated_tweets`) and the unaggregated, analyzed tweets to a different table (`analyzed_tweets`). Keeping the unaggregated, analyzed tweets will allow us to go back and further analyze the individual tweets if another question arises without having to reprocess. Of course, we are paying to store the tweets in BigQuery, but this is much cheaper than having to reprocess.
###Code
table_spec_unagg = bigquery.TableReference(
projectId = PROJECT_ID,
datasetId = OUTPUT_DATASET,
tableId= OUTPUT_TABLE_UNAGG)
table_schema_unagg ='text:STRING, created_at:TIMESTAMP, source:STRING, language:STRING, lang_confidence:FLOAT64, score:FLOAT64, magnitude:FLOAT64'
bq_output_unagg = analyzed_tweets | 'WriteToBQ_Unagg'>> beam.io.WriteToBigQuery(table_spec_unagg,
schema=table_schema_unagg,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
table_spec_agg = bigquery.TableReference(
projectId = PROJECT_ID,
datasetId = OUTPUT_DATASET,
tableId= OUTPUT_TABLE_AGG)
table_schema_agg ='source:STRING, score:FLOAT64, count:INT64, window_start:TIMESTAMP'
bq_output_agg = parsed_aggregated_tweets | 'WriteToBQ_Agg'>> beam.io.WriteToBigQuery(table_spec_agg,
schema=table_schema_agg,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED)
###Output
_____no_output_____
###Markdown
Now we can finally go back and look at our completed graph. Note that by applying `bq_output_unagg` to `analyzed_tweets` we have created a branch in the pipeline.
###Code
# NoExport
ib.show_graph(p)
###Output
_____no_output_____
###Markdown
Everything is ready for deploying to Dataflow! We will use the `nbconvert` tool to export this Jupyter Notebook into a Python script, so we can execute the script in other environments without having to install a tool to run notebooks. The cells that were flagged as `NoExport` will not be included in the script. These were cells that used the interactive runner or cells used to work within the notebook environment that we don't need when submitting to Dataflow. The final cell of the notebook includes the `p.run()` call that we need to execute the pipeline on Dataflow. You do not need to run that cell within the notebook.
###Code
# NoExport
!jupyter nbconvert --to script --RegexRemovePreprocessor.patterns="['# NoExport']" TweetPipeline.ipynb
###Output
_____no_output_____
###Markdown
Let us go ahead and submit the job to Dataflow! We will do this by executing the Python script we just created. After you run the cell, be sure to check out the job running in Dataflow and the output in your BigQuery dataset.
###Code
# NoExport
!pip install apache_beam google-cloud-language google-cloud-translate google-apitools
!echo "google-cloud-translate==2.0.1" > requirements.txt
!python3 TweetPipeline.py --save_main_session --requirements_file requirements.txt
# Don't run this cell within the notebook!
logging.getLogger().setLevel(logging.INFO)
p.run()
###Output
_____no_output_____ |
kaggle_intro/kaggle_intro_iris.ipynb | ###Markdown
Data Loading and Exploration
###Code
# Load in the train datasets
train = pd.read_csv('input/train.csv', encoding = "utf-8", dtype = {'type': np.int32})
test = pd.read_csv('input/test.csv', encoding = "utf-8")
submission = pd.read_csv('input/submission.csv', encoding = "utf-8", dtype = {'type': np.int32})
train.head(3)
test.head(3)
submission.head(3)
###Output
_____no_output_____
###Markdown
One-hot Encoding
###Code
df1 = pd.get_dummies(train['屬種'])
df1.sample(5)
###Output
_____no_output_____
###Markdown
LabelEncoding
###Code
df2 = train['屬種'].replace({'Iris-setosa':1,'Iris-versicolor':2,'Iris-virginica':3})
df2.sample(5)
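# Alternative sketch (an illustrative addition, not used later in this notebook):
# scikit-learn's LabelEncoder assigns integer codes automatically instead of a manual
# replace(); note that its codes start at 0 rather than 1.
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df2_sklearn = pd.Series(le.fit_transform(train['屬種']), index=train.index)
df2_sklearn.sample(5)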
###Output
_____no_output_____
###Markdown
Data Cleaning - Handling Missing Values
###Code
#missing data
miss_sum = train.isnull().sum().sort_values(ascending=False)
miss_sum
# Check which rows contain null values
print(train[train['花萼寬度'].isnull()])
print("--------------------------------")
print(train[train['花萼長度'].isnull()])
# Drop NaN rows directly (fine when there are only a few rows and modeling is not affected)
train_d_na = train.dropna().reset_index(drop=True)
train_d_na.isnull().sum().sort_values(ascending=False)
# Fill missing values with the mean
#train.loc[train['花萼寬度'].isnull(),['花萼寬度']] = train['花萼寬度'].mean() #花萼寬度: column 2
train[['花萼寬度']] = train[['花萼寬度']].fillna(np.mean(train[['花萼寬度']]))
train.plot(kind='line',y='花萼寬度',figsize=(10,6),fontsize=14,title='花萼寬度')
# Fill missing values with the mode
#train.loc[train['花萼長度'].isnull(),['花萼長度']] = train['花萼長度'].mode()[0] #花萼長度: column 1
train[['花萼長度']] = train[['花萼長度']].fillna(train['花萼長度'].mode()[0])
train.plot(kind='line',y='花萼長度',figsize=(10,6),fontsize=14,title='花萼長度')
from pandas.plotting import scatter_matrix
scatter_matrix( train[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']],figsize=(10, 10),color='b')
###Output
_____no_output_____
###Markdown
Correlation Analysis
###Code
corr = train[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']].corr()
print(corr)
import seaborn as sns
plt.rcParams['font.family']='DFKai-SB' # display Chinese characters in plots
plt.figure(figsize=(10,10))
sns.heatmap(corr, square=True, annot=True, cmap="RdBu_r") #center=0, cmap="YlGnBu"
#sns.plt.show()
# http://seaborn.pydata.org/tutorial/color_palettes.html
###Output
_____no_output_____
###Markdown
Outlier Analysis
###Code
#train[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']]
fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(10, 10), sharey=True)
axes[0, 0].boxplot(train['花萼寬度'],showmeans=True)
axes[0, 0].set_title('訓:花萼寬度')
axes[0, 1].boxplot(train['花瓣寬度'],showmeans=True)
axes[0, 1].set_title('訓:花瓣寬度')
axes[0, 2].boxplot(train['花瓣長度'],showmeans=True)
axes[0, 2].set_title('訓:花瓣長度')
axes[0, 3].boxplot(train['花萼長度'],showmeans=True)
axes[0, 3].set_title('訓:花萼長度')
axes[1, 0].boxplot(test['花萼寬度'],showmeans=True)
axes[1, 0].set_title('測:花萼寬度')
axes[1, 1].boxplot(test['花瓣寬度'],showmeans=True)
axes[1, 1].set_title('測:花瓣寬度')
axes[1, 2].boxplot(test['花瓣長度'],showmeans=True)
axes[1, 2].set_title('測:花瓣長度')
axes[1, 3].boxplot(test['花萼長度'],showmeans=True)
axes[1, 3].set_title('測:花萼長度')
train.plot(kind='bar',y='花萼寬度',figsize=(30,6),fontsize=14,title='花萼寬度')
#IQR = Q3-Q1
IQR = np.percentile(train['花萼寬度'],75) - np.percentile(train['花萼寬度'],25)
#outlier = Q3 + 1.5*IQR , or. Q1 - 1.5*IQR
train[train['花萼寬度'] > np.percentile(train['花萼寬度'],75)+1.5*IQR]
#outlier = Q3 + 1.5*IQR , or. Q1 - 1.5*IQR
train[train['花萼寬度'] < np.percentile(train['花萼寬度'],25)-1.5*IQR]
#fix_X = X.drop(X.index[[5,23,40]])
#fix_y = y.drop(y.index[[5,23,40]])
###Output
_____no_output_____
###Markdown
Splitting the Data (split from the official training data)
###Code
# Remove the demo records with type 4 so they do not interfere with modeling
train = train[train['type']!=4]
from sklearn.model_selection import train_test_split
X = train[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']]
y = train['type']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=100)
###Output
_____no_output_____
###Markdown
Standardization
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
X_train_std[0:5]
y_test[0:5]
###Output
_____no_output_____
###Markdown
Building an Initial Model: KNN
###Code
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
knn = KNeighborsClassifier(n_neighbors=3, weights='uniform')
knn.fit(X_train_std, y_train)
print(metrics.classification_report(y_test, knn.predict(X_test_std)))
print(metrics.confusion_matrix(y_test, knn.predict(X_test_std)))
###Output
precision recall f1-score support
1 1.00 1.00 1.00 14
2 0.90 0.90 0.90 10
3 0.92 0.92 0.92 12
avg / total 0.94 0.94 0.94 36
[[14 0 0]
[ 0 9 1]
[ 0 1 11]]
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=500, criterion='gini', max_features='auto', oob_score=True)
rfc.fit(X_train, y_train) # without standardization
print("oob_score(accuary):",rfc.oob_score_)
print(metrics.classification_report(y_test, rfc.predict(X_test)))
###Output
oob_score(accuary): 0.916666666667
precision recall f1-score support
1 1.00 1.00 1.00 14
2 1.00 0.90 0.95 10
3 0.92 1.00 0.96 12
avg / total 0.97 0.97 0.97 36
###Markdown
Naive Bayes Classifier
###Code
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train_std, y_train)
print(metrics.classification_report(y_test, gnb.predict(X_test_std)))
print(metrics.confusion_matrix(y_test, gnb.predict(X_test_std)))
###Output
precision recall f1-score support
1 1.00 1.00 1.00 14
2 1.00 0.90 0.95 10
3 0.92 1.00 0.96 12
avg / total 0.97 0.97 0.97 36
[[14 0 0]
[ 0 9 1]
[ 0 0 12]]
###Markdown
SVM
###Code
from sklearn.svm import SVC
svc = SVC(C=1.0, kernel="rbf", probability=True)
svc.fit(X_train_std, y_train)
print(metrics.classification_report(y_test, svc.predict(X_test_std)))
print(metrics.confusion_matrix(y_test, svc.predict(X_test_std)))
###Output
precision recall f1-score support
1 1.00 1.00 1.00 14
2 1.00 0.90 0.95 10
3 0.92 1.00 0.96 12
avg / total 0.97 0.97 0.97 36
[[14 0 0]
[ 0 9 1]
[ 0 0 12]]
###Markdown
Stacking. Website: http://rasbt.github.io/mlxtend/
###Code
#from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingClassifier
import xgboost as xgb
clf1 = KNeighborsClassifier(n_neighbors=3, weights='uniform')
clf2 = RandomForestClassifier(n_estimators=500, criterion='gini', max_features='auto', oob_score=True)
clf3 = GaussianNB()
clf4 = SVC(C=1.0, kernel="rbf", probability=True)
meta_clf = xgb.XGBClassifier(n_estimators= 2000, max_depth= 4)
stacking_clf = StackingClassifier(classifiers=[clf1, clf2, clf3, clf4], meta_classifier=meta_clf)
clf1.fit(X_train_std, y_train)
clf2.fit(X_train, y_train)
clf3.fit(X_train_std, y_train)
clf4.fit(X_train_std, y_train)
stacking_clf.fit(X_train_std, y_train)
print('KNN Score:',clf1.score(X_test_std, y_test))
print('RF Score:',clf2.score(X_test, y_test))
print('GNB Score:',clf3.score(X_test_std, y_test))
print('SVC Score:',clf4.score(X_test_std, y_test))
print('Stacking Score:',stacking_clf.score(X_test_std, y_test))
###Output
C:\Anaconda3\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
###Markdown
XGBoost detailed guides: (ENG) https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/ (CHT) http://www.itread01.com/articles/1476146171.html
###Code
import xgboost as xgb
gbm = xgb.XGBClassifier(n_estimators= 2000, max_depth= 4).fit(X_train, y_train)
print(metrics.classification_report(y_test, gbm.predict(X_test)))
print("Score:", gbm.score(X_test, y_test))
print(gbm.feature_importances_)
from xgboost import plot_importance
plot_importance(gbm, )
plt.show()
pred = gbm.predict(test[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']])
pred
# Generate Submission File
StackingSubmission = pd.DataFrame({ 'id': submission.id, 'type': pred })
StackingSubmission.to_csv("submission.csv", index=False)
submission = pd.read_csv('submission.csv', encoding = "utf-8", dtype = {'type': np.int32})
submission
test[20:30]
###Output
_____no_output_____
###Markdown
Comparing Prediction Results on the Test Dataset
###Code
# Scale using the scaler fitted on the training set earlier
test_std = sc.transform(test[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']])
submission_stk = stacking_clf.predict(test_std)
submission_stk
submission_rfc = rfc.predict(test[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']])
submission_rfc
submission_knn =knn.predict(test_std)
submission_knn
submission_gnb = gnb.predict(test_std)
submission_gnb
submission_svc = svc.predict(test_std)
submission_svc
from sklearn.ensemble import VotingClassifier
clf1 = knn
clf2 = rfc
clf3 = gnb
clf4 = svc
eclf = VotingClassifier(estimators=[('knn', clf1), ('rfc', clf2),('gnb', clf3),('svc',clf4)], voting='hard', weights=[1, 1, 1, 4])
eclf.fit(X_train_std, y_train)
print(metrics.classification_report(y_test, eclf.predict(X_test_std)))
submission_eclf = eclf.predict(test_std)
submission_eclf
###Output
_____no_output_____ |
day5_HyperOpt.ipynb | ###Markdown
Reading data
###Code
df = pd.read_hdf('data/car.h5')
df.shape
#df.columns.values
###Output
_____no_output_____
###Markdown
Dummy Model
###Code
feats = ['car_id']
X = df[ feats ].values
y = df['price_value'].values
model = DummyRegressor()
model.fit(X, y)
y_pred = model.predict(X)
mae(y, y_pred)
# Remove prices in currencies different than PLN
df = df[ df['price_currency'] == 'PLN' ]
df.shape
SUFIX_CAT = '_cat'
for feat in df.columns:
if isinstance(df[feat][0], list): continue
factorized_values = df[ feat ].factorize()[0]
if SUFIX_CAT in feat:
        df[feat] = factorized_values  # fix: use the factorized values computed above
else:
df[feat + SUFIX_CAT] = factorized_values
cat_feats = [x for x in df.columns if SUFIX_CAT in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
len(cat_feats)
def run_model(model, feats):
X = df[ feats ].values
y = df['price_value'].values
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
#Decision tree
run_model(DecisionTreeRegressor(max_depth=5), cat_feats)
#Random Forest
model = RandomForestRegressor(max_depth=5, n_estimators=50, random_state=0)
run_model(model, cat_feats)
#XGBoost
xgb_params = {
'max_depth':5,
'n_estimators':50,
'learning_rate':0.1,
'seed':0,
}
model = xgb.XGBRegressor(**xgb_params)
run_model(model, cat_feats)
xgb_params = {
'max_depth':5,
'n_estimators':50,
'learning_rate':0.1,
'seed':0,
}
m = xgb.XGBRegressor(**xgb_params)
m.fit(X, y)
imp = PermutationImportance(m, random_state=0).fit(X, y)
eli5.show_weights(imp, feature_names=cat_feats)
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x)=='None' else int(x) )
df['param_rok-produkcji'].unique()
df['param_moc']
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x)=='None' else str(x).split(' ')[0])
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x)=='None' else str(x).split('cm')[0].replace(' ',''))
feats = ['param_napęd_cat', 'param_rok-produkcji', 'param_stan_cat', 'param_skrzynia-biegów_cat', 'param_faktura-vat_cat', 'param_moc',
'param_marka-pojazdu_cat','param_typ_cat', 'feature_kamera-cofania_cat', 'param_pojemność-skokowa', 'seller_name_cat', 'param_kod-silnika_cat',
'feature_wspomaganie-kierownicy_cat', 'feature_asystent-pasa-ruchu_cat', 'feature_regulowane-zawieszenie_cat',
'feature_system-start-stop_cat', 'feature_światła-led_cat']
xgb_params = {
'max_depth':5,
'n_estimators':50,
'learning_rate':0.1,
'seed':0,
}
model = xgb.XGBRegressor(**xgb_params)
run_model(model, feats)
def obj_func(params):
print('Training with params: ')
print(params)
    mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)  # use the sampled params, not the fixed xgb_params
return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
#space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.05, 1, 0.05),
'colsample_bytree': hp.quniform('colsample_bytree', 0.05, 1, 0.05),
'objective': 'reg:squarederror',
'n_estimators': 100,
'seed': 0,
}
#run
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=10)
best
###Output
Training with params:
{'colsample_bytree': 0.35000000000000003, 'learning_rate': 0.1, 'max_depth': 6, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.25}
[21:13:34] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:13:38] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:13:41] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.2, 'learning_rate': 0.2, 'max_depth': 6, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.15000000000000002}
[21:13:45] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:13:49] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:13:52] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.9, 'learning_rate': 0.1, 'max_depth': 9, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.1}
[21:13:56] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:00] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:04] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.45, 'learning_rate': 0.15000000000000002, 'max_depth': 15, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 1.0}
[21:14:07] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:11] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:15] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.2, 'learning_rate': 0.15000000000000002, 'max_depth': 5, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.30000000000000004}
[21:14:18] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:22] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:26] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.30000000000000004, 'learning_rate': 0.2, 'max_depth': 10, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.4}
[21:14:29] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:33] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:37] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.15000000000000002, 'learning_rate': 0.15000000000000002, 'max_depth': 6, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.4}
[21:14:41] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:44] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:48] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.8500000000000001, 'learning_rate': 0.05, 'max_depth': 12, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.75}
[21:14:52] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:55] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:14:59] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.15000000000000002, 'learning_rate': 0.3, 'max_depth': 13, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.65}
[21:15:03] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:15:06] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:15:10] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
Training with params:
{'colsample_bytree': 0.9500000000000001, 'learning_rate': 0.15000000000000002, 'max_depth': 12, 'n_estimators': 100, 'objective': 'reg:squarederror', 'seed': 0, 'subsample': 0.25}
[21:15:14] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:15:17] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
[21:15:21] WARNING: /workspace/src/objective/regression_obj.cu:152: reg:linear is now deprecated in favor of reg:squarederror.
100%|██████████| 10/10 [01:50<00:00, 11.09s/it, best loss: 9610.499296539281]
|
2_Implementing_FunkSVD_Solution.ipynb | ###Markdown
Implementing FunkSVD - Solution. In this notebook we will take a look at writing our own function that performs FunkSVD, which will follow the steps you saw in the previous video. If you find that you aren't ready to tackle this task on your own, feel free to skip to the following video where you can watch as I walk through the steps. To test our algorithm, we will run it on the subset of the data you worked with earlier. Run the cell below to get started.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import sparse
import svd_tests as t
%matplotlib inline
# Read in the datasets
movies = pd.read_csv('data/movies_clean.csv')
reviews = pd.read_csv('data/reviews_clean.csv')
del movies['Unnamed: 0']
del reviews['Unnamed: 0']
# Create user-by-item matrix
user_items = reviews[['user_id', 'movie_id', 'rating', 'timestamp']]
user_by_movie = user_items.groupby(['user_id', 'movie_id'])['rating'].max().unstack()
# Create data subset
user_movie_subset = user_by_movie[[73486, 75314, 68646, 99685]].dropna(axis=0)
ratings_mat = np.matrix(user_movie_subset)
print(ratings_mat)
###Output
[[ 10. 10. 10. 10.]
[ 10. 4. 9. 10.]
[ 8. 9. 10. 5.]
[ 9. 8. 10. 10.]
[ 10. 5. 9. 9.]
[ 6. 4. 10. 6.]
[ 9. 8. 10. 9.]
[ 10. 5. 9. 8.]
[ 7. 8. 10. 8.]
[ 9. 5. 9. 7.]
[ 9. 8. 10. 8.]
[ 9. 10. 10. 9.]
[ 10. 9. 10. 8.]
[ 5. 8. 5. 8.]
[ 10. 8. 10. 10.]
[ 9. 9. 10. 10.]
[ 9. 8. 8. 8.]
[ 10. 8. 1. 10.]
[ 5. 6. 10. 10.]
[ 8. 7. 10. 7.]]
###Markdown
`1.` You will use the **user_movie_subset** matrix to show that your FunkSVD algorithm will converge. In the below cell, use the comments and document string to assist you as you complete writing your own function to complete FunkSVD. You may also want to try to complete the function on your own without the assistance of comments. You may feel free to remove and add to the function in any way that gets you a working solution! **Notice:** There isn't a sigma matrix in this version of matrix factorization.
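For reference, the updates implemented below follow from gradient descent on the squared error of each observed rating: writing the error as $e_{ij} = r_{ij} - u_i \cdot v_j$, the gradient of $e_{ij}^2$ with respect to $u_{ik}$ is $-2\,e_{ij}\,v_{kj}$ (and similarly for $v_{kj}$), so each known rating produces the updates $u_{ik} \leftarrow u_{ik} + 2\,\alpha\, e_{ij}\, v_{kj}$ and $v_{kj} \leftarrow v_{kj} + 2\,\alpha\, e_{ij}\, u_{ik}$, where $\alpha$ is the learning rate.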
###Code
def FunkSVD(ratings_mat, latent_features=4, learning_rate=0.0001, iters=100):
'''
This function performs matrix factorization using a basic form of FunkSVD with no regularization
INPUT:
ratings_mat - (numpy array) a matrix with users as rows, movies as columns, and ratings as values
latent_features - (int) the number of latent features used
learning_rate - (float) the learning rate
iters - (int) the number of iterations
OUTPUT:
user_mat - (numpy array) a user by latent feature matrix
movie_mat - (numpy array) a latent feature by movie matrix
'''
# Set up useful values to be used through the rest of the function
n_users = ratings_mat.shape[0]
n_movies = ratings_mat.shape[1]
num_ratings = np.count_nonzero(~np.isnan(ratings_mat))
# initialize the user and movie matrices with random values
user_mat = np.random.rand(n_users, latent_features)
movie_mat = np.random.rand(latent_features, n_movies)
# initialize sse at 0 for first iteration
sse_accum = 0
# header for running results
print("Optimizaiton Statistics")
print("Iterations | Mean Squared Error ")
# for each iteration
for iteration in range(iters):
# update our sse
old_sse = sse_accum
sse_accum = 0
# For each user-movie pair
for i in range(n_users):
for j in range(n_movies):
# if the rating exists
if ratings_mat[i, j] > 0:
# compute the error as the actual minus the dot product of the user and movie latent features
diff = ratings_mat[i, j] - np.dot(user_mat[i, :], movie_mat[:, j])
# Keep track of the sum of squared errors for the matrix
sse_accum += diff**2
# update the values in each matrix in the direction of the gradient
for k in range(latent_features):
user_mat[i, k] += learning_rate * (2*diff*movie_mat[k, j])
movie_mat[k, j] += learning_rate * (2*diff*user_mat[i, k])
# print results for iteration
print("%d \t\t %f" % (iteration+1, sse_accum / num_ratings))
return user_mat, movie_mat
###Output
_____no_output_____
###Markdown
`2.` Try out your function on the **user_movie_subset** dataset. First try 4 latent features, a learning rate of 0.005, and 10 iterations. When you take the dot product of the resulting U and V matrices, how does the resulting **user_movie** matrix compare to the original subset of the data?
###Code
user_mat, movie_mat = FunkSVD(ratings_mat, latent_features=4, learning_rate=0.005, iters=10)
print(np.dot(user_mat, movie_mat))
print(ratings_mat)
###Output
[[ 10.28729394 9.18159883 10.3011113 10.10854516]
[ 8.40901963 7.25897547 9.17647614 8.96236641]
[ 7.96434463 6.96572226 8.29078269 7.62868227]
[ 9.49745495 8.16283023 10.0978691 9.6359178 ]
[ 8.44627578 7.27872196 9.0380299 8.64455622]
[ 6.93472885 6.11365411 7.10425854 6.83177471]
[ 9.13015257 7.93629198 9.75497583 9.22649788]
[ 8.24952963 7.03525062 8.87468261 8.03327882]
[ 8.30034121 7.13815678 9.19308563 8.50367206]
[ 7.75389741 6.62928807 8.24367755 7.5336925 ]
[ 8.90593631 7.57443516 9.54381082 8.76562802]
[ 9.68793559 8.53054925 9.9485221 9.55102636]
[ 9.29353082 8.45644298 9.40382861 9.31821645]
[ 6.64271593 5.7035421 7.01594244 6.62622346]
[ 9.60466595 8.29849156 10.21659348 9.96836171]
[ 9.88644628 8.80495757 10.09719035 9.61751354]
[ 8.35964657 7.52516593 8.2682908 8.46863883]
[ 7.33621077 6.63868097 7.08323205 7.0246992 ]
[ 8.25256313 6.95004658 9.05355206 8.48356331]
[ 8.17132078 7.11215273 8.47818648 8.13304922]]
[[ 10. 10. 10. 10.]
[ 10. 4. 9. 10.]
[ 8. 9. 10. 5.]
[ 9. 8. 10. 10.]
[ 10. 5. 9. 9.]
[ 6. 4. 10. 6.]
[ 9. 8. 10. 9.]
[ 10. 5. 9. 8.]
[ 7. 8. 10. 8.]
[ 9. 5. 9. 7.]
[ 9. 8. 10. 8.]
[ 9. 10. 10. 9.]
[ 10. 9. 10. 8.]
[ 5. 8. 5. 8.]
[ 10. 8. 10. 10.]
[ 9. 9. 10. 10.]
[ 9. 8. 8. 8.]
[ 10. 8. 1. 10.]
[ 5. 6. 10. 10.]
[ 8. 7. 10. 7.]]
###Markdown
**The predicted ratings from the dot product are already starting to look a lot like the original data values even after only 10 iterations. You can see some extreme low values that are not captured well yet. The 5 in the second to last row in the first column is predicted as an 8, and the 4 in the second row and second column is predicted to be a 7. Clearly the model is not done learning, but things are looking good.** `3.` Let's try out the function again on the **user_movie_subset** dataset. This time we will again use 4 latent features and a learning rate of 0.005. However, let's bump up the number of iterations to 250. When you take the dot product of the resulting U and V matrices, how does the resulting **user_movie** matrix compare to the original subset of the data? What do you notice about your error at the end of the 250 iterations?
###Code
user_mat, movie_mat = FunkSVD(ratings_mat, latent_features=4, learning_rate=0.005, iters=250)
print(np.dot(user_mat, movie_mat))
print(ratings_mat)
###Output
[[ 10.00001034 10.00000426 10.00000266 9.99999 ]
[ 10.000003 4.00000123 9.00000072 9.99999696]
[ 7.9999992 8.99999966 9.99999972 5.00000059]
[ 9.00002554 8.0000101 10.00000621 9.99997548]
[ 10.00000495 5.00000157 9.00000077 8.99999507]
[ 5.99998704 3.99999452 9.99999642 6.00001218]
[ 9.00000884 8.00000318 10.0000018 8.99999138]
[ 9.99999043 4.99999585 8.99999723 8.00000891]
[ 6.99999846 7.99999908 9.99999928 8.00000132]
[ 8.99999668 4.99999857 8.99999903 7.00000298]
[ 9.00001087 8.00000417 10.00000248 7.99998945]
[ 9.00000574 10.00000183 10.00000093 8.99999434]
[ 9.99998068 8.99999188 9.99999474 8.00001823]
[ 4.9999863 7.99999421 4.99999628 8.00001303]
[ 9.99999858 7.99999952 9.99999968 10.00000119]
[ 8.99999792 8.99999925 9.99999952 10.00000186]
[ 8.99999694 7.99999892 7.99999934 8.00000279]
[ 9.99999728 7.99999889 0.99999929 10.00000257]
[ 4.99999332 5.99999739 9.99999838 10.00000631]
[ 8.00001179 7.00000504 10.0000032 6.9999886 ]]
[[ 10. 10. 10. 10.]
[ 10. 4. 9. 10.]
[ 8. 9. 10. 5.]
[ 9. 8. 10. 10.]
[ 10. 5. 9. 9.]
[ 6. 4. 10. 6.]
[ 9. 8. 10. 9.]
[ 10. 5. 9. 8.]
[ 7. 8. 10. 8.]
[ 9. 5. 9. 7.]
[ 9. 8. 10. 8.]
[ 9. 10. 10. 9.]
[ 10. 9. 10. 8.]
[ 5. 8. 5. 8.]
[ 10. 8. 10. 10.]
[ 9. 9. 10. 10.]
[ 9. 8. 8. 8.]
[ 10. 8. 1. 10.]
[ 5. 6. 10. 10.]
[ 8. 7. 10. 7.]]
###Markdown
**In this case, we were able to completely reconstruct the user-movie matrix to obtain an essentially 0 mean squared error. I obtained 0 MSE on iteration 165.** The last time we placed an **np.nan** value into this matrix the entire svd algorithm in python broke. Let's see if that is still the case using your FunkSVD function. In the below cell, I have placed a nan into the first cell of your numpy array. `4.` Use 4 latent features, a learning rate of 0.005, and 250 iterations. Are you able to run your SVD without it breaking (something that was not true of the Python built-in)? Do you get a prediction for the nan value? What is your prediction for the missing value? Use the cells below to answer these questions.
###Code
ratings_mat[0, 0] = np.nan
ratings_mat
# run SVD on the matrix with the missing value
user_mat, movie_mat = FunkSVD(ratings_mat, latent_features=4, learning_rate=0.005, iters=250)
preds = np.dot(user_mat, movie_mat)
print("The predicted value for the missing rating is {}:".format(preds[0,0]))
print()
print("The actual value for the missing rating is {}:".format(ratings_mat[0,0]))
print()
assert np.isnan(preds[0,0]) == False
print("That's right! You just predicted a rating for a user-movie pair that was never rated!")
print("But if you look in the original matrix, this was actually a value of 10. Not bad!")
###Output
The predicted value for the missing rating is 11.252904946215294:
The actual value for the missing rating is nan:
That's right! You just predicted a rating for a user-movie pair that was never rated!
But if you look in the original matrix, this was actually a value of 10. Not bad!
###Markdown
Now let's extend this to a more realistic example. Unfortunately, running this function on your entire user-movie matrix is still not something you likely want to do on your local machine. However, we can see how well this example extends to 1000 users. In the above portion, you were using a very small subset of data with no missing values. `5.` Given the size of this matrix, this will take quite a bit of time. Consider the following hyperparameters: 4 latent features, 0.005 learning rate, and 20 iterations. Grab a snack, take a walk, and this should be done running in a bit.
###Code
# Setting up a matrix of the first 1000 users with movie ratings
first_1000_users = np.matrix(user_by_movie.head(1000))
# perform funkSVD on the matrix of the top 1000 users
user_mat, movie_mat = FunkSVD(first_1000_users, latent_features=4, learning_rate=0.005, iters=20)
###Output
Optimization Statistics
Iterations | Mean Squared Error
1 23.043054
2 10.624768
3 7.314082
4 5.657515
5 4.612804
6 3.880947
7 3.335650
8 2.912864
9 2.575807
10 2.301409
11 2.074291
12 1.883753
13 1.722120
14 1.583750
15 1.464408
16 1.360851
17 1.270545
18 1.191467
19 1.121970
20 1.060690
###Markdown
`6.` Now that you have a set of predictions for each user-movie pair, let's answer a few questions about your results. Provide the correct values for each of the variables below, and check your solutions using the tests below.
###Code
# How many actual ratings exist in first_1000_users
num_ratings = np.count_nonzero(~np.isnan(first_1000_users))
print("The number of actual ratings in the first_1000_users is {}.".format(num_ratings))
print()
# How many ratings did we make for user-movie pairs that didn't have ratings
ratings_for_missing = first_1000_users.shape[0]*first_1000_users.shape[1] - num_ratings
print("The number of ratings made for user-movie pairs that didn't have ratings is {}".format(ratings_for_missing))
# Test your results against the solution
assert num_ratings == 10852, "Oops! The number of actual ratings doesn't quite look right."
assert ratings_for_missing == 31234148, "Oops! The number of movie-user pairs that you made ratings for that didn't actually have ratings doesn't look right."
# Make sure you made predictions on all the missing user-movie pairs
preds = np.dot(user_mat, movie_mat)
assert np.isnan(preds).sum() == 0
print("Nice job! Looks like you have predictions made for all the missing user-movie pairs! But I still have one question... How good are they?")
###Output
Nice job! Looks like you have predictions made for all the missing user-movie pairs! But I still have one question... How good are they?
|
examples/normalized_coordinates.ipynb | ###Markdown
Simple Normalized Coordinates in ParticleGroup. 1D normalized coordinates originate from the normal form decomposition, where the transfer matrix that propagates phase space coordinates $(x, p)$ is decomposed as $M = A \cdot R(\theta) \cdot A^{-1}$, and the matrix $A$ can be parameterized as $A = \begin{pmatrix}\sqrt{\beta} & 0\\ -\alpha/\sqrt{\beta} & 1/\sqrt{\beta}\end{pmatrix}$.
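Since $\det A = 1$, the analyzing matrix is easily inverted: $A^{-1} = \begin{pmatrix}1/\sqrt{\beta} & 0\\ \alpha/\sqrt{\beta} & \sqrt{\beta}\end{pmatrix}$, and applying $A^{-1}$ to lab-frame coordinates gives the normalized (circular) coordinates used below.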
###Code
?A_mat_calc
# Make phase space circle. This will represent some normalized coordinates
theta = np.linspace(0, np.pi*2, 100)
zvec0 = np.array([np.cos(theta), np.sin(theta)])
plt.scatter(*zvec0)
# Make a 'beam' in 'lab coordinates'
MYMAT = np.array([[10, 0],[-3, 5]])
zvec = np.matmul(MYMAT , zvec0)
plt.scatter(*zvec)
###Output
_____no_output_____
###Markdown
With a beam, $\alpha$ and $\beta$ can be determined from moments of the covariance matrix.
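Concretely, with second-order moments $\Sigma = \begin{pmatrix}\langle x^2\rangle & \langle xp\rangle\\ \langle xp\rangle & \langle p^2\rangle\end{pmatrix}$, the standard statistical definitions are $\epsilon = \sqrt{\det\Sigma}$, $\beta = \langle x^2\rangle/\epsilon$, and $\alpha = -\langle xp\rangle/\epsilon$ (see the `twiss_calc` docstring below for the exact conventions used here).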
###Code
?twiss_calc
# Calculate a sigma matrix, get the determinant
sigma_mat2 = np.cov(*zvec)
np.linalg.det(sigma_mat2)
# Get some twiss
twiss = twiss_calc(sigma_mat2)
twiss
# Analyzing matrices
A = A_mat_calc(twiss['beta'], twiss['alpha'])
A_inv = A_mat_calc(twiss['beta'], twiss['alpha'], inverse=True)
# A_inv turns this back into a circle.
zvec2 = np.matmul(A_inv, zvec)
plt.scatter(*zvec2)
###Output
_____no_output_____
###Markdown
Twiss parameters. Effective Twiss parameters can be calculated from the second-order moments of the particles.
###Code
# This does not change the phase space area
twiss_calc(np.cov(*zvec2))
# Reset plot
matplotlib.rcParams['figure.figsize'] = (13,8)
###Output
_____no_output_____
###Markdown
x_bar, px_bar, Jx, etc. These are essentially action-angle coordinates, calculated using an analyzing twiss dict.
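Setting aside the mass normalization applied by these properties, the 1D normalized coordinates are just $(\bar{x}, \bar{p}_x)^T = A^{-1}(x, p_x)^T$, i.e. $\bar{x} = x/\sqrt{\beta}$ and $\bar{p}_x = (\alpha x + \beta p_x)/\sqrt{\beta}$, and the amplitude is $J_x = (\bar{x}^2 + \bar{p}_x^2)/2$.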
###Code
?normalized_particle_coordinate
# Get some example particles
P = ParticleGroup('data/bmad_particles2.h5')
# This is a typical transverse phase space plot
P.plot('x', 'px')
# If no twiss is given, then the analyzing matrix is computed from the beam itself.
normalized_particle_coordinate(P, 'x', twiss=None)
# This is equivalent
normalized_particle_coordinate(P, 'x', twiss=twiss_calc(P.cov('x', 'px')), mass_normalize=False)/np.sqrt(P.mass)
# And is given as a property:
P.x_bar
# The amplitude is defined as:
(P.x_bar**2 + P.px_bar**2)/2
# This is also given as a property
P.Jx
# Note the mass normalization is the same
P.Jx.mean(), P['mean_Jx'], P['norm_emit_x']
# This is now nice and roundish
P.plot('x_bar', 'px_bar')
# Jy also works. This gives some sense of where the emittance is larger.
P.plot('t', 'Jy')
# Sort by Jx:
P = P[np.argsort(P.Jx)]
# Now particles are ordered
plt.plot(P.Jx)
# This can be used to calculate the 95% emittance
P[0:int(0.95*len(P))]['norm_emit_x']
###Output
_____no_output_____
###Markdown
Simple 'matching'. Often a beam needs to be 'matched' for tracking in some program. This is a 'faked' transformation that ultimately would need to be realized by a focusing system.
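In terms of the analyzing matrices introduced above, the matching map is $M = A(\beta_1,\alpha_1)\,A^{-1}(\beta_0,\alpha_0) = \begin{pmatrix}\sqrt{\beta_1/\beta_0} & 0\\ (\alpha_0-\alpha_1)/\sqrt{\beta_0\beta_1} & \sqrt{\beta_0/\beta_1}\end{pmatrix}$, which is exactly the transformation written out in the `twiss_match` docstring below.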
###Code
def twiss_match(x, p, beta0=1, alpha0=0, beta1=1, alpha1=0):
"""
Simple Twiss matching.
Takes positions x and momenta p, and transforms them according to
initial Twiss parameters:
beta0, alpha0
into final Twiss parameters:
beta1, alpha1
    This is simply the matrix transformation:
xnew = ( sqrt(beta1/beta0) 0 ) . ( x )
pnew ( (alpha0-alpha1)/sqrt(beta0*beta1) sqrt(beta0/beta1) ) ( p )
Returns new x, p
"""
m11 = np.sqrt(beta1/beta0)
m21 = (alpha0-alpha1)/np.sqrt(beta0*beta1)
xnew = x * m11
pnew = x * m21 + p / m11
return xnew, pnew
# Get some Twiss
T0 = twiss_calc(P.cov('x', 'xp'))
T0
# Make a copy, and manipulate
P2 = P.copy()
P2.x, P2.px = twiss_match(P.x, P.px/P['mean_p'], beta0=T0['beta'], alpha0=T0['alpha'], beta1=9, alpha1=-2)
P2.px *= P['mean_p']
twiss_calc(P2.cov('x', 'xp'))
# Make a dedicated routine
def matched_particles(particle_group, beta=None, alpha=None, plane='x', p0c=None, inplace=False):
"""
    Performs simple Twiss 'matching' by applying a linear transformation to
x, px if plane == 'x', or x, py if plane == 'y'
Returns a new ParticleGroup
If inplace, a copy will not be made, and changes will be done in place.
"""
assert plane in ('x', 'y'), f'Invalid plane: {plane}'
if inplace:
P = particle_group
else:
P = particle_group.copy()
if not p0c:
p0c = P['mean_p']
# Use Bmad-style coordinates.
# Get plane.
if plane == 'x':
x = P.x
p = P.px/p0c
else:
x = P.y
p = P.py/p0c
# Get current Twiss
tx = twiss_calc(np.cov(x, p, aweights=P.weight))
# If not specified, just fill in the current value.
if alpha is None:
alpha = tx['alpha']
if beta is None:
beta = tx['beta']
# New coordinates
xnew, pnew = twiss_match(x, p, beta0=tx['beta'], alpha0=tx['alpha'], beta1=beta, alpha1=alpha)
# Set
if plane == 'x':
P.x = xnew
P.px = pnew*p0c
else:
P.y = xnew
P.py = pnew*p0c
return P
# Check
P3 = matched_particles(P, beta=None, alpha=-4, plane='y')
P.twiss(plane='y'), P3.twiss(plane='y')
# These functions are in statistics
from pmd_beamphysics.statistics import twiss_match, matched_particles
###Output
_____no_output_____
###Markdown
Simple Normalized Coordinates in ParticleGroup1D normalized coordinates originate from the normal form decomposition, where the transfer matrix that propagates phase space coordinates $(x, p)$ is decomposed as$M = A \cdot R(\theta) \cdot A^{-1}$And the matrix $A$ can be parameterized asA = $\begin{pmatrix}\sqrt{\beta} & 0\\-\alpha/\sqrt{\beta} & 1/\sqrt{\beta}\end{pmatrix}$
###Code
?A_mat_calc
# Make phase space circle. This will represent some normalized coordinates
theta = np.linspace(0, np.pi*2, 100)
zvec0 = np.array([np.cos(theta), np.sin(theta)])
plt.scatter(*zvec0)
# Make a 'beam' in 'lab coordinates'
MYMAT = np.array([[10, 0],[-3, 5]])
zvec = np.matmul(MYMAT , zvec0)
plt.scatter(*zvec)
###Output
_____no_output_____
###Markdown
With a beam, $\alpha$ and $\beta$ can be determined from moments of the covariance matrix.
###Code
?twiss_calc
# Calculate a sigma matrix, get the determinant
sigma_mat2 = np.cov(*zvec)
np.linalg.det(sigma_mat2)
# Get some twiss
twiss = twiss_calc(sigma_mat2)
twiss
# Analyzing matrices
###Markdown
Simple Normalized Coordinates in ParticleGroup: 1D normalized coordinates originate from the normal form decomposition, where the transfer matrix that propagates phase space coordinates $(x, p)$ is decomposed as $M = A \cdot R(\theta) \cdot A^{-1}$, and the matrix $A$ can be parameterized as $A = \begin{pmatrix}\sqrt{\beta} & 0\\ -\alpha/\sqrt{\beta} & 1/\sqrt{\beta}\end{pmatrix}$
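To make the parameterization concrete, here is a minimal numpy sketch that builds $A$ by hand from illustrative (assumed) values of $\beta$ and $\alpha$ and checks that its determinant is 1, so the transformation preserves phase space area.
###Code
# Minimal sketch: build the analyzing matrix A by hand and check det(A) == 1
import numpy as np

beta, alpha = 10.0, -3.0                        # illustrative Twiss values (assumed)
A_manual = np.array([[np.sqrt(beta), 0.0],
                     [-alpha / np.sqrt(beta), 1.0 / np.sqrt(beta)]])
A_manual_inv = np.linalg.inv(A_manual)          # plays the role of the inverse analyzing matrix
print(np.linalg.det(A_manual))                  # ~1.0: phase space area is preserved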
###Code
?A_mat_calc
# Make phase space circle. This will represent some normalized coordinates
theta = np.linspace(0, np.pi*2, 100)
zvec0 = np.array([np.cos(theta), np.sin(theta)])
plt.scatter(*zvec0)
# Make a 'beam' in 'lab coordinates'
MYMAT = np.array([[10, 0],[-3, 5]])
zvec = np.matmul(MYMAT , zvec0)
plt.scatter(*zvec)
###Output
_____no_output_____
###Markdown
With a beam, $\alpha$ and $\beta$ can be determined from moments of the covariance matrix.
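As a point of reference for what `twiss_calc` returns, the standard second-moment formulas are $\epsilon = \sqrt{\det\Sigma}$, $\beta = \Sigma_{11}/\epsilon$, and $\alpha = -\Sigma_{12}/\epsilon$. A minimal sketch of these formulas (the helper name and the numbers are illustrative assumptions):
###Code
# Sketch of the usual second-moment Twiss formulas (for comparison with twiss_calc)
import numpy as np

def twiss_from_sigma(sigma):
    emit = np.sqrt(np.linalg.det(sigma))        # rms emittance
    return {'beta': sigma[0, 0] / emit,
            'alpha': -sigma[0, 1] / emit,
            'gamma': sigma[1, 1] / emit,
            'emit': emit}

twiss_from_sigma(np.array([[2.0, -0.5], [-0.5, 0.3]]))   # illustrative numbers only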
###Code
?twiss_calc
# Calculate a sigma matrix, get the determinant
sigma_mat2 = np.cov(*zvec)
np.linalg.det(sigma_mat2)
# Get some twiss
twiss = twiss_calc(sigma_mat2)
twiss
# Analyzing matrices
A = A_mat_calc(twiss['beta'], twiss['alpha'])
A_inv = A_mat_calc(twiss['beta'], twiss['alpha'], inverse=True)
# A_inv turns this back into a circle.
zvec2 = np.matmul(A_inv, zvec)
plt.scatter(*zvec2)
###Output
_____no_output_____
###Markdown
Twiss parameters: Effective Twiss parameters can be calculated from the second order moments of the particles.
###Code
# This does not change the phase space area
twiss_calc(np.cov(*zvec2))
# Reset plot
matplotlib.rcParams['figure.figsize'] = (13,8)
###Output
_____no_output_____
###Markdown
x_bar, px_bar, Jx, etc.: These are essentially action-angle coordinates, calculated by using an analyzing Twiss dict
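In equation form, and ignoring the optional mass normalization used below, the analyzing matrix gives $\bar{x} = x/\sqrt{\beta}$, $\bar{p}_x = \alpha x/\sqrt{\beta} + \sqrt{\beta}\,p_x$, and the amplitude $J_x = (\bar{x}^2 + \bar{p}_x^2)/2$.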
###Code
?normalized_particle_coordinate
# Get some example particles
P = ParticleGroup('data/bmad_particles2.h5')
# This is a typical transverse phase space plot
P.plot('x', 'px')
# If no twiss is given, then the analyzing matrix is computed from the beam itself.
normalized_particle_coordinate(P, 'x', twiss=None)
# This is equivalent
normalized_particle_coordinate(P, 'x', twiss=twiss_calc(P.cov('x', 'px')), mass_normalize=False)/np.sqrt(P.mass)
# And is given as a property:
P.x_bar
# The amplitude is defined as:
(P.x_bar**2 + P.px_bar**2)/2
# This is also given as a property
P.Jx
# Note the mass normalization is the same
P.Jx.mean(), P['mean_Jx'], P['norm_emit_x']
# This is now nice and roundish
P.plot('x_bar', 'px_bar')
# Jy also works. This gives some sense of where the emittance is larger.
P.plot('t', 'Jy')
# Sort by Jx:
P = P[np.argsort(P.Jx)]
# Now particles are ordered
plt.plot(P.Jx)
# This can be used to calculate the 95% emittance
P[0:int(0.95*len(P))]['norm_emit_x']
###Output
_____no_output_____
###Markdown
Simple 'matching': Often a beam needs to be 'matched' for tracking in some program. This is a 'faked' transformation that ultimately would need to be realized by a focusing system.
###Code
def twiss_match(x, p, beta0=1, alpha0=0, beta1=1, alpha1=0):
"""
Simple Twiss matching.
Takes positions x and momenta p, and transforms them according to
initial Twiss parameters:
beta0, alpha0
into final Twiss parameters:
beta1, alpha1
    This is simply the matrix transformation:
xnew = ( sqrt(beta1/beta0) 0 ) . ( x )
pnew ( (alpha0-alpha1)/sqrt(beta0*beta1) sqrt(beta0/beta1) ) ( p )
Returns new x, p
"""
m11 = np.sqrt(beta1/beta0)
m21 = (alpha0-alpha1)/np.sqrt(beta0*beta1)
xnew = x * m11
pnew = x * m21 + p / m11
return xnew, pnew
# Get some Twiss
T0 = twiss_calc(P.cov('x', 'xp'))
T0
# Make a copy, and manipulate
P2 = P.copy()
P2.x, P2.px = twiss_match(P.x, P.px/P['mean_p'], beta0=T0['beta'], alpha0=T0['alpha'], beta1=9, alpha1=-2)
P2.px *= P['mean_p']
twiss_calc(P2.cov('x', 'xp'))
# Make a dedicated routine
def matched_particles(particle_group, beta=None, alpha=None, plane='x', p0c=None, inplace=False):
"""
    Performs simple Twiss 'matching' by applying a linear transformation to
x, px if plane == 'x', or x, py if plane == 'y'
Returns a new ParticleGroup
If inplace, a copy will not be made, and changes will be done in place.
"""
assert plane in ('x', 'y'), f'Invalid plane: {plane}'
if inplace:
P = particle_group
else:
P = particle_group.copy()
if not p0c:
p0c = P['mean_p']
# Use Bmad-style coordinates.
# Get plane.
if plane == 'x':
x = P.x
p = P.px/p0c
else:
x = P.y
p = P.py/p0c
# Get current Twiss
tx = twiss_calc(np.cov(x, p, aweights=P.weight))
# If not specified, just fill in the current value.
if alpha is None:
alpha = tx['alpha']
if beta is None:
beta = tx['beta']
# New coordinates
xnew, pnew = twiss_match(x, p, beta0=tx['beta'], alpha0=tx['alpha'], beta1=beta, alpha1=alpha)
# Set
if plane == 'x':
P.x = xnew
P.px = pnew*p0c
else:
P.y = xnew
P.py = pnew*p0c
return P
# Check
P3 = matched_particles(P, beta=None, alpha=-4, plane='y')
P.twiss(plane='y'), P3.twiss(plane='y')
# These functions are in statistics
from pmd_beamphysics.statistics import twiss_match, matched_particles
###Output
_____no_output_____ |
.ipynb_checkpoints/jpeg_charange-checkpoint.ipynb | ###Markdown
Dumping the JPEG file with a binary file dump program
###Code
f_name="Parrots.jpg"
f=open(f_name,"rb")
s=f.read()
f.close()
print(" ",end="")
for cnt in range(16):
print("{:02x} ".format(cnt),end="")
print("")
cnt=0
rows=0
for byte in s:
if cnt==0:
print("{:03x}# : ".format(rows),end="")
print("{:02x} ".format(byte),end="")
cnt+=1
if cnt==16:
cnt=0
print("")
rows+=1
###Output
00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f
000# : ff d8 ff e0 00 10 4a 46 49 46 00 01 01 01 00 48
001# : 00 48 00 00 ff db 00 43 00 05 03 04 04 04 03 05
002# : 04 04 04 05 05 05 06 07 0c 08 07 07 07 07 0f 0b
003# : 0b 09 0c 11 0f 12 12 11 0f 11 11 13 16 1c 17 13
004# : 14 1a 15 11 11 18 21 18 1a 1d 1d 1f 1f 1f 13 17
005# : 22 24 22 1e 24 1c 1e 1f 1e ff db 00 43 01 05 05
006# : 05 07 06 07 0e 08 08 0e 1e 14 11 14 1e 1e 1e 1e
007# : 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e
008# : 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e
009# : 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e 1e ff c0
00a# : 00 11 08 00 96 00 96 03 01 22 00 02 11 01 03 11
00b# : 01 ff c4 00 1f 00 00 01 05 01 01 01 01 01 01 00
00c# : 00 00 00 00 00 00 00 01 02 03 04 05 06 07 08 09
00d# : 0a 0b ff c4 00 b5 10 00 02 01 03 03 02 04 03 05
00e# : 05 04 04 00 00 01 7d 01 02 03 00 04 11 05 12 21
00f# : 31 41 06 13 51 61 07 22 71 14 32 81 91 a1 08 23
010# : 42 b1 c1 15 52 d1 f0 24 33 62 72 82 09 0a 16 17
011# : 18 19 1a 25 26 27 28 29 2a 34 35 36 37 38 39 3a
012# : 43 44 45 46 47 48 49 4a 53 54 55 56 57 58 59 5a
013# : 63 64 65 66 67 68 69 6a 73 74 75 76 77 78 79 7a
014# : 83 84 85 86 87 88 89 8a 92 93 94 95 96 97 98 99
015# : 9a a2 a3 a4 a5 a6 a7 a8 a9 aa b2 b3 b4 b5 b6 b7
016# : b8 b9 ba c2 c3 c4 c5 c6 c7 c8 c9 ca d2 d3 d4 d5
017# : d6 d7 d8 d9 da e1 e2 e3 e4 e5 e6 e7 e8 e9 ea f1
018# : f2 f3 f4 f5 f6 f7 f8 f9 fa ff c4 00 1f 01 00 03
019# : 01 01 01 01 01 01 01 01 01 00 00 00 00 00 00 01
01a# : 02 03 04 05 06 07 08 09 0a 0b ff c4 00 b5 11 00
01b# : 02 01 02 04 04 03 04 07 05 04 04 00 01 02 77 00
01c# : 01 02 03 11 04 05 21 31 06 12 41 51 07 61 71 13
01d# : 22 32 81 08 14 42 91 a1 b1 c1 09 23 33 52 f0 15
01e# : 62 72 d1 0a 16 24 34 e1 25 f1 17 18 19 1a 26 27
01f# : 28 29 2a 35 36 37 38 39 3a 43 44 45 46 47 48 49
020# : 4a 53 54 55 56 57 58 59 5a 63 64 65 66 67 68 69
021# : 6a 73 74 75 76 77 78 79 7a 82 83 84 85 86 87 88
022# : 89 8a 92 93 94 95 96 97 98 99 9a a2 a3 a4 a5 a6
023# : a7 a8 a9 aa b2 b3 b4 b5 b6 b7 b8 b9 ba c2 c3 c4
024# : c5 c6 c7 c8 c9 ca d2 d3 d4 d5 d6 d7 d8 d9 da e2
025# : e3 e4 e5 e6 e7 e8 e9 ea f2 f3 f4 f5 f6 f7 f8 f9
026# : fa ff da 00 0c 03 01 00 02 11 03 11 00 3f 00 e7
027# : 75 bd 3e 4b 48 41 75 c8 c5 72 61 89 76 00 77 af
028# : 5e f1 fd b4 5f 61 76 50 01 0b 5e 45 00 6f 30 92
029# : 3a 9a e5 ad 4f 95 97 4d e8 5d b4 87 23 26 ae a2
02a# : 05 35 15 a8 f9 39 a9 5c e0 64 57 33 35 43 98 81
02b# : de a4 59 b6 af 5a a2 d2 1e f4 88 e4 9e b5 2d 0c
02c# : d1 f3 89 19 ed 55 2e 24 4f ef 60 d2 a9 f9 0f 26
02d# : aa 5c 82 69 05 84 79 54 11 b7 07 de ad da 46 d3
02e# : 1c 46 a4 9a cd b6 b4 96 ea e5 61 8b 3b 98 fe 55
02f# : e9 de 18 f0 f8 82 14 06 3d cd ef 5a 53 a6 e6 f4
030# : 13 7c a7 28 9a 6d e3 2e 7c b3 8f a5 51 d4 6d 6e
031# : 6d d4 bb 44 c0 0a f6 8b 5d 17 2a 07 96 07 e1 51
032# : ea 3e 1b 8a 58 48 68 81 04 7a 57 5f d5 1d 8c 5b
033# : 67 85 7d a7 77 7a 64 d3 8c 75 ad cf 1b f8 66 6d
034# : 2a 46 b8 b7 53 e5 e7 91 5c 64 d3 12 2b 9f 95 c5
035# : d9 8d 24 d0 5e ca 1f 38 aa 0f 70 50 81 9a 49 a4
036# : 3b ba 55 79 58 33 0a bb 21 a6 6b 59 dd e4 0c 9a
037# : b8 c3 7f 35 95 67 1e e5 04 77 ad 7b 25 3b 42 b0
038# : e9 58 4e 6a 22 75 05 86 2d c0 9a 2a cc 0b b2 47
039# : 52 3d e8 a4 ea 3e 81 73 d3 fc 7f 78 ad 69 22 03
03a# : d4 57 9a a2 80 a7 38 ae 97 c4 d7 a6 ea 53 1a 9c
03b# : 8c d6 05 ca ed da 2b aa b4 f9 a4 38 44 58 9c f4
03c# : a9 49 c8 a8 22 61 56 95 49 5e 95 ca da 37 e5 7d
03d# : 8a 8c 70 4e 69 88 d8 6a b2 f0 93 cd 46 21 3b b8
03e# : a4 c4 4a 99 23 a9 a8 e6 07 15 6a 08 ce de 94 b2
03f# : c6 00 e9 52 08 bf e0 4b 64 7d 45 dd c6 48 c0 15
040# : ee 1e 1d d2 c3 44 ac 56 bc 57 c2 4e 60 bd 0c 78
041# : c9 15 f4 6f 84 16 39 74 c8 d8 63 38 e9 5e 86 0e
042# : cd 58 52 5a 8f 86 c9 54 01 b7 8a 59 ac 83 29 f9
043# : 70 2b 72 1b 6d c7 a5 49 25 aa 91 8e 2b bd c9 22
044# : 6c 8f 26 f1 7e 93 0d cd 94 ca ca 0f 1c f1 5f 3a
045# : f8 97 4c 6b 2d 46 58 94 65 33 90 45 7d 81 ae e9
046# : 2a d0 c8 73 c1 1d 2b e7 bf 89 1a 1f d8 f5 56 95
047# : 41 31 c9 d8 f6 35 e5 e3 6b 28 34 d9 9d f9 5e a7
048# : 96 2c 1b c9 18 aa d7 76 6d 13 07 03 e5 35 d2 3d
049# : b0 8e 70 d8 e0 9a 9a f2 c9 65 b5 38 1d ab ce a9
04a# : 8a 74 aa 2e cc c6 ac dd 39 ae cc c4 d1 40 32 14
04b# : c5 6f 47 6f b0 83 8a c3 d2 63 68 af b6 9e d5 d7
04c# : 08 83 44 0d 73 63 ea b8 54 f5 30 af 2b 48 a1 34
04d# : 38 60 47 71 45 5d 9a 2c aa 9c 51 57 4b 11 78 26
04e# : 69 0a ab 95 1b ad 6f 67 6c 9e 74 b0 24 c4 8e 0b
04f# : 33 1e 7f 95 47 0c b0 82 1c 47 12 e0 e0 0f 2d 72
050# : 6a 7f 19 78 a6 d3 50 85 04 56 b1 c3 26 d5 de 14
051# : 1c 02 3d 07 bd 73 10 ea 2d 73 28 82 d6 09 a5 72
052# : 39 11 a9 38 fc 07 35 c9 28 d4 9c be 2b 9f b9 51
053# : a7 82 c3 45 25 4e 31 7e 88 eb 46 a1 24 39 08 88
054# : 03 00 32 11 40 a9 05 eb b1 0d 22 a1 52 39 c9 18
055# : 1f 85 66 cf a0 f8 86 2d 34 df 49 a5 5d 8b 78 d4
056# : b3 95 03 2a a3 f8 8a f5 00 7a e2 a8 4f 67 af a4
057# : 2b 38 d3 e7 78 84 7e 67 0a 49 d9 82 d9 e3 b6 01
058# : 35 7f 56 a8 b7 4c bf af e0 ed a5 bf 03 4e f2 de
059# : c2 ea 4c a2 1b 77 23 aa 7d d2 7e 87 fa 56 6d dd
05a# : 84 b6 6e 0c 80 3c 6d f7 64 5e 55 bf c0 fb 56 49
05b# : d6 94 a1 56 3f 75 8f 5e 0d 4f a7 78 98 23 ac 32
05c# : e2 6b 72 08 74 27 ae 7b 63 fa d6 d4 ea 4e 1b ec
05d# : 78 39 9e 4d 81 c5 c7 9a 8d a1 2f c1 fc 8b d1 90
05e# : 29 b3 15 22 89 15 72 3c 96 2c 8c 03 29 3d 71 ef
05f# : 4c 74 39 19 35 d9 cc ad 73 f3 ea 94 e5 4a 6e 12
060# : 5a a3 57 42 8b 33 2e 3d 6b d9 bc 1d aa 98 12 38
061# : b7 8c 0a f2 5f 0d 46 ad 2f 03 24 0e 2b bc d0 6c
062# : ae 65 99 76 31 08 0f 38 ae 8a 35 79 15 d1 83 95
063# : f4 3d 8b 4d d4 61 9b 11 a1 dc e7 d2 9d a9 3d d2
064# : 02 63 b5 91 87 b0 ae 4e d3 cf d1 e4 8e e0 02 c9
065# : 9c 9a f4 fd 17 50 b5 d4 ec 52 44 2a c4 81 91 5b
066# : fd 61 cd db a9 17 b1 e6 d7 9a c2 3b 34 12 e5 58
067# : 76 6a f3 8f 1f 5b 25 ec 8c 84 02 36 f0 45 7a e7
068# : c4 ef 0b 89 ed 9a f6 c4 79 72 a8 c8 2a 3b d7 88
069# : bd ec 82 49 ad ef 0e 24 52 41 07 b1 af 27 19 52
06a# : 53 4e 12 26 a5 9c 75 3c c7 55 81 a2 90 a1 5e 55
06b# : b9 a9 ac b6 cb 0e d2 2b 43 c4 28 b2 5d 4a 57 1c
06c# : d6 4e 94 c5 26 28 7a 57 99 39 ba b4 3c e2 79 d5
06d# : 27 cd 0b 75 46 75 cd bf 93 a8 86 03 82 6b a2 b5
06e# : 01 ad c7 d2 a9 6a d0 e5 83 81 df 35 6e c0 fe e3
06f# : 15 58 a9 7b 4a 50 91 15 5f 34 53 24 2b 91 8f 43
070# : 45 3c 73 9a 2b 9a 9d 4b 46 c6 71 96 84 f2 a5 bd
071# : c4 ab 1c f1 06 47 3b 73 8c e3 f9 66 bd 23 c2 ab
072# : 36 84 d3 5b 44 be 5e 9d 32 0f 31 0a 15 65 e0 00
073# : 54 90 a4 90 71 db 9c e3 ef 63 3e 5d a6 b5 e3 6a
074# : 08 f1 be 1a 36 c8 2c 72 01 1e d8 39 fc 8d 7a ce
075# : 93 a5 ec 84 5f 39 72 1b 6b 4e 33 24 31 33 1e 09
076# : c4 88 c3 3d 47 18 18 ed d0 57 bb 82 8d a5 73 f6
077# : 4c ea 49 c5 45 f5 20 d0 7c 39 06 9d 77 aa de 1d
078# : 4e 6b 94 d4 14 81 6e 0b 1d 99 18 25 54 00 57 1c
079# : f5 e7 92 29 9f 64 1a 7a c8 5e 40 c5 ad ce 22 79
07a# : c0 c3 0d c7 01 c7 dd 56 dc 78 fa f7 c9 1d 0c 69
07b# : 6f 6b 62 b1 db af c9 b0 c8 a9 19 dd e6 11 83 80
07c# : 73 b3 92 78 e1 4f 23 03 bd 79 47 c5 4f 1d 5a 46
07d# : 6f 74 73 63 1c b7 7b 17 cb 1e 61 d9 6f 9e ac f9
07e# : c3 3c 80 e4 05 38 5e 84 86 af 4e 57 97 a9 f2 b4
07f# : e5 0c 2c 5d de 87 25 79 6b 3f 8a bc 4f 2b 45 6c
080# : 96 d6 b2 bb 07 b9 28 04 6a 72 77 36 46 37 1d c7
081# : 3d c9 ae f7 4a fd 9f a7 d5 6c e1 d4 34 ad 69 92
082# : 29 23 dc a2 68 fe 69 0f 6c 01 8d b9 f4 39 a9 7f
083# : 67 ff 00 09 d9 6b fe 1a bd d5 75 53 3c f2 ca cd
084# : 6f 09 69 0e 10 01 d4 01 ee dd 3a 70 2b e8 2f 09
085# : c6 74 fd 0e d2 cd 98 79 90 c4 03 1f 71 d4 d7 33
086# : a5 08 a6 a5 ab 09 66 15 6a 5a 50 d1 2d 8f 92 2d
087# : 60 68 22 48 4e e3 b3 83 b8 73 9e f5 33 29 3c 62
088# : b7 bc 5b 04 31 78 b3 57 8a 10 04 6b 7b 30 5c 7a
089# : 6f 35 9a 22 c3 02 05 78 d3 c5 c5 3e 54 7c e5 7c
08a# : 63 a9 51 b9 bd 59 a3 e1 d8 e7 b6 f9 d5 32 1b ad
08b# : 76 9a 6e a5 71 a6 4f 1c e1 48 53 d7 23 83 5c ae
08c# : 9b 36 c0 3d 2b be d0 d2 cf 54 b4 30 3e d3 91 d0
08d# : f6 35 d3 4a aa 9a b2 64 c6 5c db 33 d0 3c 3f 75
08e# : 61 e2 0d 34 24 6c a2 42 3e e9 f5 ac a9 ae 2f fc
08f# : 2d 7a 5d 43 9b 7c e4 8f 4a e3 07 f6 9f 84 b5 55
090# : b8 8b 7b da 13 ce 3b 0a f4 e4 bb b1 f1 2e 84 26
091# : 0c 85 b6 65 89 ed c7 5a d9 37 2d 1e e6 aa 57 f5
092# : 2f e9 be 25 b3 d6 2c f6 34 88 43 0e 79 af 15 f8
093# : 99 a3 58 dc 78 87 cc b2 d5 2d e1 52 c5 65 61 f3
094# : 7e 40 75 fc e9 9a f6 ad 6b a2 35 c4 76 97 4c 43
095# : 92 ac 7b 0f 61 ed ef 5c 7b cd a8 6a 37 5f 3e 6c
096# : d0 fc ca f3 a3 2e e5 c6 72 a3 1c 80 39 c9 c0 c6
097# : 48 ce 0d 71 d7 75 6a cf d9 42 37 6b 73 92 55 fd
098# : b6 91 44 97 1a 46 95 1b 31 9e f6 e6 5f 52 a1 54
099# : fe 5c ff 00 3a ad a3 78 37 50 d7 ef 24 ff 00 84
09a# : 6a c2 fe ee 24 e4 ce ea ab 10 f5 cb 9d a3 8f 6c
09b# : d7 55 e1 4f 04 c5 a8 ea 1f 67 d7 2f 66 46 2a 7c
09c# : b8 22 0a a6 47 56 60 d1 b1 24 e0 fc 8c 38 e7 83
09d# : 9c 65 73 d1 f8 ff 00 c4 5a 87 87 fc 15 e2 31 73
09e# : 72 91 db 5e ce b6 7a 44 36 bb 54 45 01 45 c8 03
09f# : 03 07 87 c8 39 fd 45 6f 83 ca a7 76 eb 3d fa 1a
0a0# : 47 0d 1e 56 e4 79 17 88 74 bb cd 32 73 67 7f 07
0a1# : 95 32 80 78 60 ca c3 b1 56 52 43 0f 70 6a 85 99
0a2# : c2 95 aa e7 c4 37 17 70 05 bc 78 e5 03 80 e5 36
0a3# : b2 fd 40 ea 29 f6 73 a4 99 2a 46 7b 8c fe a3 da
0a4# : a3 17 97 ce 8d 27 6d 52 38 eb 52 e5 8e 9b 17 55
0a5# : b1 45 44 ac 28 af 05 b4 8e 1b b1 da 44 91 c7 ab
0a6# : 5b 19 c3 34 46 55 0f 18 6d b9 07 b6 6b db 26 d4
0a7# : 7c 33 a4 68 46 fb 52 4b 3b 1d 3e dc 7c 8e c8 32
0a8# : 09 fe e8 c7 53 c7 a9 35 f3 fd a4 ad 1c 80 67 12
0a9# : 29 dc 32 7a 8a e7 fe 23 de ea d7 2b a7 a5 c5 d3
0aa# : 4b 65 18 65 8a 2e ca d9 3c 9f 53 8e 01 f6 c5 7d
0ab# : 5e 09 a4 dc 4f d8 38 85 c9 41 55 5d 0e eb e2 27
0ac# : c6 8b 9d 61 9e c3 c3 49 71 61 60 06 d3 70 e4 0b
0ad# : 89 87 a6 79 f2 d3 d8 72 7b e3 a5 79 7e ad e6 b5
0ae# : d9 b9 32 db 81 20 de 79 f9 98 9e 49 f7 c7 a7 bd
0af# : 65 c2 be 67 a8 3e e3 bf a5 59 93 4e 6b ec 08 66
0b0# : f2 de 39 09 50 d9 21 81 03 a1 19 c7 41 d6 bd 4e
0b1# : 64 8f 81 ab 39 cd f3 48 fa 9b f6 74 98 27 c3 db
0b2# : 18 40 c9 92 69 48 20 75 f9 8f 3f d2 bd 6a da e7
0b3# : ec ec af b4 85 1d 49 38 c0 ef 5e 3f fb 39 c1 37
0b4# : f6 3c 90 b0 75 86 dd 14 44 49 04 61 b9 27 3f 5c
0b5# : 7e 55 d6 7c 58 f1 24 5a 66 8f 25 85 a4 9b ae 6e
0b6# : 90 c4 31 d5 41 18 66 fc 8e 3e a6 b8 b1 4d 42 2e
0b7# : 67 6d 2a d1 85 1b cb a1 e3 ba bd f0 be d6 af af
###Markdown
Extracting the segment structure of the JPEG file
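For context: a JPEG stream is a sequence of markers, each an 0xFF byte followed by a marker code; most markers carry a two-byte big-endian length that includes the length bytes themselves, and inside the entropy-coded data that follows SOS a literal 0xFF data byte is stored as the stuffed pair FF 00. Below is a minimal sketch (assuming a well-formed file) that simply hops over these lengths and lists (marker, length) pairs, complementary to the byte-by-byte state machine in the next cell.
###Code
# Minimal sketch: list (marker, length) pairs by jumping over segment lengths
import struct

def list_segments(path):
    with open(path, "rb") as fh:
        buf = fh.read()
    pos = 2                                     # skip the SOI marker (FF D8)
    while pos + 3 < len(buf) and buf[pos] == 0xFF:
        marker = buf[pos + 1]
        if marker == 0xD9:                      # EOI has no payload
            print("FFD9 (EOI)")
            break
        (length,) = struct.unpack(">H", buf[pos + 2:pos + 4])
        print("FF{:02X} length={}".format(marker, length))
        if marker == 0xDA:                      # SOS: entropy-coded data follows
            break
        pos += 2 + length                       # 2 marker bytes + segment length

list_segments(f_name)                           # f_name is defined earlier in the notebook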
###Code
marker_def={0xd8:"SOI",0xd9:"EOI",0xda:"SOS",0xe0:"APP0",0xdb:"DQT",0xc0:"SOF0",0xc4:"DHT"}
flag_marker= False
flag_seg=False
flag_seg_cnt=False
flag_seg_data=False
flag_SOI= False
flag_EOI= False
flag_SOS= False
flag_err=False
jpeg_struct=[]
seg_buf=[]
byte_bufs=b''
seg_count=0
f=open(f_name,"rb")
s=f.read()
f.close()
for byte in s:
    if flag_marker==False and byte==0xFF : # check for the 0xFF marker prefix
flag_marker=True
else:
        ####### Marker handling #########
if flag_marker==True :
            # handle FF00 (stuffed 0xFF data byte)
if byte==0x00 :
byte_bufs=byte_bufs+bytes.fromhex("{:02X}".format(0))
            # marker defined in the marker_def dictionary
elif byte in marker_def:
                # SOI (start of image) check
if flag_SOI==False :
if marker_def[byte]=="SOI" :
flag_SOI=True
jpeg_struct=jpeg_struct+[["SOI"]]
else:
flag_err=True;
                # EOI (end of image) check
elif marker_def[byte]=="EOI":
                    # store the IMAGE DATA
#jpeg_struct=jpeg_struct+[["IMG","{:d}".format(len(byte_bufs)),byte_bufs.hex()]]
jpeg_struct=jpeg_struct+[["IMG","{:d}".format(len(byte_bufs)),byte_bufs]]
jpeg_struct=jpeg_struct+[["EOI"]]
flag_EOI=True
                # other defined markers (segment processing)
elif byte in marker_def:
seg_buf=[""+marker_def[byte]]
flag_seg=True
                    # SOS (start of scan) check
if marker_def[byte]=="SOS":
flag_SOS=True
                # undefined marker (segment processing)
else:
seg_buf=["FF{:X}".format(byte)]
flag_seg=True
flag_marker=False
else:
            # segment processing
if flag_seg==True:
if(flag_seg_cnt==False):
seg_count=seg_count+1
seg_size_h=byte
flag_seg_cnt=True
elif(flag_seg_data==False):
seg_size=seg_size_h*256+byte
seg_buf=seg_buf+["{:d}".format(seg_size)]
seg_size=seg_size-2
byte_bufs=b''
flag_seg_data=True
else:
byte_bufs=byte_bufs+bytes.fromhex("{:02X}".format(byte))
seg_size=seg_size-1
if seg_size==0:
#seg_buf=seg_buf+[byte_bufs.hex()]
seg_buf=seg_buf+[byte_bufs]
jpeg_struct=jpeg_struct+[seg_buf]
byte_bufs=b''
flag_seg=False
flag_seg_cnt=False
flag_seg_data=False
            # IMAGE DATA handling (after the SOS segment)
elif flag_SOS==True and flag_seg==False:
byte_bufs=byte_bufs+bytes.fromhex("{:02X}".format(byte))
            # error handling
else:
flag_err=True
if flag_err==True or flag_EOI==True:
break;
if flag_err==False and flag_EOI==True:
print("Succeeded!!")
###Output
Succeeded!!
###Markdown
Output of the extracted JPEG file structure (the list jpeg_struct)
###Code
jpeg_struct
len(jpeg_struct)
for seg in jpeg_struct:
print(seg[0])
flag_SOI= False
flag_EOI= False
flag_SOS= False
flag_err=False
vlen = [0] * 16  # Huffman code-length counts, rebuilt for each DHT segment below
for seg in jpeg_struct:
print(seg[0])
if(seg[0] == "IMG"):
print(" DATA LENGTH : ",seg[1],sep="")
else:
if(seg[0] == "SOI"):
flag_SOI=True
elif(seg[0] == "EOI"):
flag_EOI=True
else:
print(" SEG LENGTH : ",seg[1])
data=seg[2]
if(seg[0] == "APP0"):
print(" ID : ",data[0:4].decode(),sep="")
print(" Ver : ",data[5],".",data[6],sep="")
print(" U : ",data[7],sep="")
print(" Xd : ",data[8]*256+data[9],sep="")
print(" Yd : ",data[10]*256+data[11],sep="")
print(" Xd : ",data[12],sep="")
print(" Yd : ",data[13],sep="")
for i in range(data[12]*data[13]):
print(" RGB",i,":",data[14+i],sep="")
elif(seg[0] == "DQT"):
length = int(seg[1])-3
base = 0
while(length >0):
                    pqn=data[base]>>4  # precision flag of the table at the current offset
tqn=data[base]&0x0F;
if(pqn==0):
qlen=64;
else:
qlen=128;
print(" Pq",tqn," : ",pqn,sep="")
print(" Tq",tqn," : ",tqn,sep="")
for i in range(qlen):
print(" Q",tqn,"-",ascii(i)," : ",data[base+1+i],sep="")
length-=qlen
base+=qlen
elif(seg[0] == "SOF0"):
nf=data[5]
print(" P : ",data[1])
print(" Y : ",data[1]*256+data[2],sep="")
print(" X : ",data[3]*256+data[4],sep="")
print(" Nf : ",data[5])
for i in range(nf):
print(" C",i+1," : ",data[6+i*3],sep="")
print(" H",i+1," : ",data[7+i*3]>>4,sep="")
print(" V",i+1," : ",data[7+i*3]&0x0F,sep="")
print(" Tq",i+1," : ",data[8+i*3],sep="")
elif(seg[0] == "DHT"):
thn=data[0]&0x0f
tcn=data[0]>>4
print(" Tc",thn," : ",tcn)
print(" Th",thn," : ",thn)
vlen=[]
for i in range(16):
vlen+= [data[1+i]]
print(" L",i+1," ; ",data[1+i],sep="")
base = 17
for i in range(16):
                    for j in range(vlen[i]):
if(tcn==0):
print(" V",i+1,"-",j+1," : ",data[base+j],sep="")
else:
print(" V",i+1,"-",j+1," : ",data[base+j]>>4,",",data[base+j]&0x0F,sep="")
base+=vlen[i]
elif(seg[0] == "SOS"):
ns=data[0]
print(" Ns : ",ns)
for i in range(ns):
print(" Cs",i+1," : ",data[1+i*2],sep="")
print(" Td",i+1," : ",data[2+i*2]>>4,sep="")
print(" Ta",i+1," : ",data[2+i*2]&0x0F,sep="")
jpeg_struct[2]
ascii(10)
###Output
_____no_output_____
###Markdown
Reading the JPEG file with matplotlib
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib.colors as mpcol
img = mpimg.imread(f_name)
imgplot = plt.imshow(img)
imgplot.axes.set_xticks([]) # remove the x-axis ticks
imgplot.axes.set_yticks([]) # remove the y-axis ticks
###Output
_____no_output_____
###Markdown
Checking the loaded img: it appears to be a numpy ndarray of 150×150 pixels with three RGB color channels.
###Code
type(img)
img.shape
img.dtype
img.size,150*150*3
###Output
_____no_output_____ |
bosch.ipynb | ###Markdown
Kaggle - Bosch Production Line Performance (Handling Large Data With Limited Memory)Welcome! This jupyter notebook will demonstrate how to work with large datasets in python by analyzing production line data associated with the Bosch Kaggle competition (https://www.kaggle.com/c/bosch-production-line-performance). [This notebook is still a work in progress and will be updated as I improve algorithm performance.]Questions, comments, suggestions, and corrections can be sent to [email protected]. Business ChallengeBosch, a manufacturing company, teamed up with Kaggle to challenge teams to create a classification algorithm that predicts "internal failures along the manufacturing process using thousands of measurements and tests made for each component along the assembly line." DataBosch provided six huge files worth of data for the challenge (https://www.kaggle.com/c/bosch-production-line-performance/data). Three sets of training data--numeric, categorical, and dates--and the equivalent sets of test data. They contain a large number of features (one of the largest sets ever hosted on Kaggle), and the uncompressed files come out to **14.3 GB**. One of the largest difficulties associated with the competition is handing this amount of data. One strategy is to move the data to Amazon Web Services and use big data tools like Spark and Hadoop. Often, however, we are forced to extract value from data given real-world constraints like less memory and processing power. In this notebook, I'll work through an alternative approach where I split and simplify the data in order to process it on my 8GB RAM laptop.Let's start by examining the training data. Because the files are so large, we can't do the usual practice of using pandas to read the .CSV file into a dataframe. Instead, let's just look at a few lines.
###Code
import pandas as pd
line_count = 0
extracted_lines = []
with open('train_numeric.csv') as f:
for line in f:
if line_count < 6:
extracted_lines.append(line)
line_count += 1
else:
break
for line in extracted_lines:
print line[:40], '...', line[-40:]
###Output
Id,L0_S0_F0,L0_S0_F2,L0_S0_F4,L0_S0_F6,L ... 4258,L3_S51_F4260,L3_S51_F4262,Response
4,0.03,-0.034,-0.197,-0.179,0.118,0.116, ... ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0
6,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ... ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0
7,0.088,0.086,0.003,-0.052,0.161,0.025,- ... ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0
9,-0.036,-0.064,0.294,0.33,0.074,0.161,0 ... ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0
11,-0.055,-0.086,0.294,0.33,0.118,0.025, ... ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,0
###Markdown
We see that each line in train_numeric.csv represents a component with an Id, a long list of features (many of which are blank), and a Response indicating passage or failure of QC. Further examination shows that only 0.58% of Responses are failures, or *1*. Because we already have more data than we can handle, we're going to simplify by only working with train_numeric.csv and disregard train_categorical.csv and train_date.csv. Furthermore, we need to deal with the fact that train_numeric.csv is larger than we can handle and also is highly imbalanced. To do this, we're going to pull out all of the rows with Positive responses and randomly sample an equivalent number of negative rows; sampling negatives with probability 0.0058 mirrors the 0.58% failure rate, so the two classes end up roughly equal in size. We'll make a new .CSV file that is 1/100th the size of the original and is now equally balanced.
###Code
import random
line_count = 0
extracted_positive_lines = []
with open('train_numeric.csv') as f:
for line in f:
if line_count == 0:
extracted_positive_lines.append(line)
line_count += 1
elif line[-2] == '1':
extracted_positive_lines.append(line)
line_count = 0
extracted_negative_lines = []
with open('train_numeric.csv') as f:
for line in f:
if line_count == 0:
line_count += 1
continue
if line_count > 0 and random.random() < 0.0058:
extracted_negative_lines.append(line)
combined_extracted_lines = extracted_positive_lines + extracted_negative_lines
with open('train_numeric_short.csv', 'w') as f:
for line in combined_extracted_lines:
f.write(line)
###Output
_____no_output_____
###Markdown
Now we can move the new .CSV to a pandas dataframe and replace the empty features with *0*.
###Code
train_numeric_short_df = pd.read_csv('train_numeric_short.csv')
train_numeric_short_df.fillna(value=0, inplace=True)
train_numeric_short_df.shape
###Output
_____no_output_____
###Markdown
We're now working with 13769 samples with 968 features not including Id and Response. Let's use train_test_split from sklearn.cross_validation to split our training data, which will let us quickly evaluate and compare various classifiers.
###Code
from sklearn.cross_validation import train_test_split
X = train_numeric_short_df.drop(['Response', 'Id'], axis=1)
y = train_numeric_short_df['Response']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
Comparing Classifiers: With our training data split into new training and test sets, we can feed it into various scikit-learn classifiers. The Kaggle competition is being judged using the Matthews correlation coefficient, so we'll use that to find the best classifier: * https://en.wikipedia.org/wiki/Matthews_correlation_coefficient * http://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html Additionally, we can use [recursive feature elimination with cross-validation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html). Our data set is high dimensional with 968 features. Removing features of low importance can reduce the model complexity, overfitting, and training time.
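For reference, the Matthews correlation coefficient is computed from the confusion matrix as $MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$ and ranges from $-1$ to $+1$, with $0$ corresponding to random guessing, which makes it a sensible single-number summary for an imbalanced classification problem like this one.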
###Code
from sklearn.metrics import matthews_corrcoef
from sklearn.feature_selection import RFECV
###Output
_____no_output_____
###Markdown
We can start with a simple [logistic regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) combined with the recursive feature elimination.
###Code
from sklearn.linear_model import LogisticRegression
clf = RFECV(LogisticRegression(), step=200)
clf.fit(X_train, y_train)
y_output = clf.predict(X_test)
matthews_corrcoef(y_test, y_output)
###Output
_____no_output_____
###Markdown
Next, let's try a [linear SVC model](http://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html).
###Code
from sklearn.svm import LinearSVC
clf = RFECV(LinearSVC(), step=200)
clf.fit(X_train, y_train)
y_output = clf.predict(X_test)
matthews_corrcoef(y_test, y_output)
###Output
_____no_output_____
###Markdown
Let's try the [ExtraTreesClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.ExtraTreesClassifier.html).
###Code
from sklearn.ensemble import ExtraTreesClassifier
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
clf = RFECV(forest, step=200)
clf.fit(X_train, y_train)
y_output = clf.predict(X_test)
matthews_corrcoef(y_test, y_output)
###Output
_____no_output_____
###Markdown
Now that we've settled on the ExtraTreesClassifier, let's retrain it using our full training set from before we split it with train_test_split.
###Code
clf.fit(X, y)
###Output
_____no_output_____
###Markdown
We're ready to analyze the actual test data provided by Bosch. As with the training data, though, the 2.1 GB file is quite large for my laptop. We can split up the test data into files of 100000 lines each and get predictions for each smaller file and then stitch the predictions back together for a final submission file.Fortunately, pandas can read .CSV files in chunks which makes it easy to split up the test data file.
###Code
test = pd.read_csv('test_numeric.csv', chunksize=100000)
file_number = 0
for chunk in test:
path = 'test_data/short' + str(file_number) + '.csv'
chunk.to_csv(path)
file_number += 1
for i in range(12):
test_numeric_short_df = pd.read_csv('test_data/short' + str(i) + '.csv').fillna(value=0)
Ids = test_numeric_short_df.ix[:,'Id']
X_test_real = test_numeric_short_df.drop(['Id', 'Unnamed: 0'], axis=1)
    y_output_real = clf.predict(X_test_real)  # use the RFECV classifier fitted above
output = pd.Series(y_output_real, name='Response')
output = pd.concat([Ids, output], axis=1)
output.to_csv('test_output/test_output' + str(i) + '.csv', index=False)
###Output
_____no_output_____
###Markdown
Now we just have to put our prediction files together into a single file.
###Code
import shutil
shutil.copyfile('test_output/test_output0.csv', 'test_output/output_combined.csv')
output_combined = open('test_output/output_combined.csv', 'a')
for i in range(1,12):
lines = open('test_output/test_output' + str(i) + '.csv', 'r').readlines()
for line in lines[1:]:
output_combined.write(line)
output_combined.close()
###Output
_____no_output_____ |
supplementary_notebooks/set_default_entropy_radius.ipynb | ###Markdown
Tuning of latent space entropy radius: To evaluate how clusters overlap in the latent space, we compute the mean entropy of cluster assignment across all datapoints that fall within a radius of given encoded training instances. This notebook explores how the number of neighbors that fall within that radius on the latent space depends on several variables (i.e. number of clusters and encoding dimensions).
###Code
data_path = "../../Desktop/deepoftesttemp/"
# Load data and tag a few test videos
proj = deepof.data.project(path=data_path, arena_dims=[380]).run()
rules = proj.rule_based_annotation()
coords = proj.get_coords(propagate_annotations=False)
list(range(2500, 15001, 2500))
# Load the models, and try different radii
# each dataset is rank 3: encoding dimensions, number of clusters, and different radii
x, y = np.zeros([6, 6, 100]), np.zeros([6, 6, 100])
# Iterate over encoding dimensions
for a, n in enumerate(tqdm(range(2500, 15001, 2500))):
X_train, _, _, _ = coords.preprocess(shuffle=True, window_size=25, test_videos=0)
X_train = X_train[np.random.choice(range(X_train.shape[0]), n, replace=False)]
for b, d in enumerate((2, 4, 6, 8, 10, 12)):
gmvaep = SEQ_2_SEQ_GMVAE(encoding=d, number_of_components=15).build(
X_train.shape
)[3]
        # Get encoder and grouper from the full model
cluster_means = [
layer for layer in gmvaep.layers if layer.name == "latent_distribution"
][0]
cluster_assignment = [
layer for layer in gmvaep.layers if layer.name == "cluster_assignment"
][0]
encoder = tf.keras.models.Model(gmvaep.layers[0].input, cluster_means.output)
grouper = tf.keras.models.Model(
gmvaep.layers[0].input, cluster_assignment.output
)
# Use encoder and grouper to predict on validation data
encoding = encoder.predict(X_train)
groups = grouper.predict(X_train)
pdist = pairwise_distances(encoding)
for i, r in enumerate(np.linspace(0, 5, 100)):
x[a][b][i], y[a][b][i] = (
np.round(r, 7),
np.median(np.sum(pdist < r, axis=0)),
)
# Select number of average neighbors to aim for
N = 100
fig, (ax1, ax2) = plt.subplots(
1, 2, figsize=(12, 4), dpi=100, facecolor="w", edgecolor="k", sharey=True
)
plt.suptitle("Samples in latent space neighborhood for a given radius")
# Plot number of neighbors in radius versus number of clusters
for i, t in enumerate(range(2500, 15001, 2500)):
ax1.plot(x[i][2], y[i][2], label="t={}".format(t))
# Plot number of neighbors in radius versus encoding dimensions
for i, d in enumerate([2, 4, 6, 8, 10, 12]):
ax2.plot(x[5][i], y[5][i], label="enc={}".format(d))
ax1.set_xlabel("radius")
ax1.set_ylabel("samples in neighborhood")
ax1.legend()
# ax1.set_xlim(0,2)
# ax1.set_ylim(0,100)
ax1.axhline(N, linestyle="--", c="r", linewidth=0.5)
ax2.set_xlabel("radius")
ax2.set_ylabel("samples in neighborhood")
ax2.axhline(N, linestyle="--", c="r", linewidth=0.5)
ax2.legend()
plt.show()
# Fit sigmoid functions to the data in the second plot, and compute the radius that yields K neighbors in average for
# each curve
def sigmoid(x, L, x0, k, b):
y = L / (1 + np.exp(-k * (x - x0))) + b
return y
def fit_sigmoid(x, y):
p0 = [max(y), np.median(x), 1, min(y)]
popt, pcov = curve_fit(sigmoid, x, y, p0, method="dogbox")
return popt
def retrieve_x_from_sigmoid(x, y, n):
L, x0, k, b = fit_sigmoid(x, y)
x_given_k = -(np.log(L / (n - b) - 1) / k) + x0
return x_given_k
# Interpolate to get the radius that will yield n neighbors in each setting
x_given_n = np.zeros([6, 6])
_x_given_n = np.zeros([6, 6])
y_given_n = np.array([list(range(2500, 15001, 2500)), [2, 4, 6, 8, 10, 12]])
for i in range(6):
for j in range(6):
x_given_n[i][j] = retrieve_x_from_sigmoid(x[i][j], y[i][j], 100)
# Fit a line to the data to get an equation of how #neighbors varies with encoding dimensions
# The retrieved equation will be the default radius!
res1 = linregress(np.log2(y_given_n[0]), x_given_n[:, 2])
print(res1)
res2 = linregress(y_given_n[1], x_given_n[5])
print(res2)
# Compute radius for an example
def radius_given_n_and_dim(n, dim, coefs, inpt):
return coefs[0] * np.log2(n) + coefs[1] * dim + inpt
radius_given_n_and_dim(15000 * 5, 6, res3.coef_, res3.intercept_)
###Output
_____no_output_____
###Markdown
To select a good default for the radius r, we make the value depend on the variables we find relationships with, such as the number of dimensions in the latent space.
###Code
fig, (ax1, ax2) = plt.subplots(
1, 2, figsize=(12, 5), dpi=100, facecolor="w", edgecolor="k", sharey=True
)
ax1.scatter(np.log2(y_given_n[0]), x_given_n[:, 2])
ax1.plot(
np.log2(y_given_n[0]),
res1.intercept + res1.slope * np.log2(y_given_n[0]),
"r",
label="y={}*x+{}".format(np.round(res1.slope, 2), np.round(res1.intercept, 2)),
)
ax1.set_ylabel("radius to reach {} samples in neighborhood".format(N))
ax1.set_xlabel("number of encoded examples")
ax2.scatter(y_given_n[1], x_given_n[5])
ax2.plot(
y_given_n[1],
res2.intercept + res2.slope * y_given_n[1],
"r",
label="y={}*x+{}".format(np.round(res2.slope, 2), np.round(res2.intercept, 2)),
)
ax2.set_ylabel("radius to reach {} samples in neighborhood".format(N))
ax2.set_xlabel("number of dimensions")
plt.suptitle(
"Relationship between radius to reach {} average neighbors \n \
before training and neighborhood crowdedness".format(
N
)
)
ax1.legend()
ax2.legend()
plt.ylim(0)
plt.show()
# Fit a hyperplane to both features
res3 = LinearRegression()
X = np.array([list(i) for i in product(np.log2(y_given_n[0]), y_given_n[1])])
res3.fit(X, x_given_n.flatten(order="C"))
print(
"log2(samples) coef: {}\n\
dimension coef: {}".format(
*np.round(res3.coef_, 25)
)
)
print("intercept:", np.round(res3.intercept_, 25))
print()
print("r2_score:", np.round(r2_score(x_given_n.flatten(), res3.predict(X)), 5))
%matplotlib inline
# Let's represent how both variables evolve in a 3D space
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111, projection="3d")
# Get combinations of predictors
prod = np.array([list(i) for i in product(y_given_n[0], y_given_n[1])])
n, d = prod[:, 0], prod[:, 1]
ax.scatter3D(
np.log2(n),
d,
x_given_n,
c="red",
label="z={}*x + {}*y + {}".format(
*np.round(res3.coef_, 5), np.round(res3.intercept_, 5)
),
)
x1, x2 = np.meshgrid(X[:, 0], X[:, 1])
ax.plot_surface(
x1,
x2,
(res3.coef_[0] * x1 + res3.coef_[1] * x2 + res3.intercept_),
cmap=cm.coolwarm,
linewidth=1,
antialiased=True,
)
ax.set_xlabel("number of samples")
ax.set_ylabel("number of dimensions")
ax.set_zlabel("radius to reach {} samples in neighborhood".format(N))
ax.legend()
plt.show()
###Output
_____no_output_____ |
ACBstats/acb_stats.ipynb | ###Markdown
New stats format in acb.com: Playing with the new stats format in acb.com (launched in October 2019)
###Code
import pandas as pd
season = 2019
urls = [
'http://www.acb.com/estadisticas-individuales/{}/temporada_id/{}/tipo_id/0'.format(x, season)
for x in
[
'valoracion',
'puntos',
'rebotes',
'asistencias',
'robos', 'tapones',
'mas-menos',
'minutos',
'tiros3',
'tiros3-porciento',
'tiros2',
'tiros2-porciento',
'tiros1',
'tiros1-porciento',
'rebotes-defensivos',
'rebotes-ofensivos',
'faltas-recibidas',
'faltas-cometidas',
'mates'
]
]
data = pd.concat([pd.read_html(url)[0].iloc[:, 1:] for url in urls], axis=0).drop_duplicates()
data.columns = [
'name', 'games', 'minutes', 'points',
'3p_converted', '3p_attempted', '3p_percentage',
'2p_converted', '2p_attempted', '2p_percentage',
'1p_converted', '1p_attempted', '1p_percentage',
'offensive_rebounds', 'deffensive_rebounds', 'rebounds',
'assists', 'steals', 'turnovers',
'blocks', 'received_blocks',
'dunks', 'faults', 'received_faults',
'plus_minus', 'pir'
]
data = data.set_index('name')
data.describe()
###Output
_____no_output_____
###Markdown
PIR and plus-minus
###Code
data[['pir', 'plus_minus']].sum(axis=1).sort_values(ascending=False).head(18)
###Output
_____no_output_____
###Markdown
Offensive players
###Code
(
data[
['points',
'offensive_rebounds',
'assists',
'received_faults',
'3p_converted',
'2p_converted',
'1p_converted',
'plus_minus']
].sum(axis=1) -
data[
['3p_attempted',
'2p_attempted',
'1p_attempted',
'turnovers',
'received_blocks'
]
].sum(axis=1)
).sort_values(ascending=False).head(18)
###Output
_____no_output_____
###Markdown
Defensive players
###Code
(
data[
['deffensive_rebounds',
'steals',
'blocks',
'plus_minus']
].sum(axis=1) - data['faults']
).sort_values(ascending=False).head(18)
###Output
_____no_output_____
###Markdown
Team players
###Code
(data['plus_minus'] + data['minutes'] / 2 - data['pir']).sort_values(ascending=False).head(18)
###Output
_____no_output_____
###Markdown
Assists by turnover
###Code
((data['assists'] + 1) / (data['turnovers'] + 1)).sort_values(ascending=False).head(18)
###Output
_____no_output_____
###Markdown
Up in the air
###Code
(
data['dunks'] + data['blocks'] - data['received_blocks'] + data['2p_converted'] - data['2p_attempted']
).sort_values(ascending=False).head(18)
###Output
_____no_output_____
###Markdown
Greedy
###Code
(
data[['3p_attempted', '2p_attempted', 'turnovers', 'received_blocks']].sum(axis=1) -
data[['assists','plus_minus']].sum(axis=1)
).sort_values(ascending=False).head(18)
###Output
_____no_output_____ |
clase_16_EstadisticaInferencial2/2_checkpoint.ipynb | ###Markdown
--- Inferential Statistics Imports
###Code
import scipy.stats as stats
import pandas as pd
import numpy as np
import math
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Exercise: Tests on a proportion. We were hired by a lottery company to find out the proportion of customers who buy a certain product. The firm will keep its current marketing plan if this proportion is 50% or more, but will triple its advertising spend otherwise. The dataset we are going to use consists of synthetic data (built by ourselves) using the `generar` function: https://numpy.org/doc/1.18/reference/random/generated/numpy.random.Generator.binomial.html (numpy.random.Generator.binomial)
###Code
def generar(trials, p, obs):
random_generator = np.random.default_rng()
data = random_generator.binomial(trials, p, obs)
result = pd.DataFrame(data, columns= ['compra'])
return result
p_generacion = 0.4
trials = 1
obs = 100
data_ej3 = generar(trials, p_generacion, obs)
#sns.distplot(data_ej3)
sns.histplot(data_ej3, kde = True, stat = 'density', binrange=(-0.5, 1.5));
###Output
_____no_output_____ |
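###Markdown
A minimal sketch of how the test on this proportion could then be carried out with the objects defined above (assuming SciPy >= 1.7 for `stats.binomtest`):
###Code
# Sketch: one-sided exact binomial test of H0: p >= 0.5 against H1: p < 0.5 (assumes SciPy >= 1.7)
compras = int(data_ej3['compra'].sum())   # number of sampled customers that bought the product
n = len(data_ej3)
resultado = stats.binomtest(compras, n, p=0.5, alternative='less')
print(resultado.pvalue)                   # a small p-value suggests the proportion is below 50%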
content/lessons/05/Watch-Me-Code/WMC2-Say-My-Name.ipynb | ###Markdown
Watch Me Code 2: Say My Name. Same program as WMC 1, but with a for loop.
###Code
name = input("What is your name? ")
times = int(input("How many times would you like me to say your name %s? " % name))
for i in range(times):
print(name)
###Output
J. Bob Biggywiggy
J. Bob Biggywiggy
J. Bob Biggywiggy
###Markdown
Watch Me Code 2: Say My Name. Same program as WMC 1, but with a for loop.
###Code
name = input("What is your name? ")
times = int(input("How many times would you like me to say your name %s? " % name))
for i in range(times):
print(name)
###Output
What is your name? bob
How many times would you like me to say your name bob? 3
bob
bob
bob
|
notebooks/parcels/Local/salishmap.ipynb | ###Markdown
**Map SalishSea**
###Code
%matplotlib inline
import numpy as np
import xarray as xr
import os
from matplotlib import pyplot as plt, animation, rc
from mpl_toolkits.axes_grid1 import make_axes_locatable
import matplotlib.colors as mcolors
from cartopy import crs, feature
import cmocean
cmap = cmocean.cm.deep
###Output
_____no_output_____
###Markdown
Paths
###Code
# Define paths
paths = {
'NEMO': '/results2/SalishSea/nowcast-green.201905/',
'coords': '/Users/jvalenti/MOAD/SSC_masks/coordinates_seagrid_SalishSea201702.nc',
'mask': '/Users/jvalenti/MOAD/SSC_masks/mesh_mask201702.nc',
'out': '/Users/jvalenti/MOAD/analysis-jose/notebooks/results/',
}
###Output
_____no_output_____
###Markdown
Simulation
###Code
coords = xr.open_dataset(paths['coords'], decode_times=False)
mask = xr.open_dataset(paths['mask'])
# create some data to use for the plot
dt = 0.001
t = np.arange(0.0, 10.0, dt)
r = np.exp(-t[:1000]/0.05) # impulse response
x = np.random.randn(len(t))
s = np.convolve(x, r)[:len(x)]*dt # colored noise
fig = plt.figure(figsize=(9, 4),facecolor='white')
ax = fig.add_subplot(121)
# the main axes is subplot(111) by default
plt.plot(t, s)
plt.axis([0, 1, 1.1*np.amin(s), 2*np.amax(s)])
plt.xlabel('time (s)')
plt.ylabel('current (nA)')
plt.title('Subplot 1: \n Gaussian colored noise')
axins = ax.inset_axes([0.5, 0.5, 0.47, 0.47])
axins.hist(s, 400)
#plt.title('Probability')
axins.set_xticklabels('')
axins.set_yticklabels('')
plt.show()
# # Make map
# blevels = list(np.arange(0,450,15))
# fig, ax = plt.subplots(figsize=(38, 16), subplot_kw={'projection': crs.Mercator()})
# ax.set_extent([-125.5, -122, 48, 50.5], crs=crs.PlateCarree())
# ax.add_feature(feature.GSHHSFeature('low', facecolor='lightgray',edgecolor='lightgray'),zorder=2)
# ax.add_feature(feature.RIVERS, edgecolor='k',zorder=5)
# #ax.add_feature(feature.OCEAN,zorder=1)
# im=ax.contourf(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),cmap=cmap,levels=blevels)
# #plt.contour(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),colors='w',levels=blevels,linewidths=0.05)
# #plt.xticks(fontsize=14)
# #plt.yticks(fontsize=14)
# gl = ax.gridlines(
# linestyle='--', color='gray', draw_labels=True,
# xlocs=range(-125, -121), ylocs=range(47, 52),zorder=5)
# gl.top_labels, gl.right_labels = False, False
# cbar = fig.colorbar(im, location='bottom',aspect=60,shrink=0.3,pad=0.05)
# cbar.set_label('Depth [m]')
# ax.text(-0.05, 0.55, 'Latitude', va='bottom', ha='center',
# rotation='vertical', rotation_mode='anchor',
# transform=ax.transAxes, fontsize=14,weight="bold")
# ax.text(0.5, -0.05, 'Longitude', va='bottom', ha='center',
# rotation='horizontal', rotation_mode='anchor',
# transform=ax.transAxes, fontsize=14,weight="bold")
# #axins = ax.inset_axes([0.65, 0.75, 0.5, 0.5],projection=crs.PlateCarree())
# ax.set_extent([-160, -75, 65, 25], crs=crs.PlateCarree())
# ax.add_feature(feature.GSHHSFeature('intermediate', edgecolor='k', facecolor='lightgray'))
# ax.add_feature(feature.BORDERS,zorder=3)
# #plt.title('Probability')
# gl = axins.gridlines(crs=crs.PlateCarree(), draw_labels=True, xlocs=np.linspace(-150,-50,5), ylocs=np.linspace(55,35,3),
# linewidth=2, color='gray', alpha=0.5, linestyle='--')
# gl.xlabel_style = {'size': 25}
# gl.ylabel_style = {'size': 25}
# gl.bottom_labels, gl.left_labels = False, False
# plt.show()
# #plt.savefig("/Users/jvalenti/Desktop/baty.pdf")
# Make map
blevels = list(np.arange(0,450,15))
fig, ax = plt.subplots(figsize=(38, 16), subplot_kw={'projection': crs.Mercator()})
ax.set_extent([-125.5, -122, 48, 50.5], crs=crs.PlateCarree())
ax.add_feature(feature.GSHHSFeature('high', facecolor='lightgray',edgecolor='lightgray'),zorder=2)
ax.add_feature(feature.RIVERS, edgecolor='k',zorder=5)
#ax.add_feature(feature.OCEAN,zorder=1)
im=ax.contourf(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),cmap=cmap,levels=blevels)
#plt.contour(coords.nav_lon, coords.nav_lat, mask.mbathy[0,:,:]*10,zorder=1,transform=crs.PlateCarree(),colors='w',levels=blevels,linewidths=0.05)
#plt.xticks(fontsize=14)
#plt.yticks(fontsize=14)
gl = ax.gridlines(
linestyle='--', color='gray', draw_labels=True,
xlocs=range(-125, -121), ylocs=range(47, 52),zorder=5)
gl.top_labels, gl.right_labels = False, False
cbar = fig.colorbar(im, location='bottom',aspect=60,shrink=0.3,pad=0.05)
cbar.set_label('Depth [m]')
ax.text(-0.05, 0.55, 'Latitude', va='bottom', ha='center',
rotation='vertical', rotation_mode='anchor',
transform=ax.transAxes, fontsize=14,weight="bold")
ax.text(0.5, -0.05, 'Longitude', va='bottom', ha='center',
rotation='horizontal', rotation_mode='anchor',
transform=ax.transAxes, fontsize=14,weight="bold")
plt.savefig("/Users/jvalenti/Desktop/baty.pdf")
states_provinces = feature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
# Make map
fig, ax = plt.subplots(figsize=(20, 16), subplot_kw={'projection': crs.Mercator()})
ax.set_extent([-160, -75, 65, 25], crs=crs.PlateCarree())
ax.add_feature(feature.GSHHSFeature('intermediate', edgecolor='k', facecolor='lightgray'))
#ax.add_feature(feature.OCEAN,zorder=1)
ax.add_feature(feature.BORDERS,zorder=3)
gl = ax.gridlines(crs=crs.PlateCarree(), draw_labels=True, xlocs=np.linspace(-150,-50,5), ylocs=np.linspace(55,35,3),
linewidth=2, color='gray', alpha=0.5, linestyle='--')
gl.xlabel_style = {'size': 25}
gl.ylabel_style = {'size': 25}
gl.bottom_labels, gl.left_labels = False, False
plt.savefig("/Users/jvalenti/Desktop/map.pdf")
###Output
/Users/jvalenti/conda_envs/parcels/lib/python3.7/site-packages/cartopy/crs.py:825: ShapelyDeprecationWarning: __len__ for multi-part geometries is deprecated and will be removed in Shapely 2.0. Check the length of the `geoms` property instead to get the number of parts of a multi-part geometry.
if len(multi_line_string) > 1:
/Users/jvalenti/conda_envs/parcels/lib/python3.7/site-packages/cartopy/crs.py:877: ShapelyDeprecationWarning: Iteration over multi-part geometries is deprecated and will be removed in Shapely 2.0. Use the `geoms` property to access the constituent parts of a multi-part geometry.
for line in multi_line_string:
/Users/jvalenti/conda_envs/parcels/lib/python3.7/site-packages/cartopy/crs.py:944: ShapelyDeprecationWarning: __len__ for multi-part geometries is deprecated and will be removed in Shapely 2.0. Check the length of the `geoms` property instead to get the number of parts of a multi-part geometry.
if len(p_mline) > 0:
|
demo/ALENN_Demo.ipynb | ###Markdown
ALENN - Demo Notebook Quickstart Guide. Donovan Platt, Mathematical Institute, University of Oxford; Institute for New Economic Thinking at the Oxford Martin School. Copyright (c) 2020, University of Oxford. All rights reserved. Distributed under a BSD 3-Clause licence. See the accompanying LICENCE file for further details. Overview: This notebook provides, through the use of a simple illustrative example, a complete tutorial on the use of the ALENN package to perform Bayesian estimation for economic simulation models using the neural network-based approach introduced by Platt (2021) in the paper *[Bayesian Estimation of Economic Simulation Models Using Neural Networks](https://link.springer.com/article/10.1007/s10614-021-10095-9)*. In general, the workflow presented here should require minimal adjustment (changing the model function, empirical dataset, priors, and sampler settings) in order to be applied to new examples. Step 1 Importing of Packages: As a natural starting point, we begin by importing any required Python packages. With the exception of ALENN, which we assume has already been installed as per the instructions provided in the accompanying README file, all other imported libraries are now fairly standard in most data science workflows.
###Code
# Import the ALENN ABM Estimation Package
import alenn
# Import Plotting Libraries
import matplotlib.pyplot as plt
# Import Numerical Computation Libraries
import numpy as np
import pandas as pd
# Import General Mathematical Libraries
from scipy import stats
# Import Data Storage Libraries
import pickle as pkl
# Import System Libraries
import os
import logging
# Disable Tensorflow Deprecation Warnings
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# Tensorflow 2.x deprecates many Tensorflow 1.x methods, causing Tensorflow 1.15.0 to output a large number
# of (harmless) deprecation warnings when performing the first likelihood calculation. This can be very
# distracting, leading us to disable them.
###Output
_____no_output_____
###Markdown
Step 2 Creating the Likelihood and Posterior Estimator Object: The primary functionality of ALENN is implemented in the `MDNPosterior` class, which contains all the methods required to estimate the likelihood and posterior. It thus follows that the first step in the estimation pipeline is creating an `MDNPosterior` object by calling its constructor method, `alenn.mdn.MDNPosterior`. If no arguments are provided to the constructor, the default neural network architecture introduced in the paper is used. If an alternative is required, however, this can easily be specified through the use of keyword arguments. As an example, increasing the number of lags to 4 and decreasing the number of hidden layers to 2 could be achieved by calling `alenn.mdn.MDNPosterior(num_lags = 4, num_layers = 2)`. Further details can be obtained by consulting the class docstring: ```python?alenn.mdn.MDNPosterior```
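For instance, the alternative architecture mentioned above could be constructed as follows (illustration only; the default object created in the next cell is what the rest of the demo uses):
###Code
# Illustration only: an MDN posterior with 4 lags and 2 hidden layers (not used below)
posterior_alt = alenn.mdn.MDNPosterior(num_lags = 4, num_layers = 2)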
###Code
# Create an MDN Posterior Approximator Object (Uses Default Settings from the Paper)
posterior = alenn.mdn.MDNPosterior()
###Output
--------------------------------------
Successfully created a new MDN object:
--------------------------------------
Number of lags: 3
Number of mixture components: 16
Number of neurons per layer: 32
Number of hidden layers: 3
Batch size: 512
Number of epochs: 12
Activation function: relu
Input noise: 0.2
Output noise: 0.2
--------------------------------------
###Markdown
Step 3 Specifying the Candidate Model: At this stage, all we have done is defined a generic posterior estimator object. In order to actually apply the estimator to a given problem, we need to provide the object with additional information. We begin with the candidate model. From the perspective of ALENN, the model is a black box capable of producing simulated time series data. Therefore, the candidate model is provided to ALENN in the form of a function that takes in a 1-d numpy array or list of parameter values and returns a model output matrix as a 2-d numpy array. Ensuring that the model is correctly specified and matches ALENN's input, processing, and output requirements is perhaps the most critical part of this process and should therefore be approached with care. To elaborate, the model function should take, as input, a 1-d numpy array, $\mathbf{\theta}$, containing values for each of the model's free parameters (those that should be estimated). The function should then proceed to generate a corresponding set of $R$ model Monte Carlo replications. Each of these replications is a single time series of length $T_{sim}$ generated by the model for the same set of parameter values as the remaining replications, $\mathbf{\theta}$, but a different random seed, $i$. Once generated, each replication should be stored as a single column in a $T_{sim} \times R$ numpy array that is returned as the final output by the model function. It is important to note that, although the choice of seed for each replication is arbitrary, the same set of seeds must be used throughout the entire estimation experiment, i.e. the model function should always use the same set of seeds, regardless of the value of $\mathbf{\theta}$ at which the function is evaluated. Footnote 44 in the paper provides a more detailed discussion. Additionally, in most practical examples, the generation of simulated data using the candidate model is likely to be computationally expensive and thus a bottleneck in the inference process. We therefore suggest that, if the model is costly to simulate, the model function should generate the replications in parallel. Finally, as suggested by the model function output structure introduced above, this version of ALENN currently only supports univariate time series model outputs. Note, however, that the methodology itself is generally applicable to multivariate outputs and a multivariate extension to this library is likely to be released in the near future.
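As a purely illustrative sketch of this interface (the `toy_model` name and its AR(1) dynamics are hypothetical and are not an ALENN model), a conforming model function simply maps a parameter vector to a $T_{sim} \times R$ array of replications generated with a fixed set of seeds:
###Code
# Hypothetical example of the required model-function interface (not an ALENN model)
import numpy as np

def toy_model(theta, T_sim = 1000, R = 100):
    output = np.zeros((T_sim, R))
    for i in range(R):                                 # seed i is fixed across all calls
        rng = np.random.default_rng(i)
        shocks = rng.normal(0.0, theta[1], T_sim)
        for t in range(1, T_sim):
            output[t, i] = theta[0] * output[t - 1, i] + shocks[t]
    return output                                      # shape (T_sim, R): one column per replication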
###Code
# Specify the Simulated Data Characteristics
T_sim = 1000 # Length of each Monte Carlo replication
R = 100 # Number of Monte Carlo replications
seed_set = 7 # The set of seeds associated with the model replications
# In most cases, we suggest that either (T_sim = 1000 and R = 100) or (T_sim = 2000 and R = 50) be considered.
# The seed_set variable can be interpreted as defining an arbitrary set of 100 random seeds.
# Define the Candidate Model Function
def model(theta):
return np.diff(alenn.models.random_walk(700, 0.4, 0.5, theta[0], theta[1], T_sim, R, seed_set), axis = 0)
# Add the Model Function to the MDNPosterior Object
posterior.set_model(model)
# In the above, we have selected the random walk examined in the paper's comparative experiments. This model,
# along with the other models considered in the paper, are implemented as part of ALENN and can be accessed via
# alenn.models as above (see the corresponding file for more details).
#
# In this case, we are attempting to estimate the pre- and post-break volatility and have fixed all other parameters
# to their default values. Notice that we also consider the series of first differences to induce stationarity.
# While stationarity is not an assumption of the methodology, it may be advantageous to consider stationarity
# transformations if a given non-stationary model proves to be difficult to estimate.
###Output
Model function successfully set.
----------------------------------------------------------------------------
###Markdown
Step 4 Specifying the Model Priors: As in any Bayesian exercise, we must specify a prior over the model parameters. In ALENN, the prior is specified in the form of a special data structure. A prior function must be defined separately for each free parameter and each function of this type should take in a single value for that parameter and return a corresponding prior density value. These functions should be stored in a Python list. In all cases, the order of the density functions in the prior list must correspond to the order in which the parameters are passed to the model function. More concretely, if the model function takes in values for parameters $[\sigma_1, \sigma_2]$, the prior list must have form $[p(\sigma_1), p(\sigma_2)]$.
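For instance, any callables that return a density value can be combined in the list; a hypothetical variant mixing a uniform and a normal prior (not used in this demo) would look like:
###Code
# Illustration only: an alternative prior list mixing different SciPy densities
example_priors = [stats.uniform(loc = 0, scale = 10).pdf,
                  stats.norm(loc = 1, scale = 0.5).pdf]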
###Code
# Define Parameter Priors
priors = [stats.uniform(loc = 0, scale = 10).pdf,
stats.uniform(loc = 0, scale = 10).pdf]
# Add the Model Priors to the MDNPosterior Object
posterior.set_prior(priors)
# In the above, we have defined uniform priors over [0, 10] for both the pre- and post-break volatility. In most
# applications, we recommend that users make use of SciPy's stats module to define the priors, as we have. This
# results in greater readability and can help avoid errors in the prior specification.
###Output
Model prior successfully set. The model has 2 free parameters.
----------------------------------------------------------------------------
###Markdown
Step 5 Loading the Empirical Data: To complete the problem specification, we are finally required to provide the `MDNPosterior` object with a set of empirical data. This process is rather straightforward and simply requires that the data be provided in the form of a 1-d numpy array. While longer empirical time series are always preferred if available, we typically consider $T_{emp} = 1000$ for problems involving $1-4$ free parameters and $T_{emp} = 2000$ for problems involving $5-10$ free parameters. In many cases, however, we suspect that a significant reduction in the number of data points would be viable, particularly when the data provides a reasonable level of information regarding the model parameters.
###Code
# Load the Empirical Data
with open('data/Demo_Data', 'rb') as f:
empirical = pkl.load(f)
# Add the Empirical Data to the MDNPosterior Object
posterior.load_data(empirical)
# The empirical data loaded above is a synthetic series of 999 (first-differenced) observations generated by the
# random walk model when initialised using the parameter values associated with the first free parameter set
# introduced in the paper's comparative exercises. Our exercise here can thus be seen as a replication of the
# associated comparative experiment.
#
# In a true empirical application, this series would simply be replaced by a series measured from the actual
# real-world system being modelled.
###Output
Empirical data successfully loaded. There are 999 observations in total.
----------------------------------------------------------------------------
###Markdown
Step 6 Sampling the Posterior. With the `MDNPosterior` object now completely specified, we are able to evaluate the posterior for arbitrary values of $\mathbf{\theta}$ and hence sample it using MCMC. As discussed in detail in Appendix 2, we make use of the adaptive Metropolis-Hastings algorithm proposed by Griffin and Walker (2013). As in the case of the posterior, the sampler is also implemented as an object, in this case being an instantiation of the `AdaptiveMCMC` class. In order to perform the sampling procedure, a number of key components must be specified and passed to the object. These include:* Parameter ranges over which to conduct the initial sweep of the parameter space. This is specified in the form of two 1-d numpy arrays that contain, in the same order as is associated with the list of priors discussed in Step 4, the lower and upper bounds for each parameter respectively.* The desired number of samples per sample set. In general, we recommend that this is set to $K = 70$.* The desired number of sample sets to be generated. As a rule of thumb, we suggest generating $S = 5000$ sets for problems involving $1 - 4$ free parameters and $15000$ sets for problems involving $5 - 10$ free parameters. Of course, common convergence diagnostics, such as Gelman and Rubin's R, could certainly be used to ensure that a sufficient number of samples has been generated.
###Code
# Create an Adaptive MCMC Sampler Object
sampler = alenn.mcmc.AdaptiveMCMC(K = 70, S = 5000)
# Define the Parameter Bounds
theta_lower = np.array([0, 0])
theta_upper = np.array([10, 10])
# Add the Posterior Approximator and Parameter Ranges to the Newly-created Object
sampler.set_posterior(posterior)
sampler.set_initialisation_ranges(theta_lower, theta_upper)
# Please note that the set_posterior method must be called before the set_initialisation_ranges method.
# Initiate the Sampling Process
sampler.sample_posterior()
###Output
-----------------------------------------------
Successfully created a new MCMC sampler object:
-----------------------------------------------
Number of sample sets: 5000
Number of samples per set: 70
-----------------------------------------------
MDNPosterior object successfully loaded.
----------------------------------------------------------------------------
Initialisation ranges successfully set.
Lower Bound Upper Bound
Parameter
1 0 10
2 0 10
----------------------------------------------------------------------------
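###Markdown
(Added note, not part of the original tutorial.) The convergence diagnostic mentioned in Step 6, Gelman and Rubin's R, can be computed from several independently generated chains. A minimal sketch for a single parameter is given below; it assumes `chains` is a list of equal-length 1-d numpy arrays, e.g. the samples of one parameter taken from several repetitions of the sampling step.
###Code
# Minimal Gelman-Rubin R-hat sketch (added; not part of the ALENN package itself).
def gelman_rubin(chains):
    chains = np.asarray(chains)                 # shape (M, N): M chains of N samples
    M, N = chains.shape
    B = N * chains.mean(axis=1).var(ddof=1)     # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()       # within-chain variance
    var_plus = (N - 1) / N * W + B / N          # pooled variance estimate
    return np.sqrt(var_plus / W)                # values close to 1 suggest convergence
###Output
_____no_output_____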
###Markdown
Step 7 Processing the Obtained Samples. Once the sampling procedure has concluded, all that remains is the processing of the obtained samples into meaningful outputs, i.e. tables or marginal posterior plots. The aforementioned samples may be extracted from the `AdaptiveMCMC` object using the `process_samples` method, which requires the specification of a single integer argument, `burn_in`. This argument specifies the number of sample sets that should be discarded as part of an initial burning-in period, as is standard in all MCMC algorithms, and we typically recommend burning-in periods of $1500-2500$ sample sets for $S = 5000$ and $7500-10000$ sample sets for $S = 15000$. Of course, some problems may require alternative configurations depending on their associated convergence rates and we therefore recommend that multiple chains be generated by repeating Step 6 several times in order to diagnose convergence when applying the methodology. The `process_samples` method returns the obtained samples in the form of a 2-d numpy array, where each column represents the posterior samples obtained for a given parameter, with the columns following the same parameter order as the original model function. The method output also contains a final, extra column consisting of the associated log-likelihood samples.
###Code
# Result Table
# Note that we illustrate the construction of a result table for a single chain, whereas the corresponding result
# in Section 4.1 is associated with 5 chains.
# Process the Sampler Output
samples = sampler.process_samples(burn_in = 2500)
# Calculate the Posterior Mean
pos_mean = samples[:, :posterior.num_param].mean(axis = 0)
# Calculate the Posterior Standard Deviation
pos_std = samples[:, :posterior.num_param].std(axis = 0)
# Construct a Result Table
result_table = pd.DataFrame(np.array([pos_mean, pos_std]).transpose(), columns = ['Posterior Mean', 'Posterior Std. Dev.'])
result_table.index.name = 'Parameter'
result_table.index += 1
# Display the Result Table
print('Final Estimation Results:')
print('')
print(result_table)
# Marginal Posterior Plots
# Note that we illustrate the construction of marginal posterior plots for a single chain, whereas the corresponding
# result in Section 4.1 is associated with 5 chains.
# Process the Sampler Output
samples = sampler.process_samples(burn_in = 2500)
# Set the Parameter Names
param_names = [r'$\sigma_1$', r'$\sigma_2$']
# Set-Up the Figure
fig = plt.figure(figsize = (5 * posterior.num_param, 5))
# Loop Over the Free Parameters
for i in range(posterior.num_param):
# Plot the Posterior Histogram
plt.subplot(1, posterior.num_param, i + 1)
plt.hist(samples[:, i], 25, density = True, color = 'b', alpha = 0.5)
# Plot the Prior Density
prior_range = np.linspace(samples[:, i].min() * 0.9, samples[:, i].max() * 1.1, 100)
plt.plot(prior_range, [priors[i](x) for x in prior_range], color = 'r', alpha = 0.75)
# Note that we are only plotting the prior for a limited range such that it extends only slightly
# beyond the posterior. This is done to improve the clarity of presentation. In reality, the prior is
# substantially wider than the posterior and would extend from 0 to 10 for this example.
# Plot the Posterior Mean
plt.axvline(x = samples[:, i].mean(), c = 'k', linestyle = 'dashed', alpha = 0.75)
# Label the Plot
plt.xlabel(param_names[i])
plt.ylabel(r'$p($' + param_names[i] + r'$)$')
plt.legend(['Prior Density', 'Posterior Mean', 'Posterior Density'], fontsize = 8)
# Set the Figure Layout
plt.tight_layout()
# Display the Figure
plt.show()
###Output
_____no_output_____ |
03.PyTorch基础(入门)/autograd.ipynb | ###Markdown
Automatic differentiation. In this lesson we will get to know PyTorch's automatic differentiation (autograd) mechanism. Autograd is a very important feature of PyTorch: it lets us avoid computing very complicated derivatives by hand, which greatly reduces the time needed to build models, and it is something that PyTorch's predecessor, the Torch framework, did not have. Below we use a few examples to see what makes PyTorch's autograd special and to explore more of its uses.
###Code
import torch
from torch.autograd import Variable
###Output
_____no_output_____
###Markdown
Autograd in simple cases. Below we show automatic differentiation in a few simple cases, where "simple" means that the result of the computation is a scalar, that is, a single number, and we run automatic differentiation starting from this scalar.
###Code
x = Variable(torch.Tensor([2]), requires_grad=True)
y = x + 2
z = y ** 2 + 3
print(z)
###Output
Variable containing:
19
[torch.FloatTensor of size 1]
###Markdown
Through the series of operations above we obtain the final result z from x, which can be written as the formula $$z = (x + 2)^2 + 3$$ The derivative of z with respect to x is then $$\frac{\partial z}{\partial x} = 2 (x + 2) = 2 (2 + 2) = 8$$ If you are not familiar with derivatives, you can [review them here](https://baike.baidu.com/item/%E5%AF%BC%E6%95%B01)
###Code
# Use automatic differentiation
z.backward()
print(x.grad)
###Output
Variable containing:
8
[torch.FloatTensor of size 1]
###Markdown
For a simple example like the one above we have verified automatic differentiation, and we can also see that using it is very convenient. For a more complicated example, taking derivatives by hand becomes very tedious, so the autograd mechanism saves us from troublesome mathematical calculations. Let's look at a more complex example below.
###Code
x = Variable(torch.randn(10, 20), requires_grad=True)
y = Variable(torch.randn(10, 5), requires_grad=True)
w = Variable(torch.randn(20, 5), requires_grad=True)
out = torch.mean(y - torch.matmul(x, w)) # torch.matmul performs matrix multiplication
out.backward()
###Output
_____no_output_____
###Markdown
If you are not familiar with matrix multiplication, you can [review it here](https://baike.baidu.com/item/%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95/5446029?fr=aladdin)
###Code
# Get the gradient of x
print(x.grad)
# Get the gradient of y
print(y.grad)
# Get the gradient of w
print(w.grad)
###Output
Variable containing:
0.1342 0.1342 0.1342 0.1342 0.1342
0.0507 0.0507 0.0507 0.0507 0.0507
0.0328 0.0328 0.0328 0.0328 0.0328
-0.0086 -0.0086 -0.0086 -0.0086 -0.0086
0.0734 0.0734 0.0734 0.0734 0.0734
-0.0042 -0.0042 -0.0042 -0.0042 -0.0042
0.0078 0.0078 0.0078 0.0078 0.0078
-0.0769 -0.0769 -0.0769 -0.0769 -0.0769
0.0672 0.0672 0.0672 0.0672 0.0672
0.1614 0.1614 0.1614 0.1614 0.1614
-0.0042 -0.0042 -0.0042 -0.0042 -0.0042
-0.0970 -0.0970 -0.0970 -0.0970 -0.0970
-0.0364 -0.0364 -0.0364 -0.0364 -0.0364
-0.0419 -0.0419 -0.0419 -0.0419 -0.0419
0.0134 0.0134 0.0134 0.0134 0.0134
-0.0251 -0.0251 -0.0251 -0.0251 -0.0251
0.0586 0.0586 0.0586 0.0586 0.0586
-0.0050 -0.0050 -0.0050 -0.0050 -0.0050
0.1125 0.1125 0.1125 0.1125 0.1125
-0.0096 -0.0096 -0.0096 -0.0096 -0.0096
[torch.FloatTensor of size 20x5]
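###Markdown
(Added note.) A quick hand check of one of these results: since out = mean(y - torch.matmul(x, w)) averages over all $10 \times 5$ entries, $\partial \mathrm{out} / \partial w_{bj} = -\frac{1}{50} \sum_a x_{ab}$, so every column of w.grad should equal -x.sum(0) / 50.
###Code
# Hand check (added): every column of w.grad equals -x.sum(0) / 50.
print((-x.sum(0) / 50).data[:3])
print(w.grad.data[:3, 0])
###Output
_____no_output_____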
###Markdown
The formula here is more complicated: after the matrix multiplication the result is combined element-wise with y, and then all the elements are averaged. Interested readers can try computing the gradients by hand (a quick check of w's gradient is sketched above). With PyTorch's autograd we obtain the derivatives with respect to x, y and w very easily. Deep learning is full of large matrix operations, so there is no way we could compute these derivatives by hand; with autograd, updating the network becomes very convenient. Autograd in more complex cases. Above we demonstrated automatic differentiation in simple cases, always differentiating a scalar. You may wonder how to run automatic differentiation for a vector or a matrix; interested readers can try it themselves first, and below we introduce the autograd mechanism for multi-dimensional arrays.
###Code
m = Variable(torch.FloatTensor([[2, 3]]), requires_grad=True) # Build a 1 x 2 matrix
n = Variable(torch.zeros(1, 2)) # Build a zero matrix of the same size
print(m)
print(n)
# Compute the new values of n from the values in m
n[0, 0] = m[0, 0] ** 2
n[0, 1] = m[0, 1] ** 3
print(n)
###Output
Variable containing:
4 27
[torch.FloatTensor of size 1x2]
###Markdown
Writing the expressions above as formulas, we get $$n = (n_0,\ n_1) = (m_0^2,\ m_1^3) = (2^2,\ 3^3) $$ Next we backpropagate directly from n, i.e. we take the derivative of n with respect to m. At this point we need to be clear about how this derivative is defined, that is, how to define $$\frac{\partial n}{\partial m} = \frac{\partial (n_0,\ n_1)}{\partial (m_0,\ m_1)}$$ In PyTorch, to call automatic differentiation here we need to pass an argument into `backward()` whose shape is the same as that of n, say $(w_0,\ w_1)$; the result of automatic differentiation is then: $$\frac{\partial n}{\partial m_0} = w_0 \frac{\partial n_0}{\partial m_0} + w_1 \frac{\partial n_1}{\partial m_0}$$$$\frac{\partial n}{\partial m_1} = w_0 \frac{\partial n_0}{\partial m_1} + w_1 \frac{\partial n_1}{\partial m_1}$$
###Code
n.backward(torch.ones_like(n)) # Take (w0, w1) to be (1, 1)
print(m.grad)
###Output
Variable containing:
4 27
[torch.FloatTensor of size 1x2]
###Markdown
Through automatic differentiation we obtained gradients of 4 and 27, which we can check by hand: $$\frac{\partial n}{\partial m_0} = w_0 \frac{\partial n_0}{\partial m_0} + w_1 \frac{\partial n_1}{\partial m_0} = 2 m_0 + 0 = 2 \times 2 = 4$$$$\frac{\partial n}{\partial m_1} = w_0 \frac{\partial n_0}{\partial m_1} + w_1 \frac{\partial n_1}{\partial m_1} = 0 + 3 m_1^2 = 3 \times 3^2 = 27$$ The hand check gives the same results. Running backward more than once. Calling backward lets us run automatic differentiation once; if we call backward again, the program raises an error and refuses to do it a second time. This is because, by default, PyTorch discards the computation graph after one round of automatic differentiation, so running it twice requires setting an option manually, as the small example below shows.
###Code
x = Variable(torch.FloatTensor([3]), requires_grad=True)
y = x * 2 + x ** 2 + 3
print(y)
y.backward(retain_graph=True) # Set retain_graph to True to keep the computation graph
print(x.grad)
y.backward() # Run automatic differentiation again; this time the graph is not kept
print(x.grad)
###Output
Variable containing:
16
[torch.FloatTensor of size 1]
###Markdown
We can see that the gradient of x has become 16: because automatic differentiation was run twice here, the first gradient of 8 and the second gradient of 8 were added together, giving 16. **Small exercise** Define $$x = \left[\begin{matrix}x_0 \\x_1\end{matrix}\right] = \left[\begin{matrix}2 \\3\end{matrix}\right]$$$$k = (k_0,\ k_1) = (x_0^2 + 3 x_1,\ 2 x_0 + x_1^2)$$ We want to obtain $$j = \left[\begin{matrix}\frac{\partial k_0}{\partial x_0} & \frac{\partial k_0}{\partial x_1} \\\frac{\partial k_1}{\partial x_0} & \frac{\partial k_1}{\partial x_1}\end{matrix}\right]$$ Reference answer: $$\left[\begin{matrix}4 & 3 \\2 & 6 \\\end{matrix}\right]$$
###Code
x = Variable(torch.FloatTensor([2, 3]), requires_grad=True)
k = Variable(torch.zeros(2))
k[0] = x[0] ** 2 + 3 * x[1]
k[1] = x[1] ** 2 + 2 * x[0]
print(k)
j = torch.zeros(2, 2)
k.backward(torch.FloatTensor([1, 0]), retain_graph=True)
j[0] = x.grad.data
x.grad.data.zero_() # Zero out the previously computed gradient
k.backward(torch.FloatTensor([0, 1]))
j[1] = x.grad.data
print(j)
###Output
4 3
2 6
[torch.FloatTensor of size 2x2]
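###Markdown
(Added note.) In newer PyTorch versions (roughly 1.5 and later, where `Variable` is no longer needed), the same Jacobian can be obtained in one call with `torch.autograd.functional.jacobian`. A small sketch, assuming such a version is installed:
###Code
# Sketch for newer PyTorch (added): compute the full Jacobian of k(x) in one call.
from torch.autograd.functional import jacobian

def k_func(x):
    return torch.stack([x[0] ** 2 + 3 * x[1], 2 * x[0] + x[1] ** 2])

print(jacobian(k_func, torch.tensor([2.0, 3.0])))  # expected: [[4, 3], [2, 6]]
###Output
_____no_output_____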
|
class09_similarity_R_template.ipynb | ###Markdown
CS446/546 - Class Session 13 - Similarity and Hierarchical ClusteringIn this class session we are going to hierarchically cluster (based on Sorensen-Dice similarity) vertices in a directed graph from a landmark paper on human gene regulation (Neph et al., Cell, volume 150, pages 1274-1286, 2012; see PDF on Canvas) Let's start by having a look at the Neph et al. data file, `neph_gene_network.txt`. It is in edge-list format, with no header and no "interaction" column. Just two columns, first column contains the "regulator gene" and the second column contains the "target gene": head neph_gene_network.txt AHR BCL6 AHR BHLHE41 AHR BPTF AHR CEBPA AHR CNOT3 AHR CREB1 Now let's load the packages that we will need for this exercise
###Code
suppressPackageStartupMessages(library(igraph))
###Output
_____no_output_____
###Markdown
Using `read.table`, read the file `shared/neph_gene_network.txt`; name the two columns of the resulting data frame, `regulator` and `target`. Since there is no header, we will use `header=FALSE`:
###Code
edge_list_neph <- read.table("shared/neph_gene_network.txt",
header=FALSE,
sep="\t",
stringsAsFactors=FALSE,
col.names=c("regulator","target"))
###Output
_____no_output_____
###Markdown
Load the edge-list data into a Graph object in igraph, using `graph_from_data_frame`. Make the graph undirected
###Code
neph_graph <- graph_from_data_frame(edge_list_neph, directed=FALSE)
summary(neph_graph)
###Output
IGRAPH c6d5c31 UN-- 538 47945 --
+ attr: name (v/c)
###Markdown
Get the adjacency matrix for the graph, using `get.adjacency`, and assign to matrix `g_matrix`
###Code
g_matrix <- get.adjacency(neph_graph)
###Output
_____no_output_____ |
6_analyze.ipynb | ###Markdown
This notebook would be used to visualize the various audio features obtained
###Code
import os
import pickle
import soundfile as sf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.style as ms
from tqdm import tqdm
import librosa
import math
import random
import pandas as pd
from IPython.display import Audio
import librosa.display
ms.use('seaborn-muted')
%matplotlib inline
audio_vectors = pickle.load(open('data/pre-processed/audio_vectors_1.pkl', 'rb'))
y1 = audio_vectors['Ses01F_script01_2_F011'] # Angry
y2 = audio_vectors['Ses01F_script02_2_F036'] # Sad
min_len = min(len(y1), len(y2))
y1, y2 = y1[:min_len], y2[:min_len]
sr = 44100
Audio(y1, rate=sr)
plt.figure(figsize=(15,2))
librosa.display.waveplot(y1, sr=sr, max_sr=1000, alpha=0.25, color='r')
librosa.display.waveplot(y2, sr=sr, max_sr=1000, alpha=0.25, color='b')
rmse1 = librosa.feature.rmse(y1 + 0.0001)[0]
rmse2 = librosa.feature.rmse(y2 + 0.0001)[0]
# plt.figure(figsize=(15,2))
plt.plot(rmse1, color='r')
plt.plot(rmse2, color='b')
plt.ylabel('RMSE')
silence1 = 0
for e in rmse1:
if e <= 0.3 * np.mean(rmse1):
silence1 += 1
silence2 = 0
for e in rmse2:
if e <= 0.3 * np.mean(rmse2):
silence2 += 1
print(silence1/float(len(rmse1)), silence2/float(len(rmse2)))
y1_harmonic = librosa.effects.hpss(y1)[0]
y2_harmonic = librosa.effects.hpss(y2)[0]
# plt.figure(figsize=(5,2))
plt.plot(y1_harmonic, color='r')
plt.plot(y2_harmonic, color='b')
plt.ylabel('Harmonics')
autocorr1 = librosa.core.autocorrelate(y1)
autocorr2 = librosa.core.autocorrelate(y2)
plt.figure(figsize=(15,2))
plt.plot(autocorr2, color='b')
plt.ylabel('Autocorrelations')
cl = 0.45 * np.mean(abs(y2))
center_clipped = []
for s in y2:
if s >= cl:
center_clipped.append(s - cl)
elif s <= -cl:
center_clipped.append(s + cl)
elif np.abs(s) < cl:
center_clipped.append(0)
new_autocorr = librosa.core.autocorrelate(np.array(center_clipped))
plt.figure(figsize=(15,2))
plt.plot(autocorr2, color='yellow')
plt.plot(new_autocorr, color='pink')
plt.ylabel('Center-clipped Autocorrelation')
###Output
_____no_output_____
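###Markdown
(Added sketch, not part of the original notebook.) The center-clipped autocorrelation above is typically used for pitch tracking; a rough fundamental-frequency estimate can be read off from the location of its largest peak within a plausible pitch range.
###Code
# Rough F0 estimate from the center-clipped autocorrelation (added sketch).
# Assumes a speech-like pitch range of roughly 50-400 Hz.
min_lag = int(sr / 400)  # smallest lag considered (highest F0)
max_lag = int(sr / 50)   # largest lag considered (lowest F0)
peak_lag = min_lag + np.argmax(new_autocorr[min_lag:max_lag])
print('Estimated F0: {:.1f} Hz'.format(sr / peak_lag))
###Output
_____no_output_____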
Covid_19_VGG_16_Model.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
import pandas as pd
import numpy as np
import os
import tensorflow as tf
import keras
import matplotlib.pyplot as plt
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
image_size = [224,224]
data_path = 'Data'
vgg = VGG16(input_shape= image_size+[3],weights='imagenet',include_top=False)
vgg.output
# Attach a new classification head on top of the VGG16 convolutional base
x = vgg.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dense(1024, activation='relu')(x)
x = Dense(512, activation='relu')(x)
preds = Dense(2,activation='softmax')(x)  # two output classes (covid / normal)
model = Model(inputs = vgg.input,outputs=preds)
model.summary()
# Freeze the pre-trained VGG16 layers so that only the new head is trained
for layer in vgg.layers:
    layer.trainable = False
train_datagen=ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator=train_datagen.flow_from_directory('drive/My Drive/covid-data/' ,
target_size=(224,224),
color_mode='rgb' ,
batch_size=32,
class_mode='categorical' ,
shuffle = True)
model.compile(optimizer='Adam',
loss='categorical_crossentropy' ,
metrics=['accuracy'])
step_size_train=train_generator.n//train_generator.batch_size
r = model.fit_generator(generator=train_generator,
steps_per_epoch=step_size_train,
epochs=5)
plt.plot(r.history['loss'], label='train loss')
plt.legend()
plt.show()
plt.savefig('LossVal_loss')
plt.plot(r.history['accuracy'])
plt.title('model accuracy')
plt.ylabel('acc')
plt.xlabel('epoch')
plt.show()
from tensorflow.keras.models import load_model
model.save('covid.h5')
import tensorflow as tf
from tensorflow.keras.preprocessing import image
import numpy as np
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import load_model
model = load_model('covid.h5')
img_path = 'drive/My Drive/covid-data/Test/Normal/2.jpeg'
img = image.load_img(img_path,target_size=(224,224))
x= image.img_to_array(img)
x = np.expand_dims(x,axis=0)
img_data = preprocess_input(x)
rslt = model.predict(img_data)
print(rslt)
if rslt[0][0] == 1:
prediction = 'Not a covid patient'
else:
prediction = 'Covid patient'
print(prediction)
###Output
_____no_output_____ |
tutorial/source/jit.ipynb | ###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.1.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
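###Markdown
(Added note, not part of the original tutorial.) The first way of using the jit mentioned in the introduction, calling a compiled function from inside a model, might look roughly like the sketch below: the helper contains only tensor operations and no Pyro primitives. This is an illustrative assumption, not code from the tutorial.
###Code
# Sketch (added): a torch.jit.script-compiled helper used inside a Pyro model.
@torch.jit.script
def standardize(x: torch.Tensor) -> torch.Tensor:
    # Pure tensor math only; no pyro.sample or pyro.param calls in here.
    return (x - x.mean()) / (x.std() + 1e-6)

def model_with_helper(data):
    loc = pyro.sample("loc", dist.Normal(0., 10.))
    scale = pyro.sample("scale", dist.LogNormal(0., 3.))
    with pyro.plate("data", data.size(0)):
        pyro.sample("obs", dist.Normal(loc, scale), obs=standardize(data))
###Output
_____no_output_____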
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
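###Markdown
(Added note.) The summary at the top also mentions `pyro.util.ignore_jit_warnings()` for silencing jit warnings around code you know to be safe; a minimal sketch of its use as a context manager follows.
###Code
# Sketch (added): silence jit warnings around a known-safe shape assertion.
with pyro.util.ignore_jit_warnings():
    assert sequences[0].dim() == 1
###Output
_____no_output_____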
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.3.3')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/contrib.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.3.4')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/contrib.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.6.0')
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
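The first two summary bullets are easy to see in a small sketch (not from the original tutorial; the `standardize` helper and its body are hypothetical): a `torch.jit.script`-compiled function that does plain tensor math can be called from inside a Pyro model, as long as all `pyro.sample`/`pyro.plate` calls stay outside the compiled code.

```python
import torch
import pyro
import pyro.distributions as dist

# Hypothetical helper: plain tensor math only, no Pyro primitives,
# so it can be compiled with torch.jit.script.
@torch.jit.script
def standardize(x: torch.Tensor) -> torch.Tensor:
    return (x - x.mean()) / (x.std() + 1e-6)

def model(data):
    loc = pyro.sample("loc", dist.Normal(0., 1.))
    # The compiled function is called like any other Python function;
    # the Pyro primitives below remain outside of it.
    obs = standardize(data)
    with pyro.plate("data", obs.size(0)):
        pyro.sample("obs", dist.Normal(loc, 1.), obs=obs)
```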
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.8.1')
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however, to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
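For the warning-related bullets, here is a minimal sketch (not from the original tutorial; the model body is illustrative) of silencing JIT warnings only around a known-safe block with `pyro.util.ignore_jit_warnings()`, instead of passing `ignore_jit_warnings=True` to the whole kernel:

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.util import ignore_jit_warnings

def model(data):
    # Tensor constants created inside a traced model are a typical source of
    # JIT tracer warnings; wrapping only this safe block keeps other warnings visible.
    with ignore_jit_warnings():
        scale = torch.tensor(2.0)
    loc = pyro.sample("loc", dist.Normal(0., 10.))
    with pyro.plate("data", data.size(0)):
        pyro.sample("obs", dist.Normal(loc, scale), obs=data)
```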
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.3.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however, to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.5.1')
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however, to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.3.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/contrib.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however, to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.7.0')
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however, to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.3.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/contrib.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however, to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.5.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next, to run with jit-compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call `guide(data)` once to initialize, then run the compiled SVI:
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
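To see what this split means at call time, here is a small sketch (assuming an `svi` built with `JitTraceEnum_ELBO` exactly as in the cells below): new tensor values reuse a cached graph, while each new value of a non-tensor kwarg such as `length` triggers a separate compilation.

```python
# Sketch only: assumes `svi` is the jitted SVI object constructed below,
# and that there are 75 sequences in total as in this section's fake data.
svi.step(torch.randn(24), num_sequences=75, length=24)  # first call at length=24: compiles
svi.step(torch.randn(24), num_sequences=75, length=24)  # same kwargs: reuses the cached graph
svi.step(torch.randn(48), num_sequences=75, length=48)  # new `length` value: compiles again
```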
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
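As a minimal sketch of the `ignore_jit_warnings` bullet above (not a cell from this tutorial), the context manager can be scoped to just the statements you know are safe, for example a `.size()` call that the tracer would otherwise warn about treating as a constant:

```python
import pyro
import pyro.distributions as dist
from pyro.util import ignore_jit_warnings

def model(data):
    loc = pyro.sample("loc", dist.Normal(0., 10.))
    with ignore_jit_warnings():
        # .size(0) is baked in as a constant during tracing; we assert that is intended here.
        n = data.size(0)
    with pyro.plate("data", n):
        pyro.sample("obs", dist.Normal(loc, 1.), obs=data)
```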
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.contrib.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.3.1')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/contrib.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.3.1')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
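The other jitted ELBOs listed above drop in exactly the same way as `JitTrace_ELBO`; for example, a `JitTraceGraph_ELBO` run of the simple model below would look like this sketch (only the ELBO class and its import differ from the `JitTrace_ELBO` cell; the `pyro.infer` import path is assumed from the docs linked above).

```python
from pyro.infer import SVI, JitTraceGraph_ELBO
from pyro.optim import Adam

pyro.clear_param_store()
guide(data)  # do any lazy initialization before compiling, as with JitTrace_ELBO
elbo = JitTraceGraph_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1000):
    svi.step(data)
```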
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.5.2')
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
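The summary notes that `HMC` takes the same `jit_compile=True` kwarg as `NUTS`; below is a sketch of the analogous kernel for the simple model defined later in this section (the step size and number of steps are illustrative values, not taken from this tutorial).

```python
from pyro.infer.mcmc import MCMC, HMC

# Sketch only: assumes `model` and `data` are defined as in the cells below.
hmc_kernel = HMC(model, step_size=0.1, num_steps=4,
                 jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(hmc_kernel, num_samples=100).run(data)
```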
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.0.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.4.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.4.0')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model. Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn (NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However, `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.4.1')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
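The `pyro.util.ignore_jit_warnings()` bullet is not exercised later in this tutorial, so here is a hypothetical sketch of how it might be used inside a model; the `checked_model` name and the shape assertion are invented for illustration.
```python
from pyro.util import ignore_jit_warnings

# Hypothetical sketch: wrap a block that is known to be safe (here, a shape
# check mixing Python ints with tensor shapes) so the jit does not warn on it.
def checked_model(sequence, num_sequences, length):
    with ignore_jit_warnings():
        assert sequence.size(0) == length  # harmless shape check
    # ... the rest of the model (pyro.sample statements, etc.) would follow here
```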
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.3.1')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
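Because each distinct value of the `**kwargs` triggers its own compilation, it can be worth checking how many distinct structural values a dataset contains before training. The sketch below is not part of the tutorial and assumes the `sequences` list that is built further down.
```python
from collections import Counter

# Sketch: count the distinct `length` values, i.e. how many separate jit
# compilations the loop over `sequences` will trigger.
length_counts = Counter(len(seq) for seq in sequences)
print(length_counts)  # e.g. Counter({24: 50, 48: 20, 72: 5}) -> 3 compilations
```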
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.2.1')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('0.5.1')
pyro.enable_validation(True) # <---- This is always a good idea!
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/uber/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
###Markdown
Using the PyTorch JIT Compiler with PyroThis tutorial shows how to use the PyTorch [jit compiler](https://pytorch.org/docs/master/jit.html) in Pyro models. Summary:- You can use compiled functions in Pyro models.- You cannot use pyro primitives inside compiled functions.- If your model has static structure, you can use a `Jit*` version of an `ELBO` algorithm, e.g. ```diff - Trace_ELBO() + JitTrace_ELBO() ```- The [HMC](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC) and [NUTS](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS) classes accept `jit_compile=True` kwarg.- Models should input all tensors as `*args` and all non-tensors as `**kwargs`.- Each different value of `**kwargs` triggers a separate compilation.- Use `**kwargs` to specify all variation in structure (e.g. time series length).- To ignore jit warnings in safe code blocks, use `with pyro.util.ignore_jit_warnings():`.- To ignore all jit warnings in `HMC` or `NUTS`, pass `ignore_jit_warnings=True`. Table of contents- [Introduction](Introduction)- [A simple model](A-simple-model)- [Varying structure](Varying-structure)
###Code
import os
import torch
import pyro
import pyro.distributions as dist
from torch.distributions import constraints
from pyro import poutine
from pyro.distributions.util import broadcast_shape
from pyro.infer import Trace_ELBO, JitTrace_ELBO, TraceEnum_ELBO, JitTraceEnum_ELBO, SVI
from pyro.infer.mcmc import MCMC, NUTS
from pyro.infer.autoguide import AutoDiagonalNormal
from pyro.optim import Adam
smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.8.0')
###Output
_____no_output_____
###Markdown
IntroductionPyTorch 1.0 includes a [jit compiler](https://pytorch.org/docs/master/jit.html) to speed up models. You can think of compilation as a "static mode", whereas PyTorch usually operates in "eager mode".Pyro supports the jit compiler in two ways. First you can use compiled functions inside Pyro models (but those functions cannot contain Pyro primitives). Second, you can use Pyro's jit inference algorithms to compile entire inference steps; in static models this can reduce the Python overhead of Pyro models and speed up inference.The rest of this tutorial focuses on Pyro's jitted inference algorithms: [JitTrace_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_elbo.JitTrace_ELBO), [JitTraceGraph_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.tracegraph_elbo.JitTraceGraph_ELBO), [JitTraceEnum_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.traceenum_elbo.JitTraceEnum_ELBO), [JitMeanField_ELBO](http://docs.pyro.ai/en/dev/inference_algos.htmlpyro.infer.trace_mean_field_elbo.JitTraceMeanField_ELBO), [HMC(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.HMC), and [NUTS(jit_compile=True)](http://docs.pyro.ai/en/dev/mcmc.htmlpyro.infer.mcmc.NUTS). For further reading, see the [examples/](https://github.com/pyro-ppl/pyro/tree/dev/examples) directory, where most examples include a `--jit` option to run in compiled mode. A simple modelLet's start with a simple Gaussian model and an [autoguide](http://docs.pyro.ai/en/dev/infer.autoguide.html).
###Code
def model(data):
loc = pyro.sample("loc", dist.Normal(0., 10.))
scale = pyro.sample("scale", dist.LogNormal(0., 3.))
with pyro.plate("data", data.size(0)):
pyro.sample("obs", dist.Normal(loc, scale), obs=data)
guide = AutoDiagonalNormal(model)
data = dist.Normal(0.5, 2.).sample((100,))
###Output
_____no_output_____
###Markdown
First let's run as usual with an SVI object and `Trace_ELBO`.
###Code
%%time
pyro.clear_param_store()
elbo = Trace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 2.71 s, sys: 31.4 ms, total: 2.74 s
Wall time: 2.76 s
###Markdown
Next to run with a jit compiled inference, we simply replace```diff- elbo = Trace_ELBO()+ elbo = JitTrace_ELBO()```Also note that the `AutoDiagonalNormal` guide behaves a little differently on its first invocation (it runs the model to produce a prototype trace), and we don't want to record this warmup behavior when compiling. Thus we call the `guide(data)` once to initialize, then run the compiled SVI,
###Code
%%time
pyro.clear_param_store()
guide(data) # Do any lazy initialization before compiling.
elbo = JitTrace_ELBO()
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(2 if smoke_test else 1000):
svi.step(data)
###Output
CPU times: user 1.1 s, sys: 30.4 ms, total: 1.13 s
Wall time: 1.16 s
###Markdown
Notice that we have a more than 2x speedup for this small model.Let us now use the same model, but we will instead use MCMC to generate samples from the model's posterior. We will use the No-U-Turn(NUTS) sampler.
###Code
%%time
nuts_kernel = NUTS(model)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We can compile the potential energy computation in NUTS using the `jit_compile=True` argument to the NUTS kernel. We also silence JIT warnings due to the presence of tensor constants in the model by using `ignore_jit_warnings=True`.
###Code
%%time
nuts_kernel = NUTS(model, jit_compile=True, ignore_jit_warnings=True)
pyro.set_rng_seed(1)
mcmc_run = MCMC(nuts_kernel, num_samples=100).run(data)
###Output
_____no_output_____
###Markdown
We notice a significant increase in sampling throughput when JIT compilation is enabled. Varying structureTime series models often run on datasets of multiple time series with different lengths. To accommodate varying structure like this, Pyro requires models to separate all model inputs into tensors and non-tensors.$^\dagger$- Non-tensor inputs should be passed as `**kwargs` to the model and guide. These can determine model structure, so that a model is compiled for each value of the passed `**kwargs`.- Tensor inputs should be passed as `*args`. These must not determine model structure. However `len(args)` may determine model structure (as is used e.g. in semisupervised models).To illustrate this with a time series model, we will pass in a sequence of observations as a tensor `arg` and the sequence length as a non-tensor `kwarg`:
###Code
def model(sequence, num_sequences, length, state_dim=16):
# This is a Gaussian HMM model.
with pyro.plate("states", state_dim):
trans = pyro.sample("trans", dist.Dirichlet(0.5 * torch.ones(state_dim)))
emit_loc = pyro.sample("emit_loc", dist.Normal(0., 10.))
emit_scale = pyro.sample("emit_scale", dist.LogNormal(0., 3.))
# We're doing manual data subsampling, so we need to scale to actual data size.
with poutine.scale(scale=num_sequences):
# We'll use enumeration inference over the hidden x.
x = 0
for t in pyro.markov(range(length)):
x = pyro.sample("x_{}".format(t), dist.Categorical(trans[x]),
infer={"enumerate": "parallel"})
pyro.sample("y_{}".format(t), dist.Normal(emit_loc[x], emit_scale),
obs=sequence[t])
guide = AutoDiagonalNormal(poutine.block(model, expose=["trans", "emit_scale", "emit_loc"]))
# This is fake data of different lengths.
lengths = [24] * 50 + [48] * 20 + [72] * 5
sequences = [torch.randn(length) for length in lengths]
###Output
_____no_output_____
###Markdown
Now let's run SVI as usual.
###Code
%%time
pyro.clear_param_store()
elbo = TraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 52.4 s, sys: 270 ms, total: 52.7 s
Wall time: 52.8 s
###Markdown
Again we'll simply swap in a `Jit*` implementation```diff- elbo = TraceEnum_ELBO(max_plate_nesting=1)+ elbo = JitTraceEnum_ELBO(max_plate_nesting=1)```Note that we are manually specifying the `max_plate_nesting` arg. Usually Pyro can figure this out automatically by running the model once on the first invocation; however to avoid this extra work when we run the compiler on the first step, we pass this in manually.
###Code
%%time
pyro.clear_param_store()
# Do any lazy initialization before compiling.
guide(sequences[0], num_sequences=len(sequences), length=len(sequences[0]))
elbo = JitTraceEnum_ELBO(max_plate_nesting=1)
svi = SVI(model, guide, Adam({'lr': 0.01}), elbo)
for i in range(1 if smoke_test else 10):
for sequence in sequences:
svi.step(sequence, # tensor args
num_sequences=len(sequences), length=len(sequence)) # non-tensor args
###Output
CPU times: user 21.9 s, sys: 201 ms, total: 22.1 s
Wall time: 22.2 s
|
simple_stat_election20.ipynb | ###Markdown
###Code
library(tidyverse)
Election20df = read_csv("https://raw.githubusercontent.com/tonmcg/US_County_Level_Election_Results_08-20/master/2020_US_County_Level_Presidential_Results.csv")
head(Election20df)
# Question: What is the total number of votes?
Election20df %>% select ( total_votes ) %>% sum()
sum( Election20df$total_votes ) /1E6
###Output
_____no_output_____
###Markdown
Question: What are the total numbers of votes for GOP and DEM?
###Code
Election20df %>% select ( votes_gop ) %>% sum()
Election20df %>% select ( votes_dem ) %>% sum()
mystate = "California"
Californiadf <-
Election20df %>% filter( state_name == mystate) %>% arrange( per_point_diff)
names( Californiadf )[8] = "percentage_for_GOP"
ggplot(Californiadf, aes(percentage_for_GOP)) + geom_histogram()
Statedf <-
Election20df %>% select( state_name, votes_gop, votes_dem, total_votes ) %>% group_by( state_name ) %>% summarise_if( is.numeric, sum)
Statedf$percentage_for_GOP = Statedf$votes_gop / Statedf$total_votes
ggplot(Statedf, aes(percentage_for_GOP)) + geom_histogram()
# Question: Find out which state has nearly a 95% DEM voting percentage?
# There are many ways to do this.
Statedf %>% filter( percentage_for_GOP < 0.1 )
mean( Statedf$percentage_for_GOP) #average
quantile( Statedf$percentage_for_GOP )
###Output
_____no_output_____
###Markdown
Label the states as deep red, red, swing, blue, or deep blue. Reference: https://stackoverflow.com/questions/21050021/create-category-based-on-range-in-r
###Code
groups = cut( Statedf$percentage_for_GOP, c(0, 0.4, 0.47, 0.53, 0.6, 1) )
levels(groups) = c("deepblue", "blue", "swing", "red", "deepred")
Statedf$groups = groups
Censusdf = read_csv("https://raw.githubusercontent.com/hongqin/USA-census-county-level/main/USA-County-level-census-2010-2019.csv")
head(Censusdf)
Election20df$Location = paste( Election20df$county_name, Election20df$state_name, sep=", " )
Election20df$Location %in% Censusdf$Location
EleCen.df = merge( Election20df, Censusdf, by="Location")
Statedf2 <- EleCen.df %>% select( state_name, votes_gop, votes_dem, total_votes, '2019' ) %>% group_by( state_name ) %>% summarise_if( is.numeric, sum)
head(Statedf2)
names( Statedf2)[5] = "population"
Statedf$population = Statedf2$population[match( Statedf$state_name , Statedf2$state_name ) ]
model1 = lm( Statedf$percentage_for_GOP ~ Statedf$population)
summary(model1)
model2 = lm( Statedf$population ~ Statedf$groups)
summary(model2)
ggplot( Statedf, aes(x=groups, y=population)) + geom_point()
StateArea = read_csv("https://raw.githubusercontent.com/hongqin/data-USstates/master/state-areas.csv")
names( StateArea) = c("state_name", "area")
Statedf$area = StateArea$area[ match( Statedf$state_name , StateArea$state_name ) ]
Statedf$pop_density = Statedf$population / Statedf$area
model = lm( Statedf$percentage_for_GOP ~ Statedf$pop_density)
summary(model)
ggplot( Statedf, aes(x=pop_density, y=percentage_for_GOP)) + geom_point()
Statedf3 <- Statedf %>% filter( percentage_for_GOP > 0.1)
ggplot( Statedf3, aes(x=pop_density, y=percentage_for_GOP)) +
geom_point() +
geom_smooth(method='lm')
summary(lm(Statedf3$percentage_for_GOP~ Statedf3$pop_density))
ggplot( Statedf3, aes(x=groups, y=pop_density)) + geom_point()
ggplot( Statedf3, aes(x=groups, y=pop_density)) + geom_boxplot()
deepred_pop_densities <-
Statedf3 %>% filter( groups == "deepred") %>% select( pop_density)
deepblue_pop_densities <-
Statedf3 %>% filter( groups == "deepblue") %>% select( pop_density)
t.test( deepblue_pop_densities, deepred_pop_densities, alternative = "greater")
ggplot(Statedf, aes(x=state_name, y=percentage_for_GOP)) + geom_bar(stat='identity', width=.5)+ theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
ggplot(Statedf, aes(x=reorder(state_name, percentage_for_GOP), y=percentage_for_GOP)) + geom_bar(stat='identity', width=.5)+ theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
###Output
_____no_output_____
###Markdown
###Code
library(tidyverse)
Election20df = read_csv("https://raw.githubusercontent.com/tonmcg/US_County_Level_Election_Results_08-20/master/2020_US_County_Level_Presidential_Results.csv")
head(Election20df)
###Output
Warning message in system("timedatectl", intern = TRUE):
“running command 'timedatectl' had status 1”
── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
✔ ggplot2 3.3.3     ✔ purrr   0.3.4
✔ tibble  3.1.2     ✔ dplyr   1.0.6
✔ tidyr   1.1.3     ✔ stringr 1.4.0
✔ readr   1.4.0     ✔ forcats 0.5.1
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
── Column specification ────────────────────────────────────────────────────────
cols(
  state_name = col_character(),
  county_fips = col_character(),
  county_name = col_character(),
  votes_gop = col_double(),
  votes_dem = col_double(),
  total_votes = col_double(),
  diff = col_double(),
  per_gop = col_double(),
  per_dem = col_double(),
  per_point_diff = col_double()
)
###Markdown
Question: What is the total number of votes?
###Code
Election20df %>% select ( total_votes ) %>% sum()
sum( Election20df$total_votes ) /1E6
###Output
_____no_output_____
###Markdown
Question: What are the total numbers of votes for GOP and DEM?
###Code
Election20df %>% select ( votes_gop ) %>% sum()
Election20df %>% select ( votes_dem ) %>% sum()
mystate = "California"
Californiadf <-
Election20df %>% filter( state_name == mystate) %>% arrange( per_point_diff)
names( Californiadf )[8] = "percentage_for_GOP"
ggplot(Californiadf, aes(percentage_for_GOP)) + geom_histogram()
###Output
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
###Markdown
Examine the results by state
###Code
Statedf <-
Election20df %>% select( state_name, votes_gop, votes_dem, total_votes ) %>% group_by( state_name ) %>% summarise_if( is.numeric, sum)
Statedf$percentage_for_GOP = Statedf$votes_gop / Statedf$total_votes
ggplot(Statedf, aes(percentage_for_GOP)) + geom_histogram()
###Output
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
###Markdown
Question: Find out which state has nearly a 95% DEM voting percentage?
###Code
# There are many ways to do this.
Statedf %>% filter( percentage_for_GOP < 0.1 )
mean( Statedf$percentage_for_GOP) #average
quantile( Statedf$percentage_for_GOP )
###Output
_____no_output_____
###Markdown
Label the states as deep red, red, swing, blue, or deep blue. Reference: https://stackoverflow.com/questions/21050021/create-category-based-on-range-in-r
###Code
groups = cut( Statedf$percentage_for_GOP, c(0, 0.4, 0.47, 0.53, 0.6, 1) )
levels(groups) = c("deepblue", "blue", "swing", "red", "deepred")
Statedf$groups = groups
###Output
_____no_output_____
###Markdown
Get the census data
###Code
Censusdf = read_csv("https://raw.githubusercontent.com/hongqin/USA-census-county-level/main/USA-County-level-census-2010-2019.csv")
head(Censusdf)
Election20df$Location = paste( Election20df$county_name, Election20df$state_name, sep=", " )
Election20df$Location %in% Censusdf$Location
###Output
_____no_output_____
###Markdown
Merge election and census data
###Code
EleCen.df = merge( Election20df, Censusdf, by="Location")
Statedf2 <- EleCen.df %>% select( state_name, votes_gop, votes_dem, total_votes, '2019' ) %>% group_by( state_name ) %>% summarise_if( is.numeric, sum)
head(Statedf2)
names( Statedf2)[5] = "population"
Statedf$population = Statedf2$population[match( Statedf$state_name , Statedf2$state_name ) ]
model1 = lm( Statedf$percentage_for_GOP ~ Statedf$population)
summary(model1)
model2 = lm( Statedf$population ~ Statedf$groups)
summary(model2)
ggplot( Statedf, aes(x=groups, y=population)) + geom_point()
###Output
_____no_output_____
###Markdown
Get the state area data
###Code
StateArea = read_csv("https://raw.githubusercontent.com/hongqin/data-USstates/master/state-areas.csv")
names( StateArea) = c("state_name", "area")
Statedf$area = StateArea$area[ match( Statedf$state_name , StateArea$state_name ) ]
Statedf$pop_density = Statedf$population / Statedf$area
model = lm( Statedf$percentage_for_GOP ~ Statedf$pop_density)
summary(model)
ggplot( Statedf, aes(x=pop_density, y=percentage_for_GOP)) + geom_point()
Statedf3 <- Statedf %>% filter( percentage_for_GOP > 0.1)
ggplot( Statedf3, aes(x=pop_density, y=percentage_for_GOP)) +
geom_point() +
geom_smooth(method='lm')
summary(lm(Statedf3$percentage_for_GOP~ Statedf3$pop_density))
ggplot( Statedf3, aes(x=groups, y=pop_density)) + geom_point()
ggplot( Statedf3, aes(x=groups, y=pop_density)) + geom_boxplot()
deepred_pop_densities <-
Statedf3 %>% filter( groups == "deepred") %>% select( pop_density)
deepblue_pop_densities <-
Statedf3 %>% filter( groups == "deepblue") %>% select( pop_density)
t.test( deepblue_pop_densities, deepred_pop_densities, alternative = "greater")
ggplot(Statedf, aes(x=state_name, y=percentage_for_GOP)) + geom_bar(stat='identity', width=.5)+ theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
ggplot(Statedf, aes(x=reorder(state_name, percentage_for_GOP), y=percentage_for_GOP)) + geom_bar(stat='identity', width=.5)+ theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
###Output
_____no_output_____ |
beginner-lessons/geospatial-data/gd-6.ipynb | ###Markdown
Introduction to Geospatial Data Part 5 of 5 Storing geography in the computer ReminderContinue with the lessonBy continuing with this lesson you are granting your permission to take part in this research study for the Hour of Cyberinfrastructure: Developing Cyber Literacy for GIScience project. In this study, you will be learning about cyberinfrastructure and related concepts using a web-based platform that will take approximately one hour per lesson. Participation in this study is voluntary.Participants in this research must be 18 years or older. If you are under the age of 18 then please exit this webpage or navigate to another website such as the Hour of Code at https://hourofcode.com, which is designed for K-12 students.If you are not interested in participating please exit the browser or navigate to this website: http://www.umn.edu. Your participation is voluntary and you are free to stop the lesson at any time.For the full description please navigate to this website: Gateway Lesson Research Study Permission.
###Code
# This code cell starts the necessary setup for Hour of CI lesson notebooks.
# First, it enables users to hide and unhide code by producing a 'Toggle raw code' button below.
# Second, it imports the hourofci package, which is necessary for lessons and interactive Jupyter Widgets.
# Third, it helps hide/control other aspects of Jupyter Notebooks to improve the user experience
# This is an initialization cell
# It is not displayed because the Slide Type is 'Skip'
from IPython.display import HTML, IFrame, Javascript, display
from ipywidgets import interactive
import ipywidgets as widgets
from ipywidgets import Layout
import getpass # This library allows us to get the username (User agent string)
# import package for hourofci project
import sys
sys.path.append('../../supplementary') # relative path (may change depending on the location of the lesson notebook)
import hourofci
# load javascript to initialize/hide cells, get user agent string, and hide output indicator
# hide code by introducing a toggle button "Toggle raw code"
HTML('''
<script type="text/javascript" src=\"../../supplementary/js/custom.js\"></script>
<style>
.output_prompt{opacity:0;}
</style>
<input id="toggle_code" type="button" value="Toggle raw code">
''')
###Output
_____no_output_____
###Markdown
The world is infinitely complex This is a photo looking down towards the SE on the mountain resort town of Queenstown, New Zealand (at 45.03 N lat, 168.66 E long). How many different kinds of things do you see here? How can we decide what to measure and record? And how can we structure data about this complex world into tables to represent this????? A famous GIScientist once said"People cultivate fields (but manipulate objects)" **This phrase summarizes the most important distinction we make when capturing geospatial data -- Is the world made up of fields or objects? ** by Helen Couclelis, 1992, "People Manipulate Objects (but Cultivate Fields): Beyond the Raster-Vector Debate in GIS" from the book *Theories and Methods of Spatio-Temporal Reasoning in Geographic Space: International Conference GIS — From Space to Territory: Theories and Methods of Spatio-Temporal Reasoning* Pisa, Italy, September 21–23, 1992 (pp.65-77) Think about the picture of Queenstown we looked at earlier. The rolling surface of the landscape is continuous. There's land or water, at various elevations, everywhere. That's a *field*. Elevation is the classic field. There is a value of elevation everywhere. Then consider all the manmade structures in the picture. There are buildings, lightposts, roads. These are *objects*. The object world view is mostly empty, with objects scattered around. So, let's see if you can separate these two perspectives.{{IFrame("supplementary/sort-field-object.html", width=970, height=530)}} Now let's look at some geospatial data that are coded as either objects or fields. Starting with field data, here is a file of elevation measurements in the area to the south of Queenstown which is located near the center top of the image. You can see the lakes in the pale grey color.
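(A quick aside before loading that file: in code, the two world views map onto two very different data structures. The tiny sketch below uses made-up numbers purely for illustration.)
```python
import numpy as np

# Field view: a value exists everywhere, so a grid (array) is the natural fit.
elevation_field = np.zeros((5, 5))   # a tiny 5 x 5 patch of elevation values
elevation_field[2, 3] = 310.0        # the elevation of one grid cell, in metres

# Object view: space is mostly empty, with discrete things scattered through it.
lampposts = [(168.66, -45.03), (168.67, -45.04)]   # made-up point coordinates
```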
###Code
import rasterio
from matplotlib import pyplot
filepath = 'https://dds.cr.usgs.gov/srtm/version1/Islands/S46E168.hgt.zip'
raster = rasterio.open(filepath, 'r')
pyplot.imshow(raster.read(1), cmap='terrain')
pyplot.show()
###Output
_____no_output_____
###Markdown
Now we can look at how the field data is actually stored.
###Code
raster.read()
###Output
_____no_output_____
###Markdown
What we're seeing here is the beginning and end of the first three and last three lines of the file. What's all this??? Field data is usually stored as *rasters*.To store the world into a raster, the surface of the earth is divided into a grid of equal sized cells that covers a specific chunk of the earth, say a square that is 10 m by 10 m. Each cell is given a value that represents the data that has been measured on the earth in that cell. In the raster in this graphic, the building has been coded with the value green and the road has been coded with the value red. So, let's look again at that field data. Run both of these code chunks.
###Code
print("The Raster is", raster.width, "cells wide and", raster.height, "cells high")
raster.bounds
###Output
_____no_output_____
###Markdown
These show us that the NW (top left) corner of the area covered is 45 S latitude and 168 E longitude and the area covered is 1 degree of latitude high and 1 degree of longitude wide. Since 1 degree is 3600 seconds and we have ~1200 cells, this means the cell dimensions are 3600/1200 = ~3 arc seconds of a degree (that's approx 64m wide and 90m high at this latitude). Each row in the file shows us the average elevation value (in meters) in each cell across a row of the grid. Run this code to see the file again.
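Before re-running that, an optional aside: the same grid can be queried one cell at a time. The sketch below assumes the `raster` dataset opened above and uses rasterio's `index()` method to turn a longitude/latitude pair into a row and column.
```python
# Sketch: look up one cell by coordinates (assumes the `raster` opened above).
lon, lat = 168.66, -45.03            # a point near Queenstown
row, col = raster.index(lon, lat)    # map coordinates -> (row, column)
band = raster.read(1)                # the single elevation band as a 2D array
print("cell", (row, col), "elevation:", band[row, col], "m")
```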
###Code
raster.read()
###Output
_____no_output_____
###Markdown
Note that the elevations are much higher in the NE corner (as evidenced by the high values at the end of the first few rows) and lower along the southern edge (shown in the final rows). Now let's look at how object data is stored - hint, it's completely different! And WAY more complex. We'll start simple. When you ask Google to show you all the nearby restaurants on a map, you get a map with a bunch of pins, some with labels. You can click on them and find out information about those places. Those dots represent restaurant objects. For example... Here's a map of Queenstown showing some points of interest. Now we're looking north and the camera point for the photo used earlier is the cleared area at the top of the hill on the left. This link will take you to this map live in Google Maps. Now, let's see how that data is stored in a file. In Try-it Exercise 1 you looked at a point dataset. Remember this? (click the arrow to the left of the code)
###Code
!wget -O ne_50m_populated_places_simple.zip https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_populated_places_simple.zip
!unzip -n ne_50m_populated_places_simple.zip
import geopandas
cities = geopandas.read_file("ne_50m_populated_places_simple.shp")
cities.head()
###Output
_____no_output_____
###Markdown
In the table we just generated, each row has an object ID, some data about various attributes for that object, and then a column with an entry that is the point location. Click back one slide to check this out. Now let's see again how that table can generate the dots on a map...
###Code
from ipyleaflet import Map, GeoData
cities_layer= GeoData(geo_dataframe = cities)
mymap = Map(center=(-43,168), zoom = 5)
mymap.add_layer(cities_layer)
mymap
###Output
_____no_output_____
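###Markdown
Each dot on that map comes straight from the geometry column described two slides back. Here is a minimal sketch of pulling the coordinates back out of it, assuming the `cities` GeoDataFrame loaded above is still in memory.
###Code
# Each entry in the geometry column is a point object
print(cities.geometry.head())
# The .x and .y accessors recover plain longitude and latitude numbers from those points
print(cities.geometry.x.head())
print(cities.geometry.y.head())
###Output
_____no_output_____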
###Markdown
OK, let's get back to fields and objects and how we put them into the computer. Remember this? These two graphics show the two most common data models for geospatial data. Fields are stored as grids called rasters and there is a value everywhere. Objects, which are scattered around mostly empty space, are stored as vectors. So, tell me more about vectors, you say... Vectors usually come in three varieties - points, lines and polygons. Points are good for things like cities on a world map, or lightpoles and signposts on a neighborhood map. Lines are for rivers, roads, railways, boundaries - that sort of thing. Polygons are areas. So they're used for lakes, building footprints, parks. Vector data has two components: the geometry (the coordinates that say where the object is) and the attributes (the data that describe what it is). These components can be stored together in a table by including one or more columns that provide the direct georeference (e.g. lat and long). OR, these components can be stored separately: attributes with an object ID in one table and the geometry labelled with the same IDs in a separate file. By the way, it's important to know that you can't mix up points, lines and polygons in a single geospatial data file. If you want a map that shows points, lines and polygons, then you'll need at least three different datasets, one for each type of vector object. Remember the rivers data in our Try-it Exercise 1? Let's add it to the map. First, we'll get it again, just in case it's not currently loaded. (click the arrow to the left)
###Code
!wget -O ne_10m_rivers_lake_centerlines.zip https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_rivers_lake_centerlines.zip
!unzip -n ne_10m_rivers_lake_centerlines.zip
rivers = geopandas.read_file("ne_10m_rivers_lake_centerlines.shp")
rivers_layer = GeoData(geo_dataframe = rivers, style={'color':'blue'})
###Output
_____no_output_____
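###Markdown
As a quick check of the "one geometry type per file" point above, you can ask each dataset what kinds of geometry it holds. This assumes the `cities` and `rivers` GeoDataFrames from the earlier cells have both finished loading.
###Code
# The cities file should report only point geometries; the rivers file only line geometries
print(cities.geom_type.unique())
print(rivers.geom_type.unique())
###Output
_____no_output_____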
###Markdown
(wait for the asterisk to turn into a number...) then go to the next slide and we'll add it to the cities data...
###Code
mymap2 = Map(center=(-43,168), zoom = 5)
mymap2.add_layer(cities_layer)
mymap2.add_layer(rivers_layer)
mymap2
###Output
_____no_output_____
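###Markdown
Before the practice quiz, here is a small sketch of the first storage option described above: a plain attribute table whose georeference sits in two lat/long columns, converted into point geometries with geopandas. The little table is made-up example data, not one of the lesson's datasets. (The second option, geometry and attributes in separate files linked by an ID, is how the shapefiles we downloaded work: the .shp part holds the geometry and the .dbf part holds the attribute table.)
###Code
import pandas as pd
# A tiny made-up attribute table: the georeference is stored as plain lat/long columns
table = pd.DataFrame({
    'place': ['Queenstown', 'Dunedin'],
    'lat': [-45.03, -45.87],
    'long': [168.66, 170.50]})
# points_from_xy builds the geometry component; the attribute columns stay alongside it
places = geopandas.GeoDataFrame(
    table,
    geometry=geopandas.points_from_xy(table['long'], table['lat']),
    crs="EPSG:4326")
places.head()
###Output
_____no_output_____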
###Markdown
OK, now let's practice these concepts. For each of the following kinds of geospatial data, choose the data model (raster or vector) that it's most likely to be stored in. {{IFrame("supplementary/sort-raster-vector.html", width=970, height=430)}} Well done! Now you know a little bit about geospatial data. If you have worked through this lesson carefully, you should now be able to: 1. Explain what is special about geospatial data. 2. Describe how location can be measured and recorded in geospatial data. 3. Explain the difference between raster and vector data. 4. Identify several different types of geospatial data. 5. Load and view different kinds of geospatial data in Python Notebooks. If you still have time, feel free to go back to the two Try-It exercises and try out downloading some different datasets from the sources. Make maps of different parts of the earth or of different days from the Johns Hopkins data server. If you want to learn more about geospatial data, you can go on to the intermediate Geospatial Data lesson. Or you can go back and complete some of the other introductory lessons as they all touch on the use of geospatial data. Congratulations! **You have finished an Hour of CI!** But, before you go ... 1. Please fill out a very brief questionnaire to provide feedback and help us improve the Hour of CI lessons. It is fast and your feedback is very important to let us know what you learned and how we can improve the lessons in the future. 2. If you would like a certificate, then please type your name below and click "Create Certificate" and you will be presented with a PDF certificate. Take the questionnaire and provide feedback
###Code
# This code cell loads the Interact Textbox that will ask users for their name
# Once they click "Create Certificate" then it will add their name to the certificate template
# And present them a PDF certificate
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
from ipywidgets import interact
def make_cert(learner_name, lesson_name):
    cert_filename = 'hourofci_certificate.pdf'
    # Open the blank certificate template and get a drawing handle for it
    img = Image.open("../../supplementary/hci-certificate-template.jpg")
    draw = ImageDraw.Draw(img)
    # Use the Times typeface shipped with the lessons; fall back to PIL's built-in font if it can't be found
    try:
        cert_font = ImageFont.truetype('../../supplementary/times.ttf', 150)
        cert_font2 = ImageFont.truetype('../../supplementary/times.ttf', 100)
    except OSError:
        cert_font = ImageFont.load_default()
        cert_font2 = ImageFont.load_default()
    # getsize returns the rendered (width, height) of the text, used to center it on the template
    w, h = cert_font.getsize(learner_name)
    draw.text(xy=(1650 - w/2, 1100 - h/2), text=learner_name, fill=(0, 0, 0), font=cert_font)
    # Center the lesson name below the learner's name, using the smaller font
    w2, h2 = cert_font2.getsize(lesson_name)
    draw.text(xy=(1650 - w2/2, 1100 - h/2 + 750), text=lesson_name, fill=(0, 0, 0), font=cert_font2)
    img.save(cert_filename, "PDF", resolution=100.0)
    return cert_filename
interact_cert=interact.options(manual=True, manual_name="Create Certificate")
@interact_cert(name="Your Name")
def f(name):
print("Congratulations",name)
filename = make_cert(name, 'The Geospatial Data Beginner Lesson')
print("Download your certificate by clicking the link below.")
###Output
_____no_output_____
###Markdown
Download your certificate
###Code
IFrame(src = '../../supplementary/confetti.html', width="700", height="430")
###Output
_____no_output_____
The world is infinitely complex This is a photo looking down towards the SE on the mountain resort town of Queenstown, New Zealand (at 45.03 S lat, 168.66 E long). How many different kinds of things do you see here? How can we decide what to measure and record? And how can we structure data about this complex world into tables to represent it? A famous GIScientist once said: "People cultivate fields (but manipulate objects)" **This phrase summarizes the most important distinction we make when capturing geospatial data -- Is the world made up of fields or objects?** by Helen Couclelis, 1992, "People Manipulate Objects (but Cultivate Fields): Beyond the Raster-Vector Debate in GIS" from the book *Theories and Methods of Spatio-Temporal Reasoning in Geographic Space: International Conference GIS — From Space to Territory: Theories and Methods of Spatio-Temporal Reasoning* Pisa, Italy, September 21–23, 1992 (pp. 65-77) Think about the picture of Queenstown we looked at earlier. The rolling surface of the landscape is continuous. There's land or water, at various elevations, everywhere. That's a *field*. Elevation is the classic field. There is a value of elevation everywhere. Then consider all the manmade structures in the picture. There are buildings, lightposts, roads. These are *objects*. The object world view is mostly empty, with objects scattered around. So, let's see if you can separate these two perspectives. {{IFrame("supplementary/sort-field-object.html", width=970, height=530)}} Now let's look at some geospatial data that are coded as either objects or fields. Starting with field data, here is a file of elevation measurements in the area to the south of Queenstown, which is located near the center top of the image. You can see the lakes in the pale grey color.
###Code
import rasterio
from matplotlib import pyplot as plt
from matplotlib.pyplot import figure
figure(figsize=(10,10))
filepath = 'supplementary/queenstown-90m-DEM.tif'
raster = rasterio.open(filepath, 'r')
plt.imshow(raster.read(1), cmap='terrain')
plt.show()
###Output
_____no_output_____
###Markdown
Now we can look at how the field data is actually stored.
###Code
raster.read()
###Output
_____no_output_____
###Markdown
What we're seeing here is the beginning and end of the first three and last three lines of the file. What's all this??? Field data is usually stored as *rasters*.To store the world into a raster, the surface of the earth is divided into a grid of equal sized cells that covers a specific chunk of the earth, say a square that is 10 m by 10 m. Each cell is given a value that represents the data that has been measured on the earth in that cell. In the raster in this graphic, the building has been coded with the value green and the road has been coded with the value red. So, let's look again at that field data. Run both of these code chunks.
###Code
print("The Raster is", raster.width, "cells wide and", raster.height, "cells high")
raster.bounds
###Output
_____no_output_____
###Markdown
These show us that the NW (top left) corner of the area covered is 45 S latitude and 168 E longitude and the area covered is 1 degree of latitude high and 1 degree of longitude wide. Since 1 degree is 3600 seconds and we have ~1200 cells, this means the cell dimensions are 3600/1200 = ~3 arc seconds of a degree (that's approx 64m wide and 90m high at this latitude). Each row in the file shows us the average elevation value (in meters) in each cell across a row of the grid. Run this code to see the file again.
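Before that, here is a small sanity check (a sketch that just reuses the `raster` object opened above) which recomputes that ~3 arc-second cell size directly from the bounds and the grid dimensions:
###Code
# Recompute the cell size implied by the bounds and grid dimensions (in arc-seconds)
width_deg = raster.bounds.right - raster.bounds.left
height_deg = raster.bounds.top - raster.bounds.bottom
cell_width_arcsec = width_deg / raster.width * 3600
cell_height_arcsec = height_deg / raster.height * 3600
print("Cell size: {:.2f} x {:.2f} arc-seconds".format(cell_width_arcsec, cell_height_arcsec))
###Output
_____no_output_____
###Markdown
With the georeferencing confirmed, run the next cell to see the raw values again.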
###Code
raster.read()
###Output
_____no_output_____
###Markdown
Note that the elevations are much higher in the NE corner (as evidenced by the high values at the end of the first few rows) and lower along the southern edge (shown in the final rows). Now let's look at how object data is stored - hint, it's completely different! And WAY more complex. We'll start simple. When you ask Google to show you all the nearby restaurants on a map, you get a map with a bunch of pins, some with labels. You can click on them and find out information about those places. Those dots represent restaurant objects.For example... Here's a map of Queenstown showing some points of interest. Now we're looking north and the camera point for the photo used earlier is the cleared area at the top of the hill on the left.This link will take you to this map live in Google Maps. Now, let's see how that data is stored in a file. In Try-it Exercise 1 you looked at a point dataset. Remember this? (click the arrow to the left of the code)
###Code
!wget -O ne_50m_populated_places_simple.zip https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_populated_places_simple.zip
!unzip -n ne_50m_populated_places_simple.zip
import geopandas
cities = geopandas.read_file("ne_50m_populated_places_simple.shp")
cities.head()
###Output
_____no_output_____
###Markdown
In the table we just generated, each row has - an object ID- some data about various attributes for that object- then a column with an entry that is the point location Click back one slide to check this out. Now let's see again how that table can generate the dots on a map...
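Before we draw the map, it is worth peeking at that geometry column on its own. This is just a quick sketch using the `cities` table loaded above; for point geometries, geopandas exposes the coordinates directly:
###Code
# Each city's geometry is a single point; x is the longitude and y is the latitude
print(cities.geometry.head())
print(cities.geometry.x.head())
print(cities.geometry.y.head())
###Output
_____no_output_____
###Markdown
Each row's geometry is simply a point whose x and y values are the direct georeference described above. Now, on to the map...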
###Code
from ipyleaflet import Map, GeoData
cities_layer= GeoData(geo_dataframe = cities)
mymap = Map(center=(-43,168), zoom = 5)
mymap.add_layer(cities_layer)
mymap
###Output
_____no_output_____
###Markdown
OK, let's get back to fields and objects and how we put them into the computer. Remember this? These two graphics show the two most common data models for geospatial data. Fields are stored as grids called rasters and there is a value everywhere. Objects, which are scattered around mostly empty space, are stored as vectors. So, tell me more about vectors, you say... Vectors usually come in three varieties - points, lines and polygons. Points are good for things like cities on a world map, or lightpoles and signposts on a neighborhood map. Lines are for rivers, roads, railways, boundaries - that sort of thing. Polygons are areas. So they're used for lakes, building footprints, parks. Vector data has two components. These components can be stored together in a table by including one or more columns that provide the direct georeference (e.g. lat and long). OR, these components can be stored separately. Attributes with an object ID in one table and the geometry labelled with the same IDs in a separate file. By the way, it's important to know that you can't mix up points, lines and polygons in a single geospatial data file. If you want a map that shows points, lines and polygons, then you'll need at least three different datasets, one for each type of vector object.Remember the rivers data in our Try-it Exercise 1? Let's add it to the map. First, we'll get it again, just in case it's not currently loaded. (click the arrow to the left)
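As a short aside (a sketch, not part of the original exercise), the three vector geometry types can be built directly with the shapely library that geopandas uses under the hood — the coordinates below are made up purely for illustration:
###Code
from shapely.geometry import Point, LineString, Polygon

# A point (e.g. a city), a line (e.g. a river segment) and a polygon (e.g. a lake outline)
city = Point(168.66, -45.03)
river = LineString([(168.0, -45.5), (168.3, -45.2), (168.7, -45.0)])
lake = Polygon([(168.5, -45.1), (168.7, -45.1), (168.7, -44.9), (168.5, -44.9)])
print(city)
print("river length (in degrees):", river.length)
print("lake area (in square degrees):", lake.area)
###Output
_____no_output_____
###Markdown
Aside over — run the next cell to fetch the rivers data.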
###Code
!wget -O ne_10m_rivers_lake_centerlines.zip https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_rivers_lake_centerlines.zip
!unzip -n ne_10m_rivers_lake_centerlines.zip
rivers = geopandas.read_file("ne_10m_rivers_lake_centerlines.shp")
rivers_layer = GeoData(geo_dataframe = rivers, style={'color':'blue'})
###Output
_____no_output_____
###Markdown
(wait for the asterisk to turn into a number...) then go to the next slide and we'll add it to the cities data...
###Code
mymap2 = Map(center=(-43,168), zoom = 5)
mymap2.add_layer(cities_layer)
mymap2.add_layer(rivers_layer)
mymap2
###Output
_____no_output_____
###Markdown
OK, now let's practice these concepts. For each of the following kinds of geospatial data, choose the data model (raster or vector) that it's most likely to be stored in. {{IFrame("supplementary/sort-raster-vector.html", width=970, height=430)}} Well done! Now you know a little bit about geospatial data. If you have worked through this lesson carefully, you should now be able to: 1. Explain what is special about geospatial data.2. Describe how location can be measured and recorded in geospatial data.3. Explain the difference between raster and vector data.4. Identify several different types of geospatial data.5. Load and view different kinds of geospatial data in Python Notebooks. If you still have time, feel free to go back to the two Try-It exercises and try out downloading some different datasets from the sources. Make maps of different parts of the earth or of different days from the Johns Hopkins data server. If you want to learn more about geospatial data, you can go on to the intermediate Geospatial Data lesson.Or you can go back and complete some of the other introductory lessons as they all touch on the use of geospatial data. Congratulations!**You have finished an Hour of CI!**But, before you go ... 1. Please fill out a very brief questionnaire to provide feedback and help us improve the Hour of CI lessons. It is fast and your feedback is very important to let us know what you learned and how we can improve the lessons in the future.2. If you would like a certificate, then please type your name below and click "Create Certificate" and you will be presented with a PDF certificate.Take the questionnaire and provide feedback
###Code
# This code cell loads the Interact Textbox that will ask users for their name
# Once they click "Create Certificate" then it will add their name to the certificate template
# And present them a PDF certificate
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
from ipywidgets import interact
def make_cert(learner_name, lesson_name):
cert_filename = 'hourofci_certificate.pdf'
img = Image.open("../../supplementary/hci-certificate-template.jpg")
draw = ImageDraw.Draw(img)
cert_font = ImageFont.load_default()
cert_font = ImageFont.truetype('../../supplementary/cruft.ttf', 150)
cert_fontsm = ImageFont.truetype('../../supplementary/cruft.ttf', 80)
w,h = cert_font.getsize(learner_name)
draw.text( xy = (1650-w/2,1100-h/2), text = learner_name, fill=(0,0,0),font=cert_font)
w,h = cert_fontsm.getsize(lesson_name)
draw.text( xy = (1650-w/2,1100-h/2 + 750), text = lesson_name, fill=(0,0,0),font=cert_fontsm)
img.save(cert_filename, "PDF", resolution=100.0)
return cert_filename
interact_cert=interact.options(manual=True, manual_name="Create Certificate")
@interact_cert(name="Your Name")
def f(name):
print("Congratulations",name)
filename = make_cert(name, 'Beginner Geospatial Data')
print("Download your certificate by clicking the link below.")
###Output
_____no_output_____ |
Practice 09 - BPR for SLIM and MF.ipynb | ###Markdown
Recommender Systems 2020/21 Practice - BPR for SLIM and MF State of the art machine learning algorithm A few info about gradient descent
###Code
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import random
from scipy import stats
from scipy.optimize import fmin
###Output
_____no_output_____
###Markdown
Gradient Descent Gradient descent, also known as steepest descent, is an optimization algorithm for finding the local minimum of a function. To find a local minimum, the function "steps" in the direction of the negative of the gradient. Gradient ascent is the same as gradient descent, except that it steps in the direction of the positive of the gradient and therefore finds local maximums instead of minimums. The algorithm of gradient descent can be outlined as follows: 1: Choose initial guess $x_0$ 2: for k = 0, 1, 2, ... do 3: $s_k$ = -$\nabla f(x_k)$ 4: choose $\alpha_k$ to minimize $f(x_k+\alpha_k s_k)$ 5: $x_{k+1} = x_k + \alpha_k s_k$ 6: end for As a simple example, let's find a local minimum for the function $f(x) = x^3-2x^2+2$
###Code
f = lambda x: x**3-2*x**2+2
x = np.linspace(-1,2.5,1000)
plt.plot(x,f(x))
plt.xlim([-1,2.5])
plt.ylim([0,3])
plt.show()
###Output
_____no_output_____
###Markdown
We can see from plot above that our local minimum is gonna be near around 1.4 or 1.5 (on the x-axis), but let's pretend that we don't know that, so we set our starting point (arbitrarily, in this case) at $x_0 = 2$
###Code
x_old = 0
x_new = 2 # The algorithm starts at x=2
n_k = 0.1 # step size
precision = 0.0001
x_list, y_list = [x_new], [f(x_new)]
# returns the value of the derivative of our function
def f_gradient(x):
return 3*x**2-4*x
while abs(x_new - x_old) > precision:
x_old = x_new
# Gradient descent step
s_k = -f_gradient(x_old)
x_new = x_old + n_k * s_k
x_list.append(x_new)
y_list.append(f(x_new))
print ("Local minimum occurs at: {:.2f}".format(x_new))
print ("Number of steps:", len(x_list))
###Output
Local minimum occurs at: 1.33
Number of steps: 17
###Markdown
The figures below show the route that was taken to find the local minimum.
###Code
plt.figure(figsize=[10,3])
plt.subplot(1,2,1)
plt.scatter(x_list,y_list,c="r")
plt.plot(x_list,y_list,c="r")
plt.plot(x,f(x), c="b")
plt.xlim([-1,2.5])
plt.ylim([0,3])
plt.title("Gradient descent")
plt.subplot(1,2,2)
plt.scatter(x_list,y_list,c="r")
plt.plot(x_list,y_list,c="r")
plt.plot(x,f(x), c="b")
plt.xlim([1.2,2.1])
plt.ylim([0,3])
plt.title("Gradient descent (zoomed in)")
plt.show()
###Output
_____no_output_____
###Markdown
Recap on BPR S. Rendle et al. BPR: Bayesian Personalized Ranking from Implicit Feedback. UAI 2009. The usual approach for item recommenders is to predict a personalized score $\hat{x}_{ui}$ for an item that reflects the preference of the user for the item. Then the items are ranked by sorting them according to that score. Machine learning approaches are typically fit by using observed items as a positive sample and missing ones for the negative class. A perfect model would thus be useless, as it would classify as negative (non-interesting) all the items that were non-observed at training time. The only reason why such methods work is regularization. BPR uses a different approach. The training dataset is composed of triplets $(u,i,j)$ representing that user u is assumed to prefer i over j. For an implicit dataset this means that u observed i but not j:$$D_S := \{(u,i,j) \mid i \in I_u^+ \wedge j \in I \setminus I_u^+\}$$ BPR-OPT A machine learning model can be represented by a parameter vector $\Theta$ which is found at fitting time. BPR wants to find the parameter vector that is most probable given the desired, but latent, preference structure $>_u$:$$p(\Theta \mid >_u) \propto p(>_u \mid \Theta)p(\Theta) $$$$\prod_{u\in U} p(>_u \mid \Theta) = \dots = \prod_{(u,i,j) \in D_S} p(i >_u j \mid \Theta) $$The probability that a user really prefers item $i$ to item $j$ is defined as:$$ p(i >_u j \mid \Theta) := \sigma(\hat{x}_{uij}(\Theta)) $$where $\sigma$ represents the logistic sigmoid and $\hat{x}_{uij}(\Theta)$ is an arbitrary real-valued function of $\Theta$ (the output of your arbitrary model). To complete the Bayesian setting, we define a prior density for the parameters:$$p(\Theta) \sim N(0, \Sigma_\Theta)$$And we can now formulate the maximum posterior estimator:$$BPR-OPT := \log p(\Theta \mid >_u) $$$$ = \log p(>_u \mid \Theta) p(\Theta) $$$$ = \log \prod_{(u,i,j) \in D_S} \sigma(\hat{x}_{uij})p(\Theta) $$$$ = \sum_{(u,i,j) \in D_S} \log \sigma(\hat{x}_{uij}) + \log p(\Theta) $$$$ = \sum_{(u,i,j) \in D_S} \log \sigma(\hat{x}_{uij}) - \lambda_\Theta ||\Theta||^2 $$where $\lambda_\Theta$ are model-specific regularization parameters. BPR learning algorithm Once we have obtained the log-likelihood, we need to maximize it in order to find our optimal $\Theta$. As the criterion is differentiable, gradient descent algorithms are an obvious choice for maximization. Gradient descent comes in many fashions; you can find an overview in Cesare Bernardis' thesis https://www.politesi.polimi.it/bitstream/10589/133864/3/tesi.pdf on pages 18-19-20. A nice post about momentum is available here: https://distill.pub/2017/momentum/ The basic version of gradient descent consists in evaluating the gradient using all the available samples and then performing a single update. The problem with this, in our case, is that our training dataset is very skewed. Suppose an item i is very popular. Then we have many terms of the form $\hat{x}_{uij}$ in the loss, because for many users u the item i is compared against all negative items j. The other popular approach is stochastic gradient descent, where an update is performed for each training sample. This is a better approach, but the order in which the samples are traversed is crucial.
To solve this issue BPR uses a stochastic gradient descent algorithm that chooses the triplets randomly. The gradient of BPR-OPT with respect to the model parameters is: $$\frac{\partial BPR-OPT}{\partial \Theta} = \sum_{(u,i,j) \in D_S} \frac{\partial}{\partial \Theta} \log \sigma (\hat{x}_{uij}) - \lambda_\Theta \frac{\partial}{\partial\Theta} || \Theta ||^2$$$$ = \sum_{(u,i,j) \in D_S} \frac{-e^{-\hat{x}_{uij}}}{1+e^{-\hat{x}_{uij}}} \frac{\partial}{\partial \Theta}\hat{x}_{uij} - \lambda_\Theta \Theta $$ BPR-MF In order to practically apply this learning schema to an existing algorithm, we first split the real-valued preference term: $\hat{x}_{uij} := \hat{x}_{ui} - \hat{x}_{uj}$. And now we can apply any standard collaborative filtering model that predicts $\hat{x}_{ui}$. The problem of predicting $\hat{x}_{ui}$ can be seen as the task of estimating a matrix $X:U×I$. With matrix factorization the target matrix $X$ is approximated by the matrix product of two low-rank matrices $W:|U|\times k$ and $H:|I|\times k$:$$X := WH^t$$The prediction formula can also be written as:$$\hat{x}_{ui} = \langle w_u,h_i \rangle = \sum_{f=1}^k w_{uf} \cdot h_{if}$$Besides the dot product ⟨⋅,⋅⟩, in general any kernel can be used. We can now specify the derivatives:$$ \frac{\partial}{\partial \theta} \hat{x}_{uij} = \begin{cases}(h_{if} - h_{jf}) \text{ if } \theta=w_{uf}, \\w_{uf} \text{ if } \theta = h_{if}, \\-w_{uf} \text{ if } \theta = h_{jf}, \\0 \text{ else }\end{cases} $$This basically means: if user $u$ prefers $i$ over $j$, we do the following: - increase the relevance (according to $u$) of features belonging to $i$ but not to $j$, and vice versa; - increase the relevance of features assigned to $i$; - decrease the relevance of features assigned to $j$. We're now ready to look at some code! Let's implement SLIM BPR.
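First, though, a tiny numeric sketch (toy numbers only) of the two quantities every BPR update uses: the log-likelihood term $\log \sigma(\hat{x}_{uij})$ that BPR-OPT sums, and the factor $e^{-\hat{x}_{uij}}/(1+e^{-\hat{x}_{uij}})$ — the `sigmoid_item` of the code below — that scales each parameter's partial derivative $\partial \hat{x}_{uij} / \partial \Theta$ in the ascent step:
###Code
import numpy as np

# Toy score differences x_uij: positive means item i is already ranked above item j
x_uij = np.array([-2.0, 0.0, 2.0])

log_likelihood = np.log(1 / (1 + np.exp(-x_uij)))   # log sigma(x_uij), summed by BPR-OPT
update_factor = 1 / (1 + np.exp(x_uij))             # e^(-x)/(1+e^(-x)), scales the gradient step

print(log_likelihood)
print(update_factor)
###Output
_____no_output_____
###Markdown
The factor is close to 1 when the pair is badly mis-ranked and close to 0 when it is already ranked correctly, so confidently correct pairs barely move the parameters. Now, on to the actual implementation.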
###Code
import time
import numpy as np
###Output
_____no_output_____
###Markdown
What do we need for a SLIM BPR?* Item-Item similarity matrix* Computing prediction* Update rule* Training loop and some patience
###Code
from Notebooks_utils.data_splitter import train_test_holdout
from Data_manager.Movielens.Movielens10MReader import Movielens10MReader
data_reader = Movielens10MReader()
data_loaded = data_reader.load_data()
URM_all = data_loaded.get_URM_all()
URM_train, URM_test = train_test_holdout(URM_all, train_perc = 0.8)
###Output
Movielens10M: Verifying data consistency...
Movielens10M: Verifying data consistency... Passed!
DataReader: current dataset is: <class 'Data_manager.Dataset.Dataset'>
Number of items: 10681
Number of users: 69878
Number of interactions in URM_all: 10000054
Value range in URM_all: 0.50-5.00
Interaction density: 1.34E-02
Interactions per user:
Min: 2.00E+01
Avg: 1.43E+02
Max: 7.36E+03
Interactions per item:
Min: 0.00E+00
Avg: 9.36E+02
Max: 3.49E+04
Gini Index: 0.57
ICM name: ICM_genres, Value range: 1.00 / 1.00, Num features: 20, feature occurrences: 21564, density 1.01E-01
ICM name: ICM_tags, Value range: 1.00 / 69.00, Num features: 10217, feature occurrences: 108563, density 9.95E-04
ICM name: ICM_all, Value range: 1.00 / 69.00, Num features: 10237, feature occurrences: 130127, density 1.19E-03
###Markdown
Step 1: We create a dense similarity matrix, initialized as zero
###Code
n_users, n_items = URM_train.shape
item_item_S = np.zeros((n_items, n_items), dtype=np.float64)  # np.float64 instead of the deprecated np.float alias
item_item_S
###Output
_____no_output_____
###Markdown
Step 2: We sample a triplet Create a mask of positive interactions. How to build it depends on the data
###Code
URM_mask = URM_train.copy()
URM_mask.data[URM_mask.data <= 3] = 0
URM_mask.eliminate_zeros()
URM_mask
user_id = np.random.choice(n_users)
user_id
###Output
_____no_output_____
###Markdown
Get user seen items and choose one
###Code
user_seen_items = URM_mask.indices[URM_mask.indptr[user_id]:URM_mask.indptr[user_id+1]]
user_seen_items
pos_item_id = np.random.choice(user_seen_items)
pos_item_id
###Output
_____no_output_____
###Markdown
To select a negative item it's faster to just try again than to build a mapping of the non-seen items
###Code
neg_item_selected = False
# It's faster to just try again than to build a mapping of the non-seen items
while (not neg_item_selected):
neg_item_id = np.random.randint(0, n_items)
if (neg_item_id not in user_seen_items):
neg_item_selected = True
neg_item_id
###Output
_____no_output_____
###Markdown
Step 2 - Computing prediction The prediction depends on the model: SLIM, Matrix Factorization... Note that here the data is implicit, so we do not multiply by the user rating (it is always 1); we just sum the similarities of the seen items.
###Code
x_ui = item_item_S[pos_item_id, user_seen_items].sum()
x_uj = item_item_S[neg_item_id, user_seen_items].sum()
print("x_ui is {:.4f}, x_uj is {:.4f}".format(x_ui, x_uj))
###Output
x_ui is 0.0000, x_uj is 0.0000
###Markdown
Step 3 - Computing gradient The gradient depends on the objective function: RMSE, BPR...
###Code
x_uij = x_ui - x_uj
x_uij
###Output
_____no_output_____
###Markdown
The original BPR paper uses the logarithm of the sigmoid of x_ij, whose derivative is the following
###Code
sigmoid_item = 1 / (1 + np.exp(x_uij))
sigmoid_item
###Output
_____no_output_____
###Markdown
Step 4 - Update model How to update depends on the model itself; here we have just one parameter, the similarity matrix, so we perform just one update. In matrix factorization we have two. We need a learning rate, which influences how fast the model will change. Small ones lead to slower convergence but often better results
###Code
learning_rate = 1e-3
item_item_S[pos_item_id, user_seen_items] += learning_rate * sigmoid_item
item_item_S[pos_item_id, pos_item_id] = 0
item_item_S[neg_item_id, user_seen_items] -= learning_rate * sigmoid_item
item_item_S[neg_item_id, neg_item_id] = 0
###Output
_____no_output_____
###Markdown
Usually there is no relevant change in the scores over a single iteration
###Code
x_i = item_item_S[pos_item_id, user_seen_items].sum()
x_j = item_item_S[neg_item_id, user_seen_items].sum()
print("x_i is {:.4f}, x_j is {:.4f}".format(x_i, x_j))
###Output
x_i is 0.0070, x_j is -0.0075
###Markdown
Now we put everything in a training loop
###Code
def sample_triplet():
non_empty_user = False
while not non_empty_user:
user_id = np.random.choice(n_users)
user_seen_items = URM_mask.indices[URM_mask.indptr[user_id]:URM_mask.indptr[user_id+1]]
if len(user_seen_items)>0:
non_empty_user = True
pos_item_id = np.random.choice(user_seen_items)
neg_item_selected = False
# It's faster to just try again then to build a mapping of the non-seen items
while (not neg_item_selected):
neg_item_id = np.random.randint(0, n_items)
if (neg_item_id not in user_seen_items):
neg_item_selected = True
return user_id, pos_item_id, neg_item_id
def train_one_epoch(item_item_S, learning_rate):
start_time = time.time()
for sample_num in range(n_users):
# Sample triplet
user_id, pos_item_id, neg_item_id = sample_triplet()
user_seen_items = URM_mask.indices[URM_mask.indptr[user_id]:URM_mask.indptr[user_id+1]]
# Prediction
x_ui = item_item_S[pos_item_id, user_seen_items].sum()
x_uj = item_item_S[neg_item_id, user_seen_items].sum()
# Gradient
x_uij = x_ui - x_uj
sigmoid_item = 1 / (1 + np.exp(x_uij))
# Update
item_item_S[pos_item_id, user_seen_items] += learning_rate * sigmoid_item
item_item_S[pos_item_id, pos_item_id] = 0
item_item_S[neg_item_id, user_seen_items] -= learning_rate * sigmoid_item
item_item_S[neg_item_id, neg_item_id] = 0
# Print some stats
if (sample_num +1)% 50000 == 0 or (sample_num +1) == n_users:
elapsed_time = time.time() - start_time
samples_per_second = (sample_num +1)/elapsed_time
print("Iteration {} in {:.2f} seconds. Samples per second {:.2f}".format(sample_num+1, elapsed_time, samples_per_second))
return item_item_S, samples_per_second
learning_rate = 1e-6
item_item_S = np.zeros((n_items, n_items), dtype=np.float64)  # np.float64 instead of the deprecated np.float alias
for n_epoch in range(5):
item_item_S, samples_per_second = train_one_epoch(item_item_S, learning_rate)
estimated_seconds = 8e6 * 10 / samples_per_second
print("Estimated time with the previous training speed is {:.2f} seconds, or {:.2f} minutes".format(estimated_seconds, estimated_seconds/60))
###Output
Estimated time with the previous training speed is 3441.25 seconds, or 57.35 minutes
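###Markdown
The practice stops at training, but it is worth seeing how the learned similarity matrix would actually be used. The sketch below (the user index and the top-10 cutoff are arbitrary choices, not part of the original practice) scores every item for one user by summing the similarities of the items in their profile, exactly as in the prediction step above, and then ranks them:
###Code
# Score all items for one user with the trained item_item_S (illustrative sketch)
user_id = 42                      # arbitrary example user
user_profile = (np.asarray(URM_mask[user_id].todense()).ravel() > 0).astype(np.float64)
scores = item_item_S.dot(user_profile)            # score_i = sum of S[i, j] over seen items j
seen_items = URM_mask.indices[URM_mask.indptr[user_id]:URM_mask.indptr[user_id + 1]]
scores[seen_items] = -np.inf                      # never recommend what the user already interacted with
recommended_items = np.argsort(-scores)[:10]      # top-10 ranking
recommended_items
###Output
_____no_output_____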
###Markdown
Common mistakes in using ML (based on last year's presentations)* Use default parameters and then give up when results are not good* Train for just 1 or 2 epochs* Use huge learning rate or regularization parameters: 1, 50, 100 BPR for MF What do we need for BPRMF?* User factor and Item factor matrices* Computing prediction* Update rule* Training loop and some patience Step 1: We create the dense latent factor matrices
###Code
num_factors = 10
user_factors = np.random.random((n_users, num_factors))
item_factors = np.random.random((n_items, num_factors))
user_factors
item_factors
###Output
_____no_output_____
###Markdown
Step 2 - Computing prediction
###Code
user_id, pos_item_id, neg_item_id = sample_triplet()
(user_id, pos_item_id, neg_item_id)
x_ui = np.dot(user_factors[user_id,:], item_factors[pos_item_id,:])
x_uj = np.dot(user_factors[user_id,:], item_factors[neg_item_id,:])
print("x_ui is {:.4f}, x_uj is {:.4f}".format(x_ui, x_uj))
###Output
x_ui is 2.0311, x_uj is 2.0837
###Markdown
Step 3 - Computing gradient
###Code
x_uij = x_ui - x_uj
x_uij
sigmoid_item = 1 / (1 + np.exp(x_uij))
sigmoid_item
###Output
_____no_output_____
###Markdown
Step 4 - Update model
###Code
regularization = 1e-4
learning_rate = 1e-2
H_i = item_factors[pos_item_id,:]
H_j = item_factors[neg_item_id,:]
W_u = user_factors[user_id,:]
user_factors[user_id,:] += learning_rate * (sigmoid_item * ( H_i - H_j ) - regularization * W_u)
item_factors[pos_item_id,:] += learning_rate * (sigmoid_item * ( W_u ) - regularization * H_i)
item_factors[neg_item_id,:] += learning_rate * (sigmoid_item * (-W_u ) - regularization * H_j)
x_ui = np.dot(user_factors[user_id,:], item_factors[pos_item_id,:])
x_uj = np.dot(user_factors[user_id,:], item_factors[neg_item_id,:])
print("x_i is {:.4f}, x_j is {:.4f}".format(x_ui, x_uj))
x_uij = x_ui - x_uj
x_uij
def train_one_epoch(user_factors, item_factors, learning_rate):
start_time = time.time()
for sample_num in range(n_users):
# Sample triplet
user_id, pos_item_id, neg_item_id = sample_triplet()
# Prediction
x_ui = np.dot(user_factors[user_id,:], item_factors[pos_item_id,:])
x_uj = np.dot(user_factors[user_id,:], item_factors[neg_item_id,:])
# Gradient
x_uij = x_ui - x_uj
sigmoid_item = 1 / (1 + np.exp(x_uij))
H_i = item_factors[pos_item_id,:]
H_j = item_factors[neg_item_id,:]
W_u = user_factors[user_id,:]
user_factors[user_id,:] += learning_rate * (sigmoid_item * ( H_i - H_j ) - regularization * W_u)
item_factors[pos_item_id,:] += learning_rate * (sigmoid_item * ( W_u ) - regularization * H_i)
item_factors[neg_item_id,:] += learning_rate * (sigmoid_item * (-W_u ) - regularization * H_j)
# Print some stats
if (sample_num +1)% 50000 == 0 or (sample_num +1) == n_users:
elapsed_time = time.time() - start_time
samples_per_second = (sample_num +1)/elapsed_time
print("Iteration {} in {:.2f} seconds. Samples per second {:.2f}".format(sample_num+1, elapsed_time, samples_per_second))
return user_factors, item_factors, samples_per_second
learning_rate = 1e-6
num_factors = 10
user_factors = np.random.random((n_users, num_factors))
item_factors = np.random.random((n_items, num_factors))
for n_epoch in range(5):
user_factors, item_factors, samples_per_second = train_one_epoch(user_factors, item_factors, learning_rate)
###Output
Iteration 50000 in 1.77 seconds. Samples per second 28252.49
Iteration 69878 in 2.66 seconds. Samples per second 26300.70
Iteration 50000 in 1.86 seconds. Samples per second 26826.71
Iteration 69878 in 2.60 seconds. Samples per second 26907.05
Iteration 50000 in 1.85 seconds. Samples per second 26957.57
Iteration 69878 in 2.56 seconds. Samples per second 27334.92
Iteration 50000 in 1.69 seconds. Samples per second 29620.48
Iteration 69878 in 2.35 seconds. Samples per second 29699.76
Iteration 50000 in 1.95 seconds. Samples per second 25631.02
Iteration 69878 in 2.68 seconds. Samples per second 26077.49
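###Markdown
As with SLIM, the trained factors are only useful once we score items with them. A minimal sketch (again with an arbitrary user index, not part of the original practice): the score of every item for a user is just the dot product between the user's latent vector and each item's latent vector.
###Code
# Rank items for one user with the learned latent factors (illustrative sketch)
user_id = 42                                             # arbitrary example user
scores = user_factors[user_id, :].dot(item_factors.T)    # x_ui for every item i
seen_items = URM_mask.indices[URM_mask.indptr[user_id]:URM_mask.indptr[user_id + 1]]
scores[seen_items] = -np.inf                             # exclude already seen items
np.argsort(-scores)[:10]                                 # top-10 recommendation list
###Output
_____no_output_____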
|
schrodinger.ipynb | ###Markdown
Numerical Solution of the Schrödinger Equation Using a Finite Difference, Finite Element and Neural Network Approach Author: Ante Lojic Kapetanovic Course: Modern Physics [FEMT08](https://nastava.fesb.unist.hr/nastava/predmeti/11624) taught by professor Ivica Puljak Date: 2020, 26 Mar Content* [Content](Content)* [Theory behind...](Theory-behind...) * [1 Introduction to Quantum Mechanics](1-Introduction-to-Quantum-Mechanics) * [2 Schrödinger Equation](2-Schrödinger-Equation) * [3 The Use of Wave Equation](3-The-Use-of-Wave-Equation) * [Numerical solution of the Schrödinger equation](Numerical-solution-of-the-Schrödinger-equation) * [1 Finite difference method](1-Finite-difference-method) * [2 Finite element method](2-Finite-element-method) * [3 Artificial neural network method](3-Artifical-neural-network-method)* [Results, discussion and conclusion](Results,-discussion-and-conclussion) Theory behind... 1 Introduction to Quantum Mechanics One of the greatest successes of modern science surely has to be credited to Sir **Isaac Newton** for his 1687 work titled *Philosophiæ Naturalis Principia Mathematica*. The immense knowledge of physics and mechanics accumulated over the centuries was beautifully summed up in the three Newton laws. Those laws served as the foundation for many discoveries that would occur over the following centuries. Great contributions to electricity and magnetism also came from **Oersted**, **Faraday**, **Ampere**, **Henry** and **Ohm**. But not until **James Clerk Maxwell** created a theoretical synthesis of electricity and magnetism in his 1873 work titled *Treatise on Electricity and Magnetism* was electromagnetism understood as an indivisible phenomenon. The progress in the 19th century was so huge that, to some, it seemed impossible that there was anything fundamentally new left to discover. However, the idea of (electromagnetic) waves propagating through vacuum was a mystery; the black body radiation and the photoelectric effect were also impossible to explain using classical mechanics and Maxwell's classical electromagnetism. **Max Planck** made the great first step towards the quantum hypothesis in 1900. Planck introduced the idea of discretized quantum oscillators that are able to radiate energy only at certain discrete levels in order to explain black body radiation. The radiated energy was described using the frequency at which the source oscillates, multiplied by an extremely small constant, later known as the Planck constant $h$, mathematically formulated in the following expression:$$\begin{equation} B(\nu, T) = \frac{ 2 h \nu^3}{c^2} \frac{1}{e^\frac{h\nu}{k_B T} - 1}\end{equation}$$where $\nu$ is the frequency at which the source oscillates, $T$ is the temperature, $k_B$ is the Boltzmann constant and $h$ is the Planck constant. $B(\nu, T)$ is the intensity of black body radiation, which can also be expressed using wavelength, $\lambda$, instead of frequency, $\nu$:$$\begin{equation} B(\lambda, T) =\frac{2hc^2}{\lambda^5}\frac{1}{ e^{\frac{hc}{\lambda k_B T}} - 1}\end{equation}$$Code for a visual comparison between Planck's interpretation and the classical Rayleigh-Jeans interpretation of black body radiation is in the `black_body_radiation.py` script.
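As a minimal, self-contained sketch of equation (2) for $B(\lambda, T)$ (the constants below are CODATA values for $h$, $c$ and $k_B$, hard-coded here rather than taken from the script), we can evaluate the spectral radiance of a roughly Sun-like black body at a single wavelength:
###Code
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34    # Planck constant [J s]
c = 2.99792458e8      # speed of light [m/s]
k_B = 1.380649e-23    # Boltzmann constant [J/K]

def planck_wavelength(lam, T):
    """Spectral radiance B(lambda, T) from equation (2)."""
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k_B * T)) - 1)

# e.g. green light (500 nm) emitted by a black body at roughly the Sun's surface temperature
print(planck_wavelength(500e-9, 5800))
###Output
_____no_output_____
###Markdown
The exponential term in the denominator is what suppresses the short-wavelength divergence that the classical Rayleigh-Jeans law gets wrong.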
###Code
%matplotlib inline
%run src/scripts/black_body_radiation.py
###Output
_____no_output_____
###Markdown
Young physicist **Albert Einstein** was able to explain the photoelectric effect in 1905 using Planck's hypothesis. With the 1913 **Bohr** atom model it became obvious that the quantum hypothesis reflects a fundamental, low-level law of nature, and modern physics was conceived. Even though the quantum hypothesis was able to explain some of the problems that classical mechanics and physics in general had failed to explain, a lot of mysteries were yet to be unraveled. The wave-particle duality of nature by **Louis de Broglie**, captured through the expression:$$\begin{equation}\lambda = \frac{h}{p}\end{equation}$$where $\lambda$ is the wavelength, $h$ is the Planck constant and $p$ is the momentum of a particle, was the first step into a more general but also more complex way to explain the quantum hypothesis. The rather unusual de Broglie hypothesis, presented in his 1924 PhD thesis, was soon after experimentally confirmed by **Davisson** and **Germer**. The double-slit experiment is a demonstration that light and electrons can display characteristics of both classically defined waves and particles.
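To get a quick numerical feel for equation (3), $\lambda = h/p$, the sketch below (electron mass and Planck constant hard-coded from CODATA, the speed picked arbitrarily) evaluates the de Broglie wavelength of an electron moving at 1% of the speed of light:
###Code
# de Broglie wavelength lambda = h / p for a (non-relativistic) electron
h = 6.62607015e-34        # Planck constant [J s]
m_e = 9.1093837015e-31    # electron rest mass [kg]
v = 0.01 * 2.99792458e8   # an arbitrary speed: 1% of c [m/s]

wavelength = h / (m_e * v)
print("de Broglie wavelength: {:.3e} m".format(wavelength))
###Output
_____no_output_____
###Markdown
The result is a fraction of a nanometre, comparable to the spacing between atoms in a crystal, which is precisely why Davisson and Germer could observe electron diffraction. The next cell reproduces the double-slit intensity pattern.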
###Code
# use `inline` for inline plot or `notebook` for interactive plot w/ sliders
%matplotlib inline
%run src/scripts/double_slit_exp.py
# a - single slit width
# λ - wavelength
# L - screen distance
# d - distance between slits
###Output
_____no_output_____
###Markdown
In 1926, assuming the wave-particle duality of nature, **Erwin Schrödinger** formulates *the wave equation*. The wave equation describes the strenth of the field produced by the particle wave and has its solution as a function of both time and position. This particle wave is also known as the matter wave and is in the perfect accordance to the Davisson-Germer experiment.In the experiment, if the electrons were generated one by one, at first, the locations would seem to hit the sensor screen randomly. However, if the experiment is allowed to run for a very long time, a pattern shown in the previous figure will emerge. It is impossible to predict where each individual electron would strike the sensor screen, but using the Schrödinger equation, one can predict the probability of an electron striking a certain point of the sensor screen. This probability is proportional to the square of the magnitude of the wave function for the observed electron at that point, and is known as the Born rule, which will be described in more details in the following section. It was formulated by German physicist **Max Born**, soon after Schrödinger formulated his equation. Davisson-Germer experiment also showed that if the observer wants to locate the position of an electron at every point, the sensor screen will not detect wave characteristics of a generated electron, rather the pattern of sensed electrons on the screen will be as if the elelctron has particle characteristics only. This occurance is called the wave function collapse and, more generally, it occurs when a wave function, naturally in a superposition of several eigenstates reduces to a single eigenstate due to the interaction with the external world. This interaction is called an *observation*. The problem of the wave function collapse is called the *measurement problem*. The inability to observe such a collapse directly has given rise to different interpretations of quantum mechanics and poses a key set of questions that each interpretation must answer. The oldest and most widely accepted interpretation is already mentioned Born rule, statistical interpretation which gave rise to the Copenhagen interpretation. Controversial Copenhagen philospohy, whose founding fathers are Neils Bohr and **Werner Heisenberg**, states that physical systems generally do not have definite properties prior to being measured, and quantum mechanics can only predict the probability distribution of a given measurement's possible results. Another opposing philosophy, supported by Einstein and Schrödinger, advocated the idea of hidden states and incomplete realization of quantum mechanics. From this, famous Schrödinger cat thought experiment emerged, illustrating the incompleteness of the Copenhagen interpretation. Great explanation of the Schrödinger cat thought experiment and intro to many-world interpretation by Sean Carroll is given [here](https://www.youtube.com/watch?v=kTXTPe3wahc).To take a few steps back, this study is based on, however flawed, still very powerful Copenhagen interpretation, without which most of the technology of the second half of the 20th century would not have been successfully realized. This probabilistic philosophy is based on the Heisenberg uncertainty principle. 
The uncertainty, in its most basic form, states that product of the position uncertainty and the momentum uncertainty of the observed electron at the same time is greater or equal to the reduced Planck constant $h$:$$\begin{equation}\Delta p \cdot \Delta x \geqslant \hbar\end{equation}$$where $\hbar=\frac{h}{2\pi}$.In order to determine precise position of the electron, one has to observe the electron using some electromagnetic-based detection mechanism. A photon with a low wavelength $\lambda_p$, emitted out of the detection mechanism, carries large momentum $p_p$ and the collision with an electron will deliver significant kinetic energy to an electron. Thus, an electron position at the certain time-point will be known with high probability but the initial momentum of an electron is changed and there is no way to know the exact value of the momentum. The principle is also applicable to all canonically conjugate variables (two variables, once multiplied, give the product of the physical dimension of the reduced Planck constant, e.q. time and energy):$$\begin{equation}\Delta E \cdot \Delta t \geqslant \hbar\end{equation}$$The uncertainty principle is an inherent property of the nature and comes from the fact that the matter acts both as waves and as particles. The uncertainty principle does not depend on the precision or tuning of a measuring device or a system. 2 Schrödinger Equation Assuming low radiation frequencies, where corpuscular nature of light is hardly detected, Maxwell's equations, in their differential form, are defined as follows:$$\begin{align}\nabla \times \mathbf{E} &= - \frac{\partial \mathbf{B}}{\partial t} \\\nabla \times \mathbf{B} &= \mu_0 ( \mathbf{J} + \epsilon_0\frac{\partial \mathbf{E}}{\partial t} ) \\\nabla \cdot \mathbf{E} &= \frac{\rho}{\epsilon_0} \\\nabla \cdot \mathbf{B} &= 0\end{align}$$where:* $\mathbf{E}$ is electric vector field;* $\mathbf{B}$ is magnetic pseudovector field;* $\mathbf{J}$ is electric current density;* $\epsilon_0$ is the permittivity of free space and* $\mu_0$ is the permeability of free space.The electromagnetic wave radiation velocity is defined with the following expression:$$\begin{equation}c = \lambda \cdot \nu\end{equation}$$where $\lambda$ is the wavelenght and $\nu$ is the frequency. Equation (10) can be rewritten as:$$\begin{equation}\frac{\nu^2}{c^2} - \frac{1}{\nu^2} = 0 \end{equation}$$for later mathematical convenience.The value of the electric field of the plane electromagnetic wave expanding in $x$ direction is defined as follows:$$\begin{equation}E = E_0 \sin 2\pi \big(\nu t - \frac{x}{\lambda}\big)\end{equation}$$with the associated second partial derivatives:$$\begin{equation}\frac{\partial^2 E}{\partial t^2} = -4\pi^2 \nu^2E; \qquad \frac{\partial^2 E}{\partial x^2} = -\frac{4\pi^2}{\lambda^2} E\end{equation}$$Equation (11), once multiplied by $4\pi^2\mathbf{E}$ and rearanged considering the form of second partial derivatives defined in (13) generates the following expression:$$\begin{equation}\frac{\partial^2 \mathbf{E}}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 \mathbf{E}}{\partial t^2} = 0\end{equation}$$For a free-particle, i.e. an electron, moving rectilinearly, the matter wave takes the form of the plane wave. 
The relativistic relation between the energy and the momentum of a free-particle is defiend using the expression:$$\begin{equation}\frac{E^2}{c^2} = p^2 + m_0^2 c^2\end{equation}$$where $E$ is the relativistic energy of a free-particle, $p$ is the relativistic momentum and $m_o$ is the rest mass.Assuming the wave-particle duality for a free-particle where $E=h\nu$ and $p=\frac{h}{\nu}$, the relativistic relation between the energy and the momentum of a free-particle becomes:$$\begin{equation}\frac{h^2 \nu^2}{c^2} = \frac{h^2}{\lambda^2} + \frac{h^2 \nu_0^2}{c^2} \Rightarrow \frac{\nu^2}{c^2} = \frac{1}{\lambda^2} + \frac{\nu_0^2}{c^2}\end{equation}$$If we assume that the rest mass is 0, (16) becomes (11). The matter wave of an observed free particle is defined as:$$\begin{equation}\Phi = A \cdot e^{-2\pi i(\nu t - x/\lambda)}\end{equation}$$with the associated second partial derivatives:$$\begin{equation}\frac{\partial^2 \Phi}{\partial t^2} = -4\pi^2 \nu^2\Phi; \qquad \frac{\partial^2 \Phi}{\partial x^2} = -\frac{4\pi^2}{\lambda^2} \Phi\end{equation}$$Equation (16), once multiplied by $\Phi$ and rearanged considering the form of second partial derivatives defined in (18) generates the following expression:$$\begin{equation}\frac{\partial^2 \mathbf{\Phi}}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2 \mathbf{\Phi}}{\partial t^2} = \frac{4\pi^2\mu_0^2}{c^2}\mathbf{\Phi}\end{equation}$$or, generalized in the 3-D space:$$\begin{equation}\frac{\partial^2 \mathbf{\Phi}}{\partial x^2} + \frac{\partial^2 \mathbf{\Phi}}{\partial y^2} + \frac{\partial^2 \mathbf{\Phi}}{\partial z^2} - \frac{1}{c^2}\frac{\partial^2 \mathbf{\Phi}}{\partial t^2} = \frac{4\pi^2\mu_0^2}{c^2}\mathbf{\Phi}\end{equation}$$$$\Downarrow$$$$\begin{equation}\Delta\mathbf{\Phi} - \frac{1}{c^2}\frac{\partial^2 \mathbf{\Phi}}{\partial t^2} = \frac{4\pi^2\mu_0^2}{c^2}\mathbf{\Phi}\end{equation}$$thus forming the relativistic version of the matter wave, in the literature often referenced as the **Klein-Gordon** equation. In practice, the difference between $\nu$ and $\nu_0$ is negligible and the non-relativistic version of the Klein-Gordon equation is of greater importance. From here, the famous time-dependant Schrödinger equation for a free-particle arises and is written as:$$\begin{equation}-\frac{\hbar^2}{2m_0} \Delta \mathbf{\Phi} = i \hbar \frac{\partial \mathbf{\Phi}}{\partial t}\end{equation}$$where $m_0$ is the rest mass of a free particle.Separating the spatial, $\psi(x, y, z)$, and temporal variables of the function $\Phi$, time-independent Schrödinger equation is written as:$$\begin{equation}-\frac{\hbar^2}{2m} \Delta \psi = E \psi\end{equation}$$where $E$ is the non-relativistic free-particle energy and the mass of a free particle is the same as the rest mass of a free particle. The non-relativistic energy is defined as:$$\begin{equation}E = \frac{p^2}{2m} + U\end{equation}$$where $U$ is the potential energy. Equation (24) must always be satisfied in order to obey the conservation of energy law.A solution of Schrödinger equation is a wave function, $\psi$, a quantity that described the displacement of the matter wave produced by particles, i.e. electrons. In Copenhagen interpretation, as mentioned in previous section, the square of the absolute value of the wave function results in the probability of finding the particle at given location. 
1-D time-independent Schrödinger equation 1-D time-independent Schrödinger equation for a single non-relativistic particle is obtained from the time-dependant equation by the separation of variables and is written using the following expression:$$\begin{equation}\Big(-\frac{\hbar^2}{2m} \cdot \frac{d^2 \psi(x)}{dx^2} + U(x)\Big) \psi(x) = E \psi(x)\end{equation}$$where, in the language of linear algebra, this equation is the eigenvalue equation. The wave function is an eigenfunction, $\psi(x)$ of the Hamiltonian operator, $\hat H$, with corresponding eigenvalue $E$:$$\begin{equation}\hat H \psi(x) = \Big(-\frac{\hbar^2}{2m} \cdot \frac{d^2}{dx^2} + U(x)\Big) \psi(x)\end{equation}$$The form of the Hamiltonian operator comes from classical mechanics, where the Hamiltonian function is the sum of the kinetic and potential energies.The probability density function of a particle over the x-axis is written using the following expression:$$\begin{equation}\mbox{pdf} = \lvert \psi(x) \lvert^2 \end{equation}$$and it can be shown, that the probability density function in space does not depend on time. From time-dependent Schrödinger equation (22), by letting the $\Phi(x,t) = \psi(x) \cdot \Phi(t)$ and applying the separation of variables, we obtain the following expression for temporal variable:$$\begin{equation}i\hbar \cdot \frac{1}{\Phi(t)} \cdot \frac{\partial \Phi(t)}{\partial t} = E\end{equation}$$The solution of (28) is obtained by integrating both sides:$$\begin{equation}\Phi(t) = e^{-\frac{iEt}{\hbar}}\end{equation}$$Finally the 1-D time-dependant wave function is written as:$$\begin{equation}\Phi(x,t) = \psi(x) e^{-\frac{iEt}{\hbar}}\end{equation}$$Following the Copenhagen interpretation, the square of the 1-D time-dependant wave function resolves the probability density of a single particle to be found at the certain location on the $x$-axis at a specific time $t$. This results in the following:$$\begin{equation}\lvert \Phi(x,t) \lvert^2 = \lvert \psi(x) \lvert^2 \cdot \lvert e^{-\frac{iEt}{\hbar}} \lvert^2 = \lvert \psi(x) \lvert^2\end{equation}$$From this, it is obvious that the probability density function in space does not depend on time, thus the rest of the seminar will be concentrated on the non-linear Schrödinger equation for non-relativitic particles.The probability of finding the observed particle anywhere in the domain $x \subseteq \Omega$ is 1 since the particle has to be somewhere. The actual position is not known, rather one can describe the uncertainty of finding the particle for position in $[x, x+\mbox{d}x]$. The sum of probabilities over the entire solution domain should be equal to one:$$\begin{equation}\int_{x \subseteq \Omega} \lvert \psi(x) \lvert^2 \mbox{d}x = 1\end{equation}$$or, more explicitly:$$\begin{equation}\int_{x \subseteq \Omega} \psi(x) \Psi^{*}(x) \mbox{d}x = 1\end{equation}$$ Wave function constraints In order to represent a physically observable system in a measurable and meaningful fashion, the wave function must satisfy set of constraints:1. the wave function must be a solution to Schrödinger equation;2. the wave function and the first derivative of the wave function, $\frac{d \Psi(x)}{dx}$ must be continuous3. 
the wave function must be normalizable - the 1-D wave function approaches to zero as $x$ approaches to infinity; 3 The Use of Wave Equation Electron in Infinite Potential Well Consider an electron of mass $m$ confined to 1-D rigid box of width L.The potential of the infinite potential well is defined as follows:$$\begin{align} U(x) &= 0, \qquad &0 < x < L\\ U(x) &= +\infty, \qquad &x \leq 0, x \geq L\end{align}$$The electron can move along $x$-axis and the collisions with the walls are perfectly elastic. The described situation is known as an inifinitely deep square well potential. In order to determine the motion of the observed electron as it travels along the $x$-axis, we must determine the wave function using Schrödinger equation.Boundary conditions are pre-defined in (34) and (35). For $x=0$ and $x=L$, the potential energy is defined as $U(0) = U(L) = +\infty$, thus the wave function at boundaries is 0, $\psi(0) = \psi(L) = 0$, otherwise the product of $U(x) \cdot \psi(x)$ in Schrödinger equation would be phisically infeasible. Inside the potential well, the potential energy is equal to 0, which means that the Schrödinger equation becomes:$$\begin{equation} E \cdot \psi(x) = - \frac{\hbar^2}{2m} \cdot \frac{d^2 \psi(x)}{dx^2}\end{equation}$$and the wave function is written as:$$\begin{equation} \psi(x) = A \sin(kx) + B \cos(kx)\end{equation}$$In order to discover the unknown parameters $A$ and $B$, the boundary condtions have to be taken into consideration. If $x=0$, $\Psi(x=0) = 0 \qquad \Rightarrow \qquad \psi(0) = A\sin(0) + B\cos(0) = 0$, and$$\begin{equation} \boxed{B=0}\end{equation}$$---Since $B=0$ the wave function takes the following form of $\psi(x) = A\sin(kx)$.---If $x=L$, $\psi(x=L) = 0 \qquad \Rightarrow \qquad \psi(L) = A\sin(kL) = 0$.Since the electron is assumed to exists somewhere in the solution domain $ x \in (0, L)$, the term $\sin(kL)$ has to be equal to 0. This is true for $kL = 0, \pi, 2\pi...$ or, more generally, $kL=n\pi$ where $n=1,2,3,...$. From here, $k$ can be written as:$$\begin{equation} k = \frac{n\pi}{L}\end{equation}$$and, since the $k$ is known to be:$$\begin{equation} k = \sqrt{\frac{2mE}{\hbar^2}}\end{equation}$$Combining the expressions (39) and (40), the following expression occurs:$$\begin{equation} \frac{n\pi}{L} = \sqrt{\frac{2mE}{\hbar^2}}\end{equation}$$From (41), the non-relativistic energy of an electron is defined as:$$\begin{equation} E = \frac{n^{2}h^{2}}{8 m L^2}\end{equation}$$where, $n=1,2,3,...$ and $\hbar=h/2\pi$.Energy discretion is a direct consequence of boundary conditions and the initial assumption that the potential energy of an electron within the area is equal to zero. The positive integer $n$ is a principal quantum number. Quantum numbers describe values of conserved quantities in the dynamics of a quantum system, in this case the quantum system in an electron trapped inside of a infinite potential well. The quantum number $n$ represent principal quantum numbers, which give acceptable solutions to the Schrödinger equation. The standard model of a quantum system is described using four quantum numbers: principal quantum number $n$, azimuthal quantum number $l$, magentic quantum number $m_l$ and spin quantum number $m_s$. The first three quantum numbers are derived from Schrödinger equation, while the fourth comes from applying special relativity. 
The principal quantum number describes the electron shell - energy level of an electron, and it ranges from 1 to the shell containing the outermost electron of the specific atom, thus is of the greatest importance for the general solution of time-independent Schrödinger equation. Since the principal quantum number $n$ is natural number, the energy level of an electron is a discrete variable. The zero point energy represents the ground state of an electron or the least amount of energy that electron can contain, where $n=1$.A wave function has to obey [the set of constraints](Wave-function-constraints) defined in the previous section:$$\begin{equation} \int_{-\infty}^{+\infty}\lvert \psi \lvert^2 dx = \int_{0}^{L} A^2 \sin^2 \big(\frac{n \pi}{L} \cdot x \big)dx = 1\end{equation}$$From (43):$$\begin{equation} \frac{A^2 L}{2} = 1 \qquad \Rightarrow \qquad \boxed{A=\sqrt{\frac{2}{L}}}\end{equation}$$---Finally, the wave function describing the motion of the observed electron with potential energy $U(x)=0$ and with mass $m$ moving along the the $x$-axis trapped inside the infinite potential well, considering (38) and (44), is given using the following expression:$$\begin{equation} \psi_n(x) = \sqrt{\frac{2}{L}}\sin\big( \frac{n \pi}{L} \cdot x \big)\end{equation}$$where $n$ is the principal quantum number and $L$ is the width of the rigid box. The term $\sqrt{\frac{2}{L}}$ represents the amplitude of the wave function. In order to obtain the probability density of the electron in three different energy states for $n=1, 2 \mbox{ and } 3$, trapped inside the infinite potential well of width $L=1 \mbox{ nm}$, the square of the wave function has to be observed:
###Code
%matplotlib inline
%run src/scripts/infinite_potential_well.py
###Output
_____no_output_____
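###Markdown
The plot shows where the electron is likely to be found; the energy expression $E_n = n^2h^2/(8mL^2)$ derived above in equation (42) tells us how much energy each of those states carries. A quick sketch (constants hard-coded, the same $L = 1 \mbox{ nm}$ well as above):
###Code
# Energy levels E_n = n^2 h^2 / (8 m L^2) for an electron in a 1 nm infinite well
h = 6.62607015e-34        # Planck constant [J s]
m_e = 9.1093837015e-31    # electron mass [kg]
L = 1e-9                  # well width [m]
eV = 1.602176634e-19      # joules per electronvolt

for n in (1, 2, 3):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print("n = {}: E = {:.3f} eV".format(n, E_n / eV))
###Output
_____no_output_____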
###Markdown
The highest point of each probability density function plotted in previous figure represents the most likely placement of an electron at any given moment in time. Boundary conditions are defined *a priori* and are integrated in the analytical solution given in (45), thus the probability of an electron placement at locations $x=\{0, L\}$ is 0. It is also important to notice that the area under each of the probability density curves shown above, is equal to 1. In order to be measurable and physically meaningful, a wave function must be normalized, mathematically formulated in (43). The total probability of finding the electron inside the $x \in \langle 0, L \rangle$ region is obtained by integrating the probability density function over $x$ and is always equal to 1. If the $L$ increases, the probability density peak will decrease and the probability density function curve will, consequently, be flatten. Electron in Finite Potential WellConsider an electron of mass $m$ confined to 1-D finite potential well of width L and potential barriers at energy level of $U_0$.The potential of the finite potential well is defined as follows:$$\begin{align} U(x) &= 0, &\qquad \lvert x \lvert > L/2\\ U(x) &= U_0, &\qquad \lvert x \lvert \leq L\end{align}$$In order to determine the motion of the observed electron as it travels along the $x$-axis, we must determine the wave function using Schrödinger equation for three different regions. Region II: $x \in \langle -L/2, L/2 \rangle$If particle is in region II, then the $U(x)=0$ due to (46). The Schrödinger equation is defined as follows:$$\begin{equation} E \cdot \psi (x) = -\frac{\hbar^2}{2m} \cdot \frac{d^2 \psi(x)}{dx^2}\end{equation}$$From (48) the wave function is defined as:$$\begin{equation} \psi(x) = A \sin(kx) + B \sin(kx)\end{equation}$$where $k = \frac{n\pi}{L}$. Region I and III: $x \in \langle-\infty, -L/2] \cup [L/2, +\infty \rangle$If particle is in region I or III, then the $U(x) = U_0$ due to (47). The Schrödinger equation is defined as follows:$$\begin{equation} E \cdot \psi(x) = U_0 \cdot \psi(x) -\frac{\hbar^2}{2m} \cdot \frac{d^2 \psi(x)}{dx^2}\end{equation}$$After rearanging (50), the Schrödinger equation becomes:$$\begin{equation} \frac{d^2 \psi(x)}{dx^2} - \Big ( \frac{2m}{\hbar}(E - U_0) \Big ) \psi(x) = 0\end{equation}$$If the term $\frac{2m}{\hbar}(E - U_0)$ is written as $\alpha^2$, the equation takes simplified form of:$$\begin{equation} \frac{d^2 \psi(x)}{dx^2} - \alpha^2 \psi(x) = 0\end{equation}$$and the solution of the equation is:$$\begin{equation} \psi(x) = C \mbox{e}^{\alpha x} + D \mbox{e}^{-\alpha x}\end{equation}$$For the region I, if $x \rightarrow -\infty$, the term $\mbox{e}^{-\alpha x} \rightarrow \infty$ and $\psi(x) \rightarrow \infty$, which is physically infeasible. In order to avoid this, the term $D$ is set to be 0 for region I. The wave function for region I is defined as:$$\begin{equation} \psi(x) = C \mbox{e}^{\alpha x}\end{equation}$$the wave function of the region III is defined analogously as:$$\begin{equation} \psi(x) = D \mbox{e}^{-\alpha x}\end{equation}$$The wave function must obey [the set of constraints](Wave-function-constraints) defined in the previous section, that is, the wave function must be continuous throughout $x$-axis in order to be physically feasible. 
The total wave function, composed of the wave functions for the three different regions defined in (49), (54) and (55), must satisfy the following conditions at the boundaries to achieve continuity:$$\begin{align}    \psi_1(x) = \psi_2(x), \quad \frac{d\psi_1(x)}{dx}=\frac{d\psi_2(x)}{dx} \qquad &\mbox{for } x=-\frac{L}{2};\\    \psi_2(x) = \psi_3(x), \quad \frac{d\psi_2(x)}{dx}=\frac{d\psi_3(x)}{dx} \qquad &\mbox{for } x=\frac{L}{2}.\end{align}$$where $\psi_1(x)$, $\psi_2(x)$ and $\psi_3(x)$ are the wave functions for regions I, II and III, respectively.Even though an analytical solution for the previously described situation exists, it is not going to be derived in this notebook; rather, a high-level intuition of the problem will be presented. According to classical mechanics, if $U_0 > E$, an electron cannot be found in regions I and III. By solving the Schrödinger equation within the framework of quantum mechanics, it is shown that there is some probability for an electron to penetrate the potential barriers and to be found outside of region II, even if the potential energy $U_0$ is greater than the total energy of the electron. The possibility of positioning an electron outside the potential well makes the energy conservation law momentarily invalid. This, however, can be explained via the Heisenberg uncertainty principle defined in (5). The uncertainty principle enables an electron to exist outside of the potential well for an amount of time $\Delta t$ during which a certain amount of energy uncertainty also co-exists.
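Although the matching is not carried out analytically here, its flavour can be shown numerically: for the even-parity states, the conditions above reduce to $k\tan(kL/2)=\alpha$ with $k=\sqrt{2mE}/\hbar$, and the roots of that relation are the bound-state energies. Below is a minimal sketch (the well width and depth are assumed purely for illustration; the equivalent continuous form $k\sin(kL/2)-\alpha\cos(kL/2)=0$ is used to avoid the asymptotes of the tangent):

```
import numpy as np
from scipy.optimize import brentq

hbar = 1.054571817e-34   # reduced Planck constant [J s]
m_e = 9.1093837015e-31   # electron mass [kg]
eV = 1.602176634e-19     # 1 eV in joules

L = 1e-9                 # assumed well width: 1 nm
U0 = 5.0 * eV            # assumed well depth: 5 eV

def even_condition(E):
    """Zero at an even-parity bound-state energy of the finite well."""
    k = np.sqrt(2 * m_e * E) / hbar
    alpha = np.sqrt(2 * m_e * (U0 - E)) / hbar
    return k * np.sin(k * L / 2) - alpha * np.cos(k * L / 2)

grid = np.linspace(1e-3 * eV, 0.999 * U0, 2000)
vals = np.array([even_condition(E) for E in grid])
roots = [brentq(even_condition, a, b)
         for a, b, va, vb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
         if va * vb < 0]
print([round(E / eV, 3) for E in roots])   # even-parity bound-state energies in eV
```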
###Code
%matplotlib inline
%run -i src/scripts/finite_potential_well.py
###Output
_____no_output_____
###Markdown
Quantum TunnelingAs shown in the previous section, an electron is able to penetrate a potential barrier and enter a region in a way that is not explainable through the lens of classical mechanics. According to quantum mechanics, there is some non-zero probability that the particle will end up passing through a potential barrier even though the potential energy level of the barrier is higher than the total non-relativistic energy of the electron. The wave function of an electron, once it enters the barrier region, will start to decay exponentially. Before the amplitude of the wave function drops to zero, the wave function resurfaces and continues to propagate with a smaller amplitude on the other side of the barrier. Since there is an existing wave function on the other side of the barrier, one can reason that an electron is able to penetrate the barrier even though its kinetic energy is less than the potential energy of the barrier. This phenomenon is known as quantum tunneling. From here, two important coefficients can be calculated:**Transmission coefficient** ($T$) - the fraction of the transmitted wave function of an electron$$\begin{equation}    T \approx \mbox{e}^{-2\alpha L}\end{equation}$$where $L$ is the width of the potential barrier and $\alpha$ is formulated as follows:$$\begin{equation}    \alpha = \frac{\sqrt{2m(U_0 - E)}}{\hbar}\end{equation}$$where $m$ is the mass of an electron, $U_0$ is the potential energy level of the barrier, $E$ is the kinetic energy of the electron and $\hbar$ is the reduced Planck constant. **Reflection coefficient** ($R$) - the fraction of the reflected wave function of an electron:$$\begin{equation}    R = 1 - T\end{equation}$$The sum of the transmission and reflection coefficients is always 1. The following cell is the output of a Python solver for the 1-D Schrödinger equation beautifully explained in [this blog post](http://jakevdp.github.io/blog/2012/09/05/quantum-python/). The code is minimally modified to satisfy the current version of Python and some dependencies. All rights go to [Jake Vanderplas](http://vanderplas.com/).
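As a quick numerical illustration of the transmission coefficient above (separate from the full time-dependent solver that follows), here is a minimal sketch; the electron energy, barrier height and barrier widths are assumed purely for illustration:

```
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant [J s]
m_e = 9.1093837015e-31   # electron mass [kg]
eV = 1.602176634e-19     # 1 eV in joules

E = 1.0 * eV             # assumed electron kinetic energy
U0 = 5.0 * eV            # assumed barrier height
alpha = np.sqrt(2 * m_e * (U0 - E)) / hbar

for L in (0.1e-9, 0.5e-9, 1.0e-9):      # barrier widths in metres
    T = np.exp(-2 * alpha * L)          # approximate transmission coefficient
    print(f"L = {L * 1e9:.1f} nm  ->  T ~ {T:.3e},  R ~ {1 - T:.6f}")
```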
###Code
# use `inline` for inline plot or `notebook` for interactive plot
%matplotlib notebook
%run -i src/scripts/quantum_tunneling.py
###Output
_____no_output_____
###Markdown
Numerical solution of the Schrödinger equation 1 Finite difference method The wavefunction, $\psi(x)$, over the $x$-axis is defined as an eigenvector of the Hamiltonian operator as follows:$$\begin{equation}    \hat{H} \psi = E \psi\end{equation}$$where $\hat{H}$ is the Hamiltonian operator and $E$ represents the eigenvalues (energies) of an electron with the wave function $\psi$. This problem should be solved as an eigenproblem, otherwise the solution for $\psi$ will be trivial. The Hamiltonian operator is then defined as an $N \times N$ matrix, where $N$ is the chosen number of grid points.
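A minimal sketch of this discretization (assuming an infinite well on $[0, L]$, so the potential term vanishes and the boundary points are dropped; the script invoked below may differ in its details):

```
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant [J s]
m_e = 9.1093837015e-31   # electron mass [kg]
L = 1e-9                 # assumed well width: 1 nm
N = 200                  # number of interior grid points
dx = L / (N + 1)

# Second-order central difference approximation of d^2/dx^2 on the interior points
laplacian = (np.diag(-2.0 * np.ones(N)) +
             np.diag(np.ones(N - 1), 1) +
             np.diag(np.ones(N - 1), -1)) / dx**2

H = -hbar**2 / (2 * m_e) * laplacian     # Hamiltonian matrix (U = 0 inside the well)
energies, states = np.linalg.eigh(H)     # eigenvalues returned in ascending order

print(energies[:3] / 1.602176634e-19)    # first three energies in eV
```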
###Code
%matplotlib inline
%run src/fdm_infinite_potential_well.py -n 50
###Output
_____no_output_____
###Markdown
2 Finite element method Applying the weighted residual approach to the Schrödinger equation (25) with $U(x)=0$, the following term is obtained:$$\begin{equation}    -\int_{0}^{L} \frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2}W_{j} dx = \int_{0}^{L} E \psi W_{j} dx \end{equation}$$where $W_{j}$ are test functions for $j=1, 2, 3, ..., n$, and $n$ stands for the number of approximated solutions. From here, the weak formulation is utilized as follows:$$\begin{equation}    \int_{0}^{L} \frac{d\psi}{dx}\frac{dW_{j}}{dx} dx = \frac{2m}{\hbar^2}E \int_{0}^{L} \psi W_{j} dx + \frac{2m}{\hbar^2} \frac{d\psi}{dx} W_j \Big \lvert_{0}^L \end{equation}$$The approximate solution is formed as a linear combination of coefficients $\alpha_i$ and shape functions $N_i$. In the Galerkin-Bubnov framework the test functions are the same as the shape functions, $W_j = N_j$, and the vectorized Schrödinger equation is defined as follows:$$\begin{equation}    [A]\{\alpha\} = [B]\{\alpha\}[E]\end{equation}$$where:* A is the left-hand-side matrix, $n \times n$ shaped, defined as:$$\begin{equation}    A_{ji} = \int_{x_1}^{x_2} \frac{dN_i(x)}{dx}\frac{dN_{j}}{dx} dx\end{equation}$$ * B is the right-hand-side matrix, $n \times n$ shaped, defined as:$$\begin{equation}    B_{ji} = \frac{2m}{\hbar^2} \int_{x_1}^{x_2} N_i(x) N_{j}(x) dx\end{equation}$$* E is a diagonal matrix representing the eigenvalues for different principal quantum numbers, $1$ to $N$.
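Once the element matrices are assembled, the relation $[A]\{\alpha\} = [B]\{\alpha\}[E]$ above is a generalized eigenvalue problem. A minimal sketch of the assembly and solve for linear 1-D elements on a unit-length well in natural units (the factor $2m/\hbar^2$ is folded into the eigenvalues); the actual script below may assemble things differently:

```
import numpy as np
from scipy.linalg import eigh

n_el = 50                                  # assumed number of linear elements on [0, 1]
h = 1.0 / n_el
n_nodes = n_el + 1

A = np.zeros((n_nodes, n_nodes))           # left-hand-side ("stiffness") matrix
B = np.zeros((n_nodes, n_nodes))           # right-hand-side ("mass") matrix

k_el = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h       # element integral of dNi/dx * dNj/dx
m_el = np.array([[2.0, 1.0], [1.0, 2.0]]) * h / 6.0   # element integral of Ni * Nj

for e in range(n_el):
    idx = [e, e + 1]
    A[np.ix_(idx, idx)] += k_el
    B[np.ix_(idx, idx)] += m_el

# Impose psi(0) = psi(1) = 0 by removing the boundary nodes
A, B = A[1:-1, 1:-1], B[1:-1, 1:-1]

eigvals, eigvecs = eigh(A, B)              # generalized eigenproblem A v = lambda B v
print(eigvals[:3])                         # ~ (n * pi)**2 for n = 1, 2, 3
```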
###Code
%matplotlib inline
%run src/fem_infinite_potential_well.py -n 49
###Output
_____no_output_____
###Markdown
3 Artificial neural network method [The technique proposed by Lagaris et al.](https://pubmed.ncbi.nlm.nih.gov/18255782/) is extended and adjusted to modern machine learning practices in order to acquire the wave function in the particle-in-a-box quantum model. Firstly, the wave function is approximated using the following expression:$$\begin{equation}    \hat \psi(x) = B(x) NN(x; \theta) \end{equation}$$where $B(x)$ is any function that satisfies the boundary conditions, in this case $B(x) = x(1-x)$, and $NN(x; \theta)$ is the neural network that takes the positional coordinate $x$ as the input argument and is parameterized by the weights and biases, $\theta$. The collocation method is applied in order to discretize the solution domain, $\Omega$, into the set of points for training, $x_i \in \Omega$ for $i \in [1, N]$, where $N$ is the total number of collocation points. The problem is transformed into an optimization problem with the following loss function:$$\begin{equation}    J(x, \theta) = \frac{\sum_i [H \hat \psi(x_i; \theta) - E \hat \psi(x_i; \theta)]^2}{\int_\Omega \lvert \hat \psi \rvert^2 dx} + \lvert \psi(0) - \hat \psi(0) \rvert^2 + \lvert \psi(L) - \hat \psi(L) \rvert^2 \end{equation}$$where$$\begin{equation}    E = \frac{\int_\Omega \hat \psi^* H \hat \psi dx}{\int_\Omega \lvert \hat \psi \rvert^2 dx}\end{equation}$$Just like in the original work, the neural network is a single-hidden-layer multi-layer perceptron (MLP) with 40 hidden units in total. Every unit is activated using the hyperbolic tangent (*tanh*) activation function. Instead of manually applying symbolic derivatives, automatic differentiation is applied. Once the derivative of the loss function with respect to the network parameters has been acquired, the minimization can be performed. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm, an iterative method for solving unconstrained nonlinear optimization problems, is applied. Results, discussion and conclusion
###Code
%matplotlib inline
%run src/compare.py -n 10
%matplotlib inline
%run -i src/compare.py -n 50
%matplotlib inline
%run -i src/compare.py -n 100
###Output
Neural Schroedinger Solver
-------------------------------------
Boundary condtions: (0.0, 0.0)
Neural architecture: [1, 40, 1]
Number of training points: 100
-------------------------------------
Iteration: 0 Loss: [0.29798196]
Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.000000
Iterations: 7
Function evaluations: 51
Gradient evaluations: 39
Training time: 5.1269 s
|
Tutorial 1/Lab_-1_Python_Lab_Notebook.ipynb | ###Markdown
Lab -1 How to Use Python and Colab for Lab Notebooks **Name: Tim Child****Partner: Ali Tully** Table of contents*Note: The navigation bar on the left side of colab has a table of contents that you can use as a reference when creating your own.** Objective* Introduction* Organization * Use of Sections* Images * Images Intro * Student Work* Mathematical Equations * LaTeX * Using Images * Student work* Data * Data taking * Loading data from Scope * Plotting Data * Student Work* Prelab for Lab 1* Conclusion Objective In this section you describe in detail what you are going to do in the lab, with goals...(which you know because you read the lab before, right?) Introduction Some words about background and theory; include any new principles and equations you'll be using. If you don't know where to include the sketch of the circuit you're going to build, include it here. Organization 1. Properly formatted lab notebooks are much easier to read and follow (for your future self and for the TA marker)2. So do use bullet lists and numbered lists3. Each major section should use the '' format4. Each minor section should use the '' format5. Consider numbering your major sections; it makes it easier to refer back to them. Use of Sections For each thought or concept, use a section break and add a new entry. Images Images Intro Although there are many ways to add an image such that it shows up in the notebook view, I have only found **one** way to make images show up in the exported pdf file. To insert an image:1. Resize the image to ~300px wide before uploading it to your Google Drive (can be done in MS Paint for example)2. Run `from IPython.display import Image` somewhere in your notebook (probably the very top) 3. Then in a code cell run the line `Image(filename="")` *Note: Although you can specify width and height in the `Image` function, it will not work when it is exported to PDF*.
###Code
from IPython.display import Image
from google.colab import drive
drive.mount('/content/drive')
import os
ddir = 'drive/My Drive/ENPHYS_259/Intro'
os.listdir(ddir)
Image(filename=os.path.join(ddir, 'cat_300.jpg')) # This path needs changing
###Output
_____no_output_____
###Markdown
*Figure 1: Safety Cat* Figure 2: Experimental setup for interferometer You should reference the figures in your entries. For example, Fig. 1 shows a good cat. DO NOT include a figure and then not discuss it...otherwise what is the point? Figure 2 shows an example of an actual experimental setup.
###Code
###Output
_____no_output_____
###Markdown
Mathematical Equations There are two ways to put equations in: LaTeX The first is a widely used typesetting language called LaTeX (or TeX), which is well suited to rendering mathematical expressions. You will use it in later classes (ENPH 257, PHYS352 among others) so it will pay off to learn (and use) it now. To insert an expression, wrap the LaTeX-formatted equation in either `$` signs for inline equations, or `$$` for full-line equations. e.g. If you enter`$$\alpha^2+\beta^2=c^2$$`This is the result: $$\alpha^2+\beta^2=c^2$$Single `$` signs allow you to put equations in the $\alpha^2+\beta^2=c^2$ middle of sentences.Go to https://www.codecogs.com/latex/eqneditor.php or https://www.latex4technics.com/ for help with writing LaTeX equations.Here is an example of a more complicated formula,`$$P_\lambda = \frac{2 \pi h c^2}{\lambda^5 \left(e^{\left(\frac{h c}{\lambda k T}\right)} - 1\right)}$$`$$P_\lambda = \frac{2 \pi h c^2}{\lambda^5 \left(e^{\left(\frac{h c}{\lambda k T}\right)} - 1\right)}$$ Using Images If there is a long derivation, it is often easier to write it out and take a picture as shown in Fig. 3. Fig 3: Derivation of uncertainty propagation Fig 4: Bad picture of the derivation of uncertainty. There is no way to read it Student Work Enter the Taylor expansion of sin(x)
###Code
###Output
_____no_output_____
###Markdown
DataPlotting and analyzing data is one of the many things Python is well suited for. Data Taking * Say you are going to record two sets of data.* First switch to a code section and type

```
xdata = [1, 2, 3, 4, 5, 6, 7, 8]                     # this is a list with 8 data points
ydata = [0.8, 3.8, 8.9, 17, 26, 35.8, 49.1, 63.3]    # this is also a list with 8 data points
```

* The `#` is the delimiting character for comments* Lists are entered with `[]`It's often useful to work with arrays, which can be multi-dimensional. For this we can use the `numpy` package in Python. To use a numpy array, the `numpy` package must first be imported into the script using an `import` statement. It is common to import numpy like this: `import numpy as np`. The `as np` part just makes it quicker to type. Any import statements only need to be run once, but it doesn't hurt to run them multiple times. (Note: We usually import whatever packages we'll run at the top of our file.)e.g.

```
import numpy as np
xdata = np.array([1, 2, 3, 4, 5, 6, 7, 8])
```

to create a numpy array from scratch, or

```
ydata = np.array(ydata)
```

to convert ydata from the `list` we entered above to a numpy `ndarray`. Loading Data from Scope1. Copy the files to the same folder as the `.ipynb` file you want to work with them in. This is your directory (e.g. `Python_Introduction` or `Lab1`).2. Make sure your drive is mounted (i.e. give colab access to your google drive): * Open the help menu (`ctrl+M+H`). * Scroll down and find the option to "Mount Drive." * Create the following keyboard shortcut for it: `ctrl+D` * Press `ctrl+D` and follow the directions to mount your drive.3. Set your data directory (see example below):
###Code
dir_path = 'ENPH259/Python_Introduction' # change this to your data directory path - e.g. ENPH259/Lab1
ddir = f'/content/drive/My Drive/{dir_path}'
###Output
_____no_output_____
###Markdown
4. List the contents of your directory to ensure your file is there:
###Code
print(os.listdir(ddir))
###Output
['Lab_-1_Python_Lab_Notebook.ipynb', 'Python_Analysis_Guide.ipynb', 'scatter.xlsx', 'sine1khz.csv', 'sine1khz.txt', 'square1khz.txt', 'Colab Instructions.docx', 'Export.ipynb', 'cat_300.jpg']
###Markdown
5. Load your data (example is a txt file, but you can do this with csv, xlsx...):
###Code
import pandas as pd  # pandas is needed for read_csv below

data_path = os.path.join(ddir, 'square1khz.txt')
df_square = pd.read_csv(data_path, header=31, sep='\t') # 31 comment lines at the top, and the values are tab (\t) separated
df_square # df stands for dataframe
x = df_square['Time (s)']
y = df_square['Channel 2 (V)']
df_square
###Output
_____no_output_____
###Markdown
Plotting DataTo get started with plotting, we need to first `import` a useful library for plotting. There are many plotting packages available for Python; one of the most common is Matplotlib, which, as the name suggests, is very similar to MATLAB plotting. Specifically, we want the `matplotlib.pyplot` package. To make it easier to use, we will import it `as plt` so that we only have to type `plt.` to use it. If you didn't see the import packages cell at the top of this tutorial, go back and take a look at it. Run the cell. You only need to import packages once per file, but we've included the code cell below as an example of the import syntax. You'll notice this is the same syntax we used at the beginning.
###Code
import matplotlib.pyplot as plt
import numpy as np # Importing numpy here so I can quickly make arrays of data
x = np.linspace(0, 10, 100) # Using numpy to make a quick array of x coords from 0 to 10 with 100 points
y = np.sin(x)
fig, ax = plt.subplots(1,1) # Making a figure with 1 rows and 1 columns of axes to plot on (i.e. a figure with one plot)
ax.plot(x, y, label='sine wave'); # plotting the data onto the axes. (the label is for the legend we add next)
###Output
_____no_output_____
###Markdown
All plots **MUST** have axis labels, titles, and legends. You can add these with some very simple commands
###Code
# I can carry on using the figure and axes we made previously by using their handles (fig and ax)
ax.set_title('Title of Axes')
ax.set_xlabel('The X label')
ax.set_ylabel('The Y label')
ax.legend() # Turns the legend on and uses the labels that were given when the data was plotted
fig # This makes the figure show up again after this cell even though we didn't create it here
###Output
_____no_output_____
###Markdown
Student Work * Make two arrays that are 10 elements long* Plot one against the other (add all the necessary extra information) Prelab for Lab 1For the second prelab question we are asked to plot the frequency response of an RC circuit for Lab 1* you can generate a linear or log space array using:`xlin = np.linspace(1, 100, 1000)` defines a vector xlin with 1000 linearly spaced points from 1 to 100 `xlog = np.logspace(0, 2, 1000)` defines a vector xlog with 1000 logarithmically spaced points from 1 to 100 (note that `np.logspace` takes powers of 10 as its start and stop arguments)* Now the frequency response of the voltage across the capacitor, normalized to the input voltage of an RC circuit, is the standard low-pass result: $\left|\frac{V_C}{V_{in}}\right| = \frac{1}{\sqrt{1+(2\pi f R C)^2}}$ (a short plotting sketch is given after the conclusion note below). ConclusionAlways include a conclusion!
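As referenced in the prelab section above, here is a minimal sketch of the requested frequency-response plot; the values of R and C are assumed purely for illustration:

```
import numpy as np
import matplotlib.pyplot as plt

R = 10e3             # assumed resistance: 10 kOhm
C = 100e-9           # assumed capacitance: 100 nF

f = np.logspace(1, 5, 1000)                            # frequencies from 10 Hz to 100 kHz
gain = 1 / np.sqrt(1 + (2 * np.pi * f * R * C)**2)     # |Vc / Vin| of the RC low-pass

fig, ax = plt.subplots(1, 1)
ax.semilogx(f, gain, label='|Vc/Vin|')
ax.set_title('RC low-pass frequency response')
ax.set_xlabel('Frequency (Hz)')
ax.set_ylabel('Normalized amplitude')
ax.legend()
```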
###Code
###Output
_____no_output_____ |
dmu1/dmu1_ml_XMM-LSS/1.18_VISTA-VIKING.ipynb | ###Markdown
XMM-LSS master catalogue Preparation of VIKING dataVISTA telescope/VIKING catalogue: the catalogue comes from `dmu0_VISTA-VIKING`.In the catalogue, we keep:- The identifier (it's unique in the catalogue);- The position;- The stellarity;- The aperture magnitude for each band;- The Petrosian magnitude, used as total magnitude (no “auto” magnitude is provided). These are Vega magnitudes and must be corrected.We don't know when the maps have been observed. We will use the year of the reference paper.
###Code
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, mag_to_flux
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "viking_ra"
DEC_COL = "viking_dec"
###Output
_____no_output_____
###Markdown
I - Column selection
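The code below converts the Vega magnitudes to AB and then to fluxes in µJy. `mag_to_flux` comes from `herschelhelp_internal` and is assumed here to implement the standard AB-magnitude relation (with the returned fluxes then scaled to µJy); a stand-alone sketch of the equivalent conversion, folding that scaling in, could look like this:

```
import numpy as np

def ab_mag_to_flux_ujy(mag, mag_err):
    """Convert AB magnitudes to fluxes in microjanskys (AB zero point of 3631 Jy)."""
    flux = 10 ** ((23.9 - mag) / 2.5)               # uJy
    flux_err = flux * mag_err * np.log(10) / 2.5    # first-order error propagation
    return flux, flux_err

print(ab_mag_to_flux_ujy(20.0, 0.1))                # roughly (36.3, 3.3) uJy
```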
###Code
# Bands: Z,Y,J,H,K
imported_columns = OrderedDict({
'SOURCEID': "viking_id",
'ra': "viking_ra",
'dec': "viking_dec",
'PSTAR': "viking_stellarity",
'ZPETROMAG': "m_viking_z",
'ZPETROMAGERR': "merr_viking_z",
'ZAPERMAG3': "m_ap_viking_z",
'ZAPERMAG3ERR': "merr_ap_viking_z",
'YPETROMAG': "m_viking_y",
'YPETROMAGERR': "merr_viking_y",
'YAPERMAG3': "m_ap_viking_y",
'YAPERMAG3ERR': "merr_ap_viking_y",
'JPETROMAG': "m_viking_j",
'JPETROMAGERR': "merr_viking_j",
'JAPERMAG3': "m_ap_viking_j",
'JAPERMAG3ERR': "merr_ap_viking_j",
'HPETROMAG': "m_viking_h",
'HPETROMAGERR': "merr_viking_h",
'HAPERMAG3': "m_ap_viking_h",
'HAPERMAG3ERR': "merr_ap_viking_h",
'KSPETROMAG': "m_viking_k",
'KSPETROMAGERR': "merr_viking_k",
'KSAPERMAG3': "m_ap_viking_k",
'KSAPERMAG3ERR': "merr_ap_viking_k",
})
catalogue = Table.read("../../dmu0/dmu0_VISTA-VIKING/data/VIKING_XMM-LSS.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2011
# Clean table metadata
catalogue.meta = None
# Conversion from Vega magnitudes to AB is done using values from
# http://casu.ast.cam.ac.uk/surveys-projects/vista/technical/filter-set
vega_to_ab = {
"z": 0.521,
"y": 0.618,
"j": 0.937,
"h": 1.384,
"k": 1.839
}
# Coverting from Vega to AB and adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('m_'):
errcol = "merr{}".format(col[1:])
# Some object have a magnitude to 0, we suppose this means missing value
catalogue[col][catalogue[col] <= 0] = np.nan
catalogue[errcol][catalogue[errcol] <= 0] = np.nan
# Convert magnitude from Vega to AB
catalogue[col] += vega_to_ab[col[-1]]
flux, error = mag_to_flux(np.array(catalogue[col]), np.array(catalogue[errcol]))
# Fluxes are added in µJy
catalogue.add_column(Column(flux * 1.e6, name="f{}".format(col[1:])))
catalogue.add_column(Column(error * 1.e6, name="f{}".format(errcol[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
# TODO: Set to True the flag columns for fluxes that should not be used for SED fitting.
catalogue[:10].show_in_notebook()
###Output
_____no_output_____
###Markdown
II - Removal of duplicated sources We remove duplicated objects from the input catalogues.
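`remove_duplicates` comes from `herschelhelp_internal`. Purely as an illustration of the general idea (not the actual implementation), duplicates can be found with an internal positional match, keeping the entry with the smallest magnitude error in each close pair:

```
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def flag_duplicates(ra, dec, err, radius=0.4 * u.arcsec):
    """Keep the lowest-error member of each close positional pair (illustrative sketch only)."""
    coords = SkyCoord(ra, dec, unit='deg')
    idx1, idx2, _, _ = coords.search_around_sky(coords, radius)
    keep = np.ones(len(coords), dtype=bool)
    for i, j in zip(idx1, idx2):
        if i < j:                                   # each pair is reported twice; handle it once
            keep[i if err[i] > err[j] else j] = False
    return keep
```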
###Code
SORT_COLS = ['merr_ap_viking_y', 'merr_ap_viking_h', 'merr_ap_viking_j', 'merr_ap_viking_k']
FLAG_NAME = 'viking_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS,flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
###Output
/opt/anaconda3/envs/herschelhelp_internal/lib/python3.6/site-packages/astropy/table/column.py:1096: MaskedArrayFutureWarning: setting an item on a masked array which has a shared mask will not copy the mask and also change the original mask array in the future.
Check the NumPy 1.11 release notes for more information.
ma.MaskedArray.__setitem__(self, index, value)
###Markdown
III - Astrometry correctionWe match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
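`astrometric_correction` (also from `herschelhelp_internal`) is assumed to estimate a bulk RA/Dec offset from a cross-match with Gaia; a minimal illustration of that idea (not the actual implementation) could look like this:

```
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def median_offset(cat_coords, ref_coords, radius=0.6 * u.arcsec):
    """Median RA/Dec offset of a catalogue with respect to a reference catalogue (e.g. Gaia)."""
    idx, d2d, _ = cat_coords.match_to_catalog_sky(ref_coords)
    good = d2d < radius                             # keep only close matches
    dra, ddec = cat_coords[good].spherical_offsets_to(ref_coords[idx[good]])
    return np.median(dra.to(u.arcsec)), np.median(ddec.to(u.arcsec))
```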
###Code
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_XMM-LSS.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
catalogue[RA_COL] += delta_ra.to(u.deg)
catalogue[DEC_COL] += delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
###Output
_____no_output_____
###Markdown
IV - Flagging Gaia objects
###Code
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
GAIA_FLAG_NAME = "viking_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
###Output
15270 sources flagged.
###Markdown
V - Saving to disk
###Code
catalogue.write("{}/VISTA-VIKING.fits".format(OUT_DIR), overwrite=True)
###Output
_____no_output_____ |
project-tv-script-generation/.ipynb_checkpoints/dlnd_tv_script_generation_try1-checkpoint.ipynb | ###Markdown
TV Script GenerationIn this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chroniclesscripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new ,"fake" TV script, based on patterns it recognizes in this training data. Get the DataThe data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text. >* As a first step, we'll load in this data and look at some samples. * Then, you'll be tasked with defining and training an RNN to generate a new script!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
###Output
_____no_output_____
###Markdown
Explore the DataPlay around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
###Code
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 46367
Number of lines: 109233
Average number of words in each line: 5.544240293684143
The lines 0 to 10:
jerry: do you know what this is all about? do you know, why were here? to be out, this is out...and out is one of the single most enjoyable experiences of life. people...did you ever hear people talking about we should go out? this is what theyre talking about...this whole thing, were all out now, no one is home. not one person here is home, were all out! there are people trying to find us, they dont know where we are. (on an imaginary phone) did you ring?, i cant find him. where did he go? he didnt tell me where he was going. he must have gone out. you wanna go out you get ready, you pick out the clothes, right? you take the shower, you get all ready, get the cash, get your friends, the car, the spot, the reservation...then youre standing around, what do you do? you go we gotta be getting back. once youre out, you wanna get back! you wanna go to sleep, you wanna get up, you wanna go out again tomorrow, right? where ever you are in life, its my feeling, youve gotta go.
jerry: (pointing at georges shirt) see, to me, that button is in the worst possible spot. the second button literally makes or breaks the shirt, look at it. its too high! its in no-mans-land. you look like you live with your mother.
george: are you through?
jerry: you do of course try on, when you buy?
george: yes, it was purple, i liked it, i dont actually recall considering the buttons.
###Markdown
--- Implement Pre-processing FunctionsThe first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:- Lookup Table- Tokenize Punctuation Lookup TableTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:- Dictionary to go from the words to an id, we'll call `vocab_to_int`- Dictionary to go from the id to word, we'll call `int_to_vocab`Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
###Code
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
vocab = set(text)
vocab_to_int = {}
int_to_vocab = {}
for i, word in enumerate(vocab):
vocab_to_int[word] = i
int_to_vocab[i] = word
# return tuple
return (vocab_to_int, int_to_vocab)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
###Output
Tests Passed
###Markdown
Tokenize PunctuationWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token:- Period ( **.** )- Comma ( **,** )- Quotation Mark ( **"** )- Semicolon ( **;** )- Exclamation mark ( **!** )- Question mark ( **?** )- Left Parentheses ( **(** )- Right Parentheses ( **)** )- Dash ( **-** )- Return ( **\n** )This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
token_dict = {
'.':'||period||',
',':'||comma||',
'"':'||quotation_mark||',
';':'||semicolon||',
'!':'||exclamation_mark||',
'?':'||question_mark||',
'(':'||left_parentheses||',
')':'||right_parentheses||',
'-':'||dash||',
'\n':'||return||',
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
###Output
Tests Passed
###Markdown
Pre-process all the data and save itRunning the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural NetworkIn this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions. Check Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
###Output
_____no_output_____
###Markdown
InputLet's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.

```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size)
```

BatchingImplement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.For example, say we have these as input:

```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```

Your first `feature_tensor` should contain the values:

```
[1, 2, 3, 4]
```

And the corresponding `target_tensor` should just be the next "word"/tokenized word value:

```
5
```

This should continue with the second `feature_tensor`, `target_tensor` being:

```
[2, 3, 4, 5]  # features
6             # target
```
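As a quick, self-contained illustration of the windowing described above (a minimal sketch, separate from the `batch_data` implementation that follows):

```
import torch
from torch.utils.data import TensorDataset, DataLoader

words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4

features = [words[i:i + sequence_length] for i in range(len(words) - sequence_length)]
targets = [words[i + sequence_length] for i in range(len(words) - sequence_length)]

data = TensorDataset(torch.tensor(features), torch.tensor(targets))
loader = DataLoader(data, batch_size=2)

for x, y in loader:
    print(x, y)   # first batch: tensor([[1, 2, 3, 4], [2, 3, 4, 5]]) tensor([5, 6])
```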
###Code
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
print(n_batches)
# only full batches
words = words[:n_batches*batch_size]
x, y = [], []
for i in range(0, len(words)-sequence_length):
x_batch = words[i:i+sequence_length]
y_batch = words[i+sequence_length]
x.append(x_batch)
y.append(y_batch)
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.array(y)))
dataloader = DataLoader(data, shuffle=True, batch_size = batch_size)
# return a dataloader
return dataloader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
###Output
_____no_output_____
###Markdown
Test your dataloader You'll have to modify this code to test a batching function, but it should look fairly similar.Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.Your code should return something like the following (likely in a different order, if you shuffled your data):

```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
        [ 21, 22, 23, 24, 25],
        [ 17, 18, 19, 20, 21],
        [ 34, 35, 36, 37, 38],
        [ 11, 12, 13, 14, 15],
        [ 23, 24, 25, 26, 27],
        [  6,  7,  8,  9, 10],
        [ 38, 39, 40, 41, 42],
        [ 25, 26, 27, 28, 29],
        [  7,  8,  9, 10, 11]])

torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```

SizesYour sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10). ValuesYou should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
###Code
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = data_iter.next()
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
###Output
5
torch.Size([10, 5])
tensor([[ 12, 13, 14, 15, 16],
[ 27, 28, 29, 30, 31],
[ 40, 41, 42, 43, 44],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 1, 2, 3, 4, 5],
[ 35, 36, 37, 38, 39],
[ 2, 3, 4, 5, 6],
[ 33, 34, 35, 36, 37],
[ 38, 39, 40, 41, 42]])
torch.Size([10])
tensor([ 17, 32, 45, 28, 11, 6, 40, 7, 38, 43])
###Markdown
--- Build the Neural NetworkImplement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class: - `__init__` - The initialize function. - `init_hidden` - The initialization function for an LSTM/GRU hidden state - `forward` - Forward propagation function. The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word. Hints1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:

```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
###Code
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
:param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
# set class variables
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
# dropout layer
#self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
#self.sig = nn.Sigmoid()
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
# embeddings and lstm_out
nn_input = nn_input.long()
embeds = self.embedding(nn_input)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
#out = self.dropout(lstm_out)
output = self.fc(lstm_out)
# reshape to be batch_size first
output = output.view(batch_size, -1, self.output_size)
output = output[:, -1] # get last batch of labels
# return one batch of output word scores and the hidden state
return output, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
# initialize hidden state with zero weights, and move to GPU if available
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
###Output
Tests Passed
###Markdown
Define forward and backpropagationUse the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:```loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)```And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.**If a GPU is available, you should move your data to that GPU device, here.**
###Code
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
clip=5 # gradient clipping
# move data to GPU, if available
if(train_on_gpu):
inp, target = inp.cuda(), target.cuda()
# perform backpropagation and optimization
hidden = tuple([each.data for each in hidden])
rnn.zero_grad()
output, hidden = rnn(inp, hidden)
#print(output.shape)
#print(hidden)
#print(output)
loss = criterion(output.squeeze(), target.long())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(rnn.parameters(), clip)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
#print(loss.item())
return loss.item(), hidden
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
###Output
Tests Passed
###Markdown
Neural Network TrainingWith the structure of the network complete and data ready to be fed into the neural network, it's time to train it. Train LoopThe training loop is implemented for you in the `train_rnn` function. This function will train the network over all the batches for the number of epochs given. Model progress will be shown every set number of batches. This number is set with the `show_every_n_batches` parameter. You'll set this parameter along with other parameters in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
###Output
_____no_output_____
###Markdown
HyperparametersSet and train the neural network with the following parameters:- Set `sequence_length` to the length of a sequence.- Set `batch_size` to the batch size.- Set `num_epochs` to the number of epochs to train for.- Set `learning_rate` to the learning rate for an Adam optimizer.- Set `vocab_size` to the number of unique tokens in our vocabulary.- Set `output_size` to the desired size of the output.- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.- Set `hidden_dim` to the hidden dimension of your RNN.- Set `n_layers` to the number of layers/cells in your RNN.- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
###Code
# Data params
# Sequence Length
sequence_length = 10 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = .001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 256
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 5000
###Output
_____no_output_____
###Markdown
TrainIn the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train. > **You should aim for a loss less than 3.5.** You should also experiment with different sequence lengths, which determine the size of the long range dependencies that a model can learn.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
###Output
Training for 10 epoch(s)...
Epoch: 1/10 Loss: 5.506498638153076
Epoch: 1/10 Loss: 4.877447241783142
Epoch: 1/10 Loss: 4.644863019943237
Epoch: 1/10 Loss: 4.550259655952454
Epoch: 1/10 Loss: 4.485655527114869
Epoch: 1/10 Loss: 4.377468444824219
Epoch: 1/10 Loss: 4.3330537147521975
Epoch: 1/10 Loss: 4.297451208114624
Epoch: 1/10 Loss: 4.272087756633758
Epoch: 1/10 Loss: 4.234725869178772
Epoch: 1/10 Loss: 4.201399923324585
Epoch: 1/10 Loss: 4.192778712272644
Epoch: 1/10 Loss: 4.18784255695343
Epoch: 2/10 Loss: 4.060189198117611
Epoch: 2/10 Loss: 3.9615867052078246
Epoch: 2/10 Loss: 3.9691027655601503
Epoch: 2/10 Loss: 3.9519200410842896
Epoch: 2/10 Loss: 3.9647532691955565
Epoch: 2/10 Loss: 3.934483114242554
Epoch: 2/10 Loss: 3.9386162099838256
Epoch: 2/10 Loss: 3.9188381514549255
Epoch: 2/10 Loss: 3.9008119230270384
Epoch: 2/10 Loss: 3.9270327596664427
Epoch: 2/10 Loss: 3.9407107014656066
Epoch: 2/10 Loss: 3.9228728451728823
Epoch: 2/10 Loss: 3.934419768333435
Epoch: 3/10 Loss: 3.8343563892624597
Epoch: 3/10 Loss: 3.7426507859230043
Epoch: 3/10 Loss: 3.7557671813964846
Epoch: 3/10 Loss: 3.7736143598556517
Epoch: 3/10 Loss: 3.7458421902656553
Epoch: 3/10 Loss: 3.7396921286582945
Epoch: 3/10 Loss: 3.781070989608765
Epoch: 3/10 Loss: 3.7556300292015075
Epoch: 3/10 Loss: 3.7549902181625368
Epoch: 3/10 Loss: 3.773991184234619
Epoch: 3/10 Loss: 3.7551934151649475
Epoch: 3/10 Loss: 3.7813932132720947
Epoch: 3/10 Loss: 3.7605673551559446
Epoch: 4/10 Loss: 3.687348848039454
Epoch: 4/10 Loss: 3.6174276361465454
Epoch: 4/10 Loss: 3.632415452003479
Epoch: 4/10 Loss: 3.60900665807724
Epoch: 4/10 Loss: 3.6264643836021424
Epoch: 4/10 Loss: 3.652281243801117
Epoch: 4/10 Loss: 3.6259088106155395
Epoch: 4/10 Loss: 3.641796570777893
Epoch: 4/10 Loss: 3.608736423969269
Epoch: 4/10 Loss: 3.6658333034515382
Epoch: 4/10 Loss: 3.64899453496933
Epoch: 4/10 Loss: 3.6710940074920653
Epoch: 4/10 Loss: 3.671676317214966
Epoch: 5/10 Loss: 3.593381263746703
Epoch: 5/10 Loss: 3.5083509378433226
Epoch: 5/10 Loss: 3.5186539788246156
Epoch: 5/10 Loss: 3.526905979633331
Epoch: 5/10 Loss: 3.526041862487793
Epoch: 5/10 Loss: 3.540880611896515
Epoch: 5/10 Loss: 3.5590038523674012
Epoch: 5/10 Loss: 3.555765299320221
Epoch: 5/10 Loss: 3.5798491163253785
Epoch: 5/10 Loss: 3.5680856795310976
Epoch: 5/10 Loss: 3.5748853750228884
Epoch: 5/10 Loss: 3.5902964310646057
Epoch: 5/10 Loss: 3.60514697933197
Epoch: 6/10 Loss: 3.52073567268277
Epoch: 6/10 Loss: 3.434302396297455
Epoch: 6/10 Loss: 3.429089115142822
Epoch: 6/10 Loss: 3.467383470535278
Epoch: 6/10 Loss: 3.461300371170044
Epoch: 6/10 Loss: 3.477927261829376
Epoch: 6/10 Loss: 3.488366159915924
Epoch: 6/10 Loss: 3.5074389123916627
Epoch: 6/10 Loss: 3.479556882381439
Epoch: 6/10 Loss: 3.5131272134780884
Epoch: 6/10 Loss: 3.5079288334846495
Epoch: 6/10 Loss: 3.5293037824630735
Epoch: 6/10 Loss: 3.550463225841522
Epoch: 7/10 Loss: 3.45417728034918
Epoch: 7/10 Loss: 3.394913876056671
Epoch: 7/10 Loss: 3.393556586742401
Epoch: 7/10 Loss: 3.4119445657730103
Epoch: 7/10 Loss: 3.420303556442261
Epoch: 7/10 Loss: 3.417772924423218
Epoch: 7/10 Loss: 3.451010533809662
Epoch: 7/10 Loss: 3.4430946741104127
Epoch: 7/10 Loss: 3.4528610763549805
Epoch: 7/10 Loss: 3.4568219618797302
Epoch: 7/10 Loss: 3.452462176799774
Epoch: 7/10 Loss: 3.484591769218445
Epoch: 7/10 Loss: 3.4895895872116087
Epoch: 8/10 Loss: 3.409822672359214
Epoch: 8/10 Loss: 3.341217004299164
Epoch: 8/10 Loss: 3.3462885613441467
Epoch: 8/10 Loss: 3.3690835256576537
Epoch: 8/10 Loss: 3.3675912661552427
Epoch: 8/10 Loss: 3.374692803859711
Epoch: 8/10 Loss: 3.391706500530243
Epoch: 8/10 Loss: 3.396349504947662
Epoch: 8/10 Loss: 3.4337016572952272
Epoch: 8/10 Loss: 3.4108635573387147
Epoch: 8/10 Loss: 3.427956174850464
Epoch: 8/10 Loss: 3.4229448461532592
Epoch: 8/10 Loss: 3.441431237220764
Epoch: 9/10 Loss: 3.366252738335901
Epoch: 9/10 Loss: 3.300349271774292
Epoch: 9/10 Loss: 3.31926242685318
Epoch: 9/10 Loss: 3.2994600176811217
Epoch: 9/10 Loss: 3.3354625854492186
Epoch: 9/10 Loss: 3.3441440467834473
Epoch: 9/10 Loss: 3.3583695521354677
Epoch: 9/10 Loss: 3.3639197597503663
Epoch: 9/10 Loss: 3.3809996342658994
Epoch: 9/10 Loss: 3.371726893424988
Epoch: 9/10 Loss: 3.3805938386917114
Epoch: 9/10 Loss: 3.423382860183716
Epoch: 9/10 Loss: 3.4237439546585082
Epoch: 10/10 Loss: 3.3301968308519725
Epoch: 10/10 Loss: 3.2772505798339844
Epoch: 10/10 Loss: 3.28263094997406
Epoch: 10/10 Loss: 3.2998509521484376
Epoch: 10/10 Loss: 3.299762092113495
Epoch: 10/10 Loss: 3.299778486251831
Epoch: 10/10 Loss: 3.319560504436493
Epoch: 10/10 Loss: 3.3200579319000245
Epoch: 10/10 Loss: 3.333797775268555
Epoch: 10/10 Loss: 3.3373168268203734
Epoch: 10/10 Loss: 3.3615681343078614
Epoch: 10/10 Loss: 3.380522684574127
Epoch: 10/10 Loss: 3.396915725708008
###Markdown
Question: How did you decide on your model hyperparameters? For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those? **Answer:** I tried many combinations of hyperparameters and dropout.

1. **Basic params to test model functionality** - sequence_length = 10, batch_size = 50, num_epochs = 1, learning_rate = 0.001, vocab_size = len(vocab_to_int) + 1, output_size = vocab_size, embedding_dim = 50, hidden_dim = 10, n_layers = 2. **Result** - training was happening.
2. **Set 1** - increased hidden dimension only: sequence_length = 10, batch_size = 50, num_epochs = 1, learning_rate = 0.001, vocab_size = len(vocab_to_int) + 1, output_size = vocab_size, embedding_dim = 50, hidden_dim = 128, n_layers = 2. **Result** - loss was stuck around 5 and not decreasing beyond that.
3. **Set 2** - increased hidden and embedding dimensions: sequence_length = 20, batch_size = 50, num_epochs = 1, learning_rate = 0.001, vocab_size = len(vocab_to_int) + 1, output_size = vocab_size, embedding_dim = 100, hidden_dim = 128, n_layers = 2. **Result** - again, loss was stuck around 5 and not decreasing beyond that.
4. **Set 3** - increased hidden dimension further and the batch size: sequence_length = 10, batch_size = 128, num_epochs = 2, learning_rate = 0.01, vocab_size = len(vocab_to_int) + 1, output_size = vocab_size, embedding_dim = 100, hidden_dim = 256, n_layers = 2. **Result** - loss was oscillating (increasing and decreasing).
5. **Set 4** - increased epochs and embedding dimension, and removed the +1 from vocab size as there is no padding here: sequence_length = 10, batch_size = 128, num_epochs = 20, learning_rate = 0.01, vocab_size = len(vocab_to_int) (no padding), output_size = vocab_size, embedding_dim = 300, hidden_dim = 256, n_layers = 2. **Result** - loss was decreasing at the start but plateaued after 10 epochs.
6. **Set 5** - decreased embedding dimension to 200 and removed the dropout layer: sequence_length = 10, batch_size = 128, num_epochs = 20, learning_rate = 0.001, vocab_size = len(vocab_to_int) (no padding), output_size = vocab_size, embedding_dim = 200, hidden_dim = 256, n_layers = 2. **Result** - finally a loss of 3.396915725708008, but not happy with the generated script.
7. **Set 6** - increased embedding dimension to 300 and used 3 LSTM layers: sequence_length = 10, batch_size = 128, num_epochs = 20, learning_rate = 0.001, vocab_size = len(vocab_to_int) (no padding), output_size = vocab_size, embedding_dim = 300, hidden_dim = 256, n_layers = 3. **Result** -

--- Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
###Output
_____no_output_____
###Markdown
Generate TV ScriptWith the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section. Generate TextTo generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses topk sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param decoder: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of puncuation tokens keys to puncuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
###Output
_____no_output_____
###Markdown
Generate a New ScriptIt's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:- "jerry"- "elaine"- "george"- "kramer"You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
###Code
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:48: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().
###Markdown
Save your favorite scriptsOnce you have a script that you like (or find interesting), save it to a text file!
###Code
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
###Output
_____no_output_____ |
notebooks/data_collection.ipynb | ###Markdown
Spotify Data Collection Jacob Torres---
###Code
# imports
import numpy as np
import pandas as pd
from dotenv import load_dotenv
from os import getenv
from spotipy import Spotify, SpotifyException
from spotipy.oauth2 import SpotifyClientCredentials
from pycountry import countries
# Instantiate authorized Spotify API object
load_dotenv()
client_id = getenv('CLIENT_ID')
client_secret = getenv('CLIENT_SECRET')
auth_manager = SpotifyClientCredentials(
client_id=client_id,
client_secret=client_secret
)
spotify = Spotify(auth_manager=auth_manager)
# Function for gathering Spotify category playlists by country
def get_playlists():
"""
Creates a list of Spotify markets,
and queries the Spotify API for "decades" playlists
returns:
-------
playlists: dict
Dictionary of decades playlists from each supported country
"""
# Collect category data for all Spotify markets
markets = spotify.available_markets()['markets']
print(f"Number of Spotify markets: {len(markets)}")
# Get decades playlists for each country
categories = [spotify.categories(market) for market in markets]
playlists = {}
for market, category in zip(markets, categories):
items = category['categories']['items']
playlist_ids = [item['id'] for item in items]
if 'decades' in playlist_ids:
playlists[market] = spotify.category_playlists('decades', country=market)
print(f"Number of countries with decades playlists: {len(playlists)}")
return playlists
playlists = get_playlists()
# Create country index
country_index = {}
for market in playlists.keys():
country = countries.get(alpha_2=market)
if country is not None:
country_index[market] = country.name
country_index
playlists['AE']
###Output
_____no_output_____
###Markdown
Load Dataset labeled with hate speech
###Code
import requests
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('../data/hate_add.csv')
df = df[(df['racism']=='racism') | (df['racism']=='sexism')]
df.head()
df.info()
df.racism.value_counts()
df.columns = ['id','label']
def group_list(l,size = 100):
"""
Generate batches of up to 100 ids each
Returns a list of comma-separated id strings
"""
n_l =[]
idx = 0
while idx < len(l):
n_l.append(
','.join([str(i) for i in l[idx:idx+size]])
)
idx += size
return n_l
def tweets_request(tweets_ids):
"""
Make requests to the Twitter API
"""
df_lst = []
for batch in tweets_ids:
url = "https://api.twitter.com/2/tweets?ids={}&tweet.fields=created_at&expansions=author_id&user.fields=created_at".format(batch)
payload={}
headers = {'Authorization': 'Bearer ',
'Cookie': 'personalization_id="v1_hzpv7qXpjB6CteyAHDWYQQ=="; guest_id=v1%3A161498381400435837'}
r = requests.request("GET", url, headers=headers, data=payload)
data = r.json()
if 'data' in data.keys():
df_lst.append(pd.DataFrame(data['data']))
return pd.concat(df_lst)
###Output
_____no_output_____
###Markdown
Getting actual tweets text with API requests
###Code
racism_sex_hate_id = group_list(list(df.id))
# df_rac_sex_hate = tweets_request(racism_sex_hate_id)
df_rac_sex_hate = pd.read_csv('../data/df_ras_sex_hate.csv')
df_rac_sex_hate.head()
df_rac_sex_hate = df_rac_sex_hate.drop(columns=['Unnamed: 0', 'id', 'author_id', 'created_at'])
df_rac_sex_hate['class'] = 1
df_rac_sex_hate.head()
df_rac_sex_hate.shape
###Output
_____no_output_____
###Markdown
Loading the second labeled dataset of tweets
###Code
df_l = pd.read_csv("../data/labeled_data.csv")
df_l.head()
print(df_l['class'].value_counts(normalize=True))
# Class Imbalance
fig, ax = plt.subplots(figsize=(10,6))
ax = sns.countplot(df_l['class'], palette='Set2')
ax.set_title('Amount of Tweets Per Label',fontsize = 20)
ax.set_xlabel('Type of Tweet',fontsize = 15)
ax.set_ylabel('Count',fontsize = 15)
ax.set_xticklabels(['Hate_speech','Offensive_language', 'Neither'],fontsize = 13)
total = float(len(df_l)) # one tweet per row
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f}'.format(height/total * 100) + '%',
ha="center")
# class 0 - hate tweets
# class 1 - offensive_language tweets
# class 2 - neither tweets
df_l['class'].value_counts()
###Output
_____no_output_____
###Markdown
Let's combine Offensive_language and Neither and mark them as not_hate_speech
###Code
df_hate_not_hate = df_l.copy()
df_hate_not_hate['class'] = df_hate_not_hate['class'].map(lambda x : 1 if x == 0 else 0)
df_hate_not_hate['class'].value_counts()
print(df_hate_not_hate['class'].value_counts(normalize=True))
# Class Imbalance
fig, ax = plt.subplots(figsize=(10,6))
ax = sns.countplot(df_hate_not_hate['class'], palette='Set2')
ax.set_title('Amount of Tweets Per Label',fontsize = 20)
ax.set_xlabel('Type of Tweet',fontsize = 15)
ax.set_ylabel('Count',fontsize = 15)
ax.set_xticklabels(['Not Hate Speech','Hate_speech'],fontsize = 13)
total = float(len(df_l)) # one tweet per row
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f}'.format(height/total * 100) + '%',
ha="center")
plt.savefig('../images/class_inbal.png')
# considering only class 0 (hate tweets) and class 2 (neither tweets) as binary classification problem
# updating neither tweets as not hate speech
df_l = df_l.drop(columns=['Unnamed: 0', 'count', 'hate_speech', 'offensive_language', 'neither'])
df_l = df_l[(df_l['class']==0) | (df_l['class']==2)]
df_l['class'] = df_l['class'].map(lambda x : 0 if x == 2 else 1)
df_l.rename(columns={'tweet':'text'}, inplace= True)
###Output
_____no_output_____
###Markdown
Let's combine the two data frames with labeled classes of hate speech and not hate speech
###Code
df_combined = pd.concat([df_rac_sex_hate,df_l])
print(df_combined['class'].value_counts(normalize=True))
# Class Imbalance
fig, ax = plt.subplots(figsize=(10,6))
ax = sns.countplot(df_combined['class'], palette='Set2')
ax.set_title('Amount of Tweets Per Label',fontsize = 20)
ax.set_xlabel('Type of Tweet',fontsize = 15)
ax.set_ylabel('Count',fontsize = 15)
ax.set_xticklabels(['Not Hate Speech','Hate Speech'],fontsize = 13)
total = float(len(df_combined)) # one tweet per row
for p in ax.patches:
height = p.get_height()
ax.text(p.get_x()+p.get_width()/2.,
height + 3,
'{:1.2f}'.format(height/total * 100) + '%',
ha="center")
plt.savefig('../images/class_balan.png')
###Output
1 0.50066
0 0.49934
Name: class, dtype: float64
###Markdown
Saving combined Data to CSV file
###Code
df_combined.to_csv('../data/balanced_data_combined.csv')
###Output
_____no_output_____
###Markdown
Collecting app data
###Code
import json
from tqdm import tqdm
from google_play_scraper import Sort, app, reviews
from pygments import highlight
from pygments.formatters import TerminalFormatter
from pygments.lexers import JsonLexer
app_packages = [
'com.anydo',
'com.todoist',
'com.ticktick.task',
'com.habitrpg.android.habitica',
'cc.forestapp',
'com.oristats.habitbull',
'com.levor.liferpgtasks',
'com.habitnow',
'com.microsoft.todos',
'prox.lab.calclock',
'com.artfulagenda.app',
'com.tasks.android',
'com.appgenix.bizcal',
'com.appxy.planner',
'com.android.chrome'
]
app_infos = []
for ap in tqdm(app_packages):
info = app(ap, lang='en', country='us')
del info['comments']
app_infos.append(info)
def print_json(json_object):
json_str = json.dumps(
json_object,
indent=2,
sort_keys=True,
default=str
)
print(highlight(json_str, JsonLexer(), TerminalFormatter()))
print_json(app_infos[0])
df_app_infos = pd.DataFrame(app_infos)
df_app_infos.to_csv('./data/app_data.csv', index=None, header=True)
###Output
_____no_output_____
###Markdown
Scraping app data
###Code
app_reviews = []
for app in tqdm(app_packages):
for score in range(1, 6):
for sort_order in [Sort.MOST_RELEVANT, Sort.NEWEST]:
rvs = reviews(
app,
lang='en',
country='us',
sort=sort_order,
count=200 if score == 3 else 100,
filter_score_with=score
)[0]
for r in rvs:
r['sortOrder'] = 'most_relevant' if sort_order == Sort.MOST_RELEVANT else 'newest'
r['appId'] = app
app_reviews.extend(rvs)
df_app_reviews = pd.DataFrame(app_reviews)
df_app_reviews.head()
df_app_reviews.to_csv('./data/app_review.csv', index=None, header=True)
###Output
_____no_output_____
###Markdown
Setup
###Code
%%capture
!pip install eikon
!pip install wandb
import eikon as ek
import wandb
import pandas as pd
import numpy as np
import time
import os
import glob
from tqdm.auto import tqdm
#"api_key_lk"
ek.set_app_key("example")
###Output
_____no_output_____
###Markdown
Screen companies
###Code
mic_exchanges = pd.read_csv("mic_codes.csv").set_index("MIC") #Can be used to look up specific stock exchanges codes :) (Y)
oil_osebx_screen = 'SCREEN(U(IN(Equity(active,public,primary))), TR.CompanyMarketCap>=500000, IN(TR.ExchangeMarketIdCode,"XOSL"), IN(TR.TRBCBusinessSectorCode,"5010","5020","5030"), CURN=USD)'
fields_oil_osebx_screen = ["TR.CommonName"]#["TR.CommonName","TR.CompanyMarketCap","TR.ExchangeName","TR.TRBCBusinessSector","TR.TotalReturn3Mo"]
osbx_companies, e = ek.get_data(oil_osebx_screen, fields_oil_osebx_screen)
osbx_companies = osbx_companies.set_index("Instrument")
oil_global_screen = 'SCREEN(U(IN(Equity(active,public,primary))), TR.CompanyMarketCap>=500000, IN(TR.TRBCBusinessSectorCode,"5010","5020","5030"), CURN=USD)'
fields_oil_global_screen = ["TR.CommonName"]
global_oil, e = ek.get_data(oil_global_screen, fields_oil_global_screen)
global_oil = global_oil.set_index("Instrument")
###Output
_____no_output_____
###Markdown
Now we have a dataframe of all listed oil companies (with the market-cap floor applied in the screen above) in Eikon Refinitiv's entire database Collect data from Eikon Refinitiv
###Code
######## INPUTS ########
lst_of_tickers = global_oil.index.to_list()
#Eikon parameters
start_date = '2000-01-01'
end_date = '2022-04-21'
ek_params = {'SDate': start_date, 'EDate': end_date,'Frq': 'FQ', "Curn":"USD"}
#Max http company request at once
search_limit = 10_000
#What data to get
get_stock_data = False
get_meta_data = True
get_fundamental_data = False
get_broker_data = False
toggle_dict = {'stock_data':get_stock_data, 'meta_data':get_meta_data,
'fundamental_data':get_fundamental_data, 'broker_data':get_broker_data}
params = ek_params | toggle_dict | {'limit': search_limit}
########################
###Output
_____no_output_____
###Markdown
We define functions to find stock, meta, fundamental and broker estimates data
###Code
def _sub_lists(data, size_m):
return [data[x:x+size_m] for x in range(0, len(data), size_m)]
#Function to counteract http timeout
def _divide_pull_request(lst_of_tickers, fields, params, suffix):
p = {key: val for key, val in params.items() if key in ek_params}
if len(lst_of_tickers) > params['limit']:
dfs = []
for sub_ticker_lst in tqdm(_sub_lists(lst_of_tickers, params['limit']), suffix):
df_sub, err = ek.get_data(lst_of_tickers, fields, p)
print(df_sub)
dfs.append(df_sub)
df = pd.concat(dfs, axis=0)
else:
df, err = ek.get_data(lst_of_tickers, fields, p)
return df
def stock_data(lst_of_tickers, params):
params_new = params.copy()
params_new['Frq'] = 'D'
fields = ['TR.CompanyMarketCap.Date','TR.CompanyMarketCap', 'TR.PriceClose',
'TR.CompanyMarketCap.Currency'] #TR.F.ComShrOutsTot
stock_df = _divide_pull_request(lst_of_tickers, fields=fields, params=params_new, suffix=' Getting time series')
return stock_df
#Meta data collector
def meta_data(lst_of_tickers):
geography = ['TR.ExchangeMarketIdCode', 'TR.HeadquartersRegionAlt', 'TR.HeadquartersCountry', 'TR.HQStateProvince']
sectors = ['TR.TRBCEconomicSector', 'TR.TRBCBusinessSector', 'TR.TRBCIndustryGroup', 'TR.TRBCIndustry', 'TR.TRBCActivity']
founded = ['TR.OrgFoundedYear']
meta_data = geography + founded + sectors
meta_df, _ = ek.get_data(lst_of_tickers, meta_data)
meta_df = meta_df.set_index("Instrument")
meta_df['Organization Founded Year'] = meta_df['Organization Founded Year'].replace(0, np.NaN) #<-- Eikon hilariously uses 0 instead of Na for missing year value
return meta_df
#Fundamental data collector
def fundamental_data(lst_of_tickers, params):
#fields
profits = ['TR.TotalRevenue', 'TR.GrossProfit','TR.EBITDA','TR.EBIT', 'TR.F.NetIncAfterTax']#, 'TR.EV','MKT_CAP']
balance = ['TR.F.TotAssets','TR.F.TotCurrAssets','TR.F.TotLiab','TR.F.TotCurrLiab','TR.F.LTDebtPctofTotAssets','TR.F.STDebtPctofTotAssets']#TR.F.TotLiab(Period=FY0)
cash_flow = ['TR.F.LeveredFOCF']
fundamental_data = profits + balance + cash_flow
other = []#['TR.InsiderBuyDepthComp'] <--- NA only, could be interesting to use....
reported_dates = ['TR.TotalRevenue.date','TR.TotalRevenue.periodenddate','TR.BSOriginalAnnouncementDate']
fields = reported_dates + fundamental_data + other
#collect data
fundamental_df = _divide_pull_request(lst_of_tickers, fields, params, suffix=' Getting fundamentals')
return fundamental_df
def broker_estimates(lst_of_tickers, params):
params_new = params.copy()
params_new["Period"] = "FY1"
fields = ["TR.EPSMean","TR.EPSMean.periodenddate","TR.EBITMean",'TR.RevenueMean',
"TR.ROAMean","TR.ROEMean","TR.FCFMean","TR.TotalAssets","TR.MeanPctChg(Period=FY1,WP=60d)"]
estimates_df, err = ek.get_data(lst_of_tickers, fields, params)
return estimates_df
def get_data(lst_of_tickers, params):
stock_df = None
meta_df = None
fundamental_df = None
broker_df = None
if params['stock_data']:
stock_df = stock_data(lst_of_tickers, params)
if params['meta_data']:
meta_df = meta_data(lst_of_tickers)
if params['fundamental_data']:
fundamental_df = fundamental_data(lst_of_tickers, params)
if params['broker_data']:
broker_df = broker_estimates(lst_of_tickers, params)
return stock_df, meta_df, fundamental_df, broker_df
def save_data(file_name, save_per_n_http_request, lst_of_tickers, params):
non_collected_tickers = []
name_to_index = {}
dfs = {}
for i, possible_key in enumerate(["stock_data", "meta_data", "fundamental_data", "broker_data"]):
if params[possible_key]:
name_to_index[possible_key] = i
dfs[possible_key] = []
partioned_lst_of_tickers = _sub_lists(lst_of_tickers, params["limit"])
for i, sub_ticker_lst in enumerate(tqdm(partioned_lst_of_tickers, "saving loop")):
try:
raw_data_dfs = get_data(sub_ticker_lst, params)
for key in name_to_index:
dfs[key] = dfs[key] + [raw_data_dfs[name_to_index[key]]]
if not (i % save_per_n_http_request):
for key in name_to_index:
df = pd.concat(dfs[key], axis=0)
df = df.reset_index()
df.to_feather(f"{file_name}_save={i}_type={key}.feather")
dfs[key] = []
except ek.EikonError as err:
for key in name_to_index:
dfs[key] = []
non_collected_tickers += sub_ticker_lst
except Exception as e:
print(e)
for key in name_to_index:
dfs[key] = []
non_collected_tickers += sub_ticker_lst
#Write crashes to file
with open(f"{file_name}.txt", "w") as f:
f.write("\n".join(non_collected_tickers))
#Save any remaining buffered data
for key in name_to_index:
if dfs[key] != []:
df = pd.concat(dfs[key], axis=0)
df = df.reset_index()
# write whatever is still buffered for this data type
df.to_feather(f"{file_name}_save={len(partioned_lst_of_tickers)}_type={key}.feather")
save_toggle = False
file_name = "C:/Users/kjartkra/Untitled Folder/meta_data/global_oil"
if save_toggle:
save_data(file_name, 1, lst_of_tickers, params)
def _time_interval(start_date, end_date):
y0 = int(start_date.split("-")[0])
yn = int(end_date.split("-")[0])
in_between_dates = [f"{str(year)}-01-01" for year in range(y0+1,yn,7)]
return [start_date] + in_between_dates + [end_date]
def macro_data(lst_of_tickers, ek_get_timeseries_fields, params):
start_and_ends = _time_interval(params["SDate"],params["EDate"])
tickers_to_serie = {}
for ticker in lst_of_tickers:
tickers_to_serie[ticker] = []
for i in range(len(start_and_ends)-1):
try:
time_series = ek.get_timeseries(ticker, fields=ek_get_timeseries_fields,
start_date=start_and_ends[i], end_date=start_and_ends[i+1], interval=params["interval"])
except ek.EikonError as err:
if err.code ==-1:
time_series = ek.get_timeseries("BRT-", fields=ek_get_timeseries_fields, start_date=start_and_ends[i], end_date=start_and_ends[i+1],interval=params["interval"])
time_series[ek_get_timeseries_fields] = np.nan
if err.code == 2504:
print("backend error")
time.sleep(2)
time_series = ek.get_timeseries(ticker, fields=ek_get_timeseries_fields,
start_date=start_and_ends[i], end_date=start_and_ends[i+1], interval=params["interval"])
tickers_to_serie[ticker] = tickers_to_serie[ticker] + [time_series]
tickers_to_serie[ticker] = pd.concat(tickers_to_serie[ticker], axis=0)
return tickers_to_serie
def dict_to_df(dictionary):
dates = set()
for key, frame in dictionary.items():
dates |= set(frame.index.values)
index = pd.Index(list(sorted(dates)))
all_macro = pd.DataFrame(index=index)
for key, frame in dictionary.items():
frame = frame[~frame.index.duplicated(keep='first')]
all_macro[key] = frame
return all_macro
def folder_to_df(folder_with_data):
files = glob.glob(folder_with_data + '/*.feather')
dfs = []
for file in files:
dfs.append(pd.read_feather(file).set_index("index"))
df_big = pd.concat(dfs, axis=0).reset_index()
df_big = df_big.drop("index", axis=1)
return df_big
def upload_artifact(run, dataframe_file_location, artifact_name):
artifact = wandb.Artifact(artifact_name, type='dataset')
# Add a file to the artifact's contents
artifact.add_file(dataframe_file_location)
# Save the artifact version to W&B and mark it as the output of this run
run.log_artifact(artifact)
collect_meta_data = False
if collect_meta_data:
stock_df, meta_df, fundamental_df, broker_df = get_data(lst_of_tickers, params)
meta_location = 'C:/Users/kjartkra/Untitled Folder/meta_oil.feather'
meta_df.reset_index().to_feather(meta_location)
collect_macro_data = False
if collect_macro_data:
macro_oil_params = ek_params.copy()
macro_oil_params["interval"] = "daily"
macro_oil_series = ["BRT-", "CLc1", "WTCLc1", "LNG-AS", ".VIX",'EUR=', 'GBP=', "CNY=", ]
macro_oil_fields = ["CLOSE"]
macro_oil = macro_data(macro_oil_series, macro_oil_fields , macro_oil_params)
fundamentals_df = folder_to_df('C:/Users/kjartkra/Untitled Folder/fundamental_data')
fundamentals_location = 'C:/Users/kjartkra/Untitled Folder/fundamentals_oil.feather'
fundamentals_df.to_feather(fundamentals_location)
macro_df = dict_to_df(macro_oil).reset_index()
oil_company_df = folder_to_df("C:/Users/kjartkra/Untitled Folder/stock_data/")
macro_location = 'C:/Users/kjartkra/Untitled Folder/macro_oil.feather'
company_location = 'C:/Users/kjartkra/Untitled Folder/companies_oil.feather'
macro_df.to_feather(macro_location)
oil_company_df.to_feather(company_location)
upload_stocks = False
upload_meta = True
upload_fundamentals = False
upload_macro = False
if upload_stocks or upload_meta or upload_fundamentals or upload_macro:
with wandb.init(project="master-test") as run:
if upload_stocks:
upload_artifact(run, company_location, "oil-company-data")
if upload_meta:
upload_artifact(run, meta_location, "oil-meta-data")
if upload_fundamentals:
upload_artifact(run, fundamentals_location, "oil-fundamental-data")
if upload_macro:
upload_artifact(run, macro_location, "oil-macro-data")
###Output
_____no_output_____
###Markdown
Testing
###Code
df_big
wandb.init()
artifact = wandb.Artifact('mnist', type='dataset')
artifact.add_dir('mnist/')
wandb.log_artifact(artifact)
time_series_df_2, meta_df, fundamental_df, broker_df = get_data(lst_of_tickers[:5], params)
time_series_df_2
time_series_df_2["Number Of Stocks"].isna().sum()
time_series_df["Common Shares - Outstanding - Total"].isna().sum()
time_series_df
pd.set_option('display.max_rows', 1000)
time_series_df.to_excel("stock_data_2.xlsx")
#Conclusion, makes small difference in time to process at server
test_time = False
if test_time:
params_single = {'SDate': start_date, 'EDate': end_date,'Frq': 'FQ','Period': 'FQ0'}
params_curn = {'SDate': start_date, 'EDate': end_date,'Frq': 'FQ','Period': 'FQ0', "Curn":"USD"}
start_time = time.time()
data,err = ek.get_data(osbx_companies.index[:3].to_list(), financials, params_single)
print("--- simple: %s seconds ---" % (time.time() - start_time))
start_time = time.time()
data_usd,err = ek.get_data(osbx_companies.index[:3].to_list(), financials, params_curn)
print("--- Curn: %s seconds ---" % (time.time() - start_time))
start_time = time.time()
data_all,err = ek.get_data(osbx_companies.index[:3].to_list(), financials, params)
print("--- Scale & Curn: %s seconds ---" % (time.time() - start_time))
###Output
_____no_output_____
###Markdown
Importation of dataThe same can be done with a Git Bash command to work locally
###Code
!curl -O https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/archives/fgvc-aircraft-2013b.tar.gz
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2625M 100 2625M 0 0 31.4M 0 0:01:23 0:01:23 --:--:-- 31.7M
###Markdown
Tar file options- x: extract the files- z: the file is gzip-compressed- t: list the files without extracting- v: verbose detail about the files- c: create a tarfile- f: specify the file name (always last)
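For reference, the same `zxf` extraction can also be done from Python's standard library (a small equivalent sketch, not part of the original notebook):

```python
import tarfile

# equivalent of `!tar zxf fgvc-aircraft-2013b.tar.gz`
with tarfile.open('fgvc-aircraft-2013b.tar.gz', 'r:gz') as archive:
    archive.extractall()
```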
###Code
# Example: list the files before extracting
#!tar tzvf fgvc-aircraft-2013b.tar.gz
!tar zxf fgvc-aircraft-2013b.tar.gz
###Output
_____no_output_____
###Markdown
Rename dataset
###Code
! mv fgvc-aircraft-2013b dataset
###Output
_____no_output_____
###Markdown
Extract labels to yaml files
###Code
import pandas as pd
import yaml
label=pd.read_csv('dataset/data/families.txt',
sep="\t", # Use a separator that do not exist to have a unique column as output
names=['all'],
dtype={'all': str} # Allows to keep the id begining with 00
)
label_dic = label['all'].to_dict()
with open(r'family_label.yaml','w') as file:
documents = yaml.dump(label_dic, file)
label=pd.read_csv('dataset/data/manufacturers.txt',
sep="\t", # Use a separator that do not exist to have a unique column as output
names=['all'],
dtype={'all': str} # Allows to keep the id begining with 00
)
label_dic = label['all'].to_dict()
with open(r'manufacturer_label.yaml','w') as file:
documents = yaml.dump(label_dic, file)
label=pd.read_csv('dataset/data/variants.txt',
sep="\t", # Use a separator that do not exist to have a unique column as output
names=['all'],
dtype={'all': str} # Allows to keep the id begining with 00
)
label_dic = label['all'].to_dict()
with open(r'variant_label.yaml','w') as file:
documents = yaml.dump(label_dic, file)
###Output
_____no_output_____
###Markdown
Downloading dataThe full dataset is from 2012-2021, and includes both Atlantic storms and Eastern Pacific Storms.
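Only the 2012 archive is fetched in the cell below; covering the full 2012-2021 range presumably means repeating the same request per year (a sketch, assuming the archive keeps the same `.../MESSAGES/<year>/dis/` layout):

```python
# sketch: one directory-listing URL per season, 2012 through 2021
base_url = 'https://ftp.nhc.noaa.gov/atcf/archive/MESSAGES/{year}/dis/'
yearly_urls = [base_url.format(year=year) for year in range(2012, 2022)]
```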
###Code
import os
import re
import requests
from bs4 import BeautifulSoup
import urllib
from urllib.request import urlopen, urlretrieve, quote
url = 'https://ftp.nhc.noaa.gov/atcf/archive/MESSAGES/2012/dis/'
r = requests.get(url)
soup = BeautifulSoup(r.content)
files = soup.find_all("a", href=re.compile("discus"))
def download(dest_folder: str):
if not os.path.exists(dest_folder):
os.makedirs(dest_folder) # create folder if it does not exist
for file in files:
file_link = url + file.get('href')
filename = file_link.split('/')[-1].replace(".", "_")
filepath = os.path.join(dest_folder, filename)
r = requests.get(file_link, stream=True)
if r.ok:
print("saving to", os.path.abspath(filepath))
with open(filepath, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024 * 8):
if chunk:
f.write(chunk)
f.flush()
os.fsync(f.fileno())
else: # HTTP status code 4XX/5XX
print("Download failed: status code {}\n{}".format(r.status_code, r.text))
download(dest_folder="discussions")
###Output
_____no_output_____
###Markdown
Initial Preprocessing: Perl ScriptsI ran these two scripts in the same directory as the downloaded text files. The first one (`remove_issuer.pl`) removes an extra line in the header that was present on about 10% of the files. The second one (`cleanup.pl`) inserts custom delimiters and corrects some errors in desired data fields. `remove_issuer.pl`:
###Code
#!/usr/bin/perl -w
use strict;
use warnings;
opendir IN, 'input';
my @in = grep { /^[^.]/ } readdir IN; # reads all file names from current directory
closedir IN;
for my $in (@in) {
open IN, '<', "input/$in" || next;
open OUT, '>', "output/$in" || die "can't open file output/$in";
while(<IN>) { # reads input file line by line
print OUT unless ($. == 6 and m/Issued|ISSUED|NWS/);
if(eof){ # if the next line is the end-of-file
close ARGV ; # closes the current filehandle to reset $.
}
}
close OUT;
close IN;
}
###Output
_____no_output_____
###Markdown
`cleanup.pl`:
###Code
#!/usr/bin/perl -w
use strict;
use warnings;
opendir IN, 'input';
my @in = grep { /^[^.]/ } readdir IN; # reads all file names from current directory
closedir IN;
for my $in (@in) {
open IN, '<', "output/$in" || next;
open OUT, '>', "clean_output/$in" || die "can't open file output/$in";
while(<IN>) { # reads input file line by line
$. == 6 and print OUT "|"; # inserts a vertical bar at line 6
$. == 7 and print OUT "|"; # inserts a vertical bar at line 7
s/FORECAST POSITIONS AND MAX WINDS|\$\$/|/;
s/\n|NNNN/ /g;
s/CVT/CDT/;
} continue {
print OUT; # prints the edited file to the output folder
if(eof){ # if the next line is the end-of-file
close ARGV ; # closes the current filehandle to reset $.
}
}
close OUT;
close IN;
}
###Output
_____no_output_____
###Markdown
Creating Corpus Data Frame * Could not read discussions/clean_output/al182020_discus_001 - needed to remove newlines and change 'CVT' to 'CDT' manually* Also needed to manually move the separators around the date field in ep172014_discus_018
###Code
import glob
import pandas as pd
from pandas.errors import ParserError
discussion_list = []
discussions = glob.glob("discussions/clean_output/*")
for d in discussions:
try:
dis_df = pd.read_csv(d, header=None, sep='|', on_bad_lines='skip', names=['info', 'date', 'body', 'positions', 'author']).assign(tag=d)
discussion_list.append(dis_df)
except ParserError:
raise Exception('Could not read {}'.format(d))
corpus = pd.concat(discussion_list, axis=0, ignore_index=True)
corpus['tag'] = corpus['tag'].str.split('/').str[-1].str.strip()
corpus['storm'] = corpus['tag'].str.split('_').str[0].str.strip()
corpus
###Output
_____no_output_____
###Markdown
Example NHC Discussion Text
###Code
corpus.body[1]
###Output
_____no_output_____
###Markdown
Preprocessing: Pandas Creating a datetime columnDespite claiming to handle timezone letter codes with `%Z`, pandas refused to convert. Replaced with time offset instead.
###Code
# Replace time zone codes with time offsets
corpus = corpus.replace({'date':{'AST':'-0400',
'EST':'-0500',
'EDT':'-0400',
'CST':'-0600',
'CDT':'-0500',
'MST':'-0700',
'MDT':'-0600',
'PST':'-0800',
'PDT':'-0700',
'HST':'-1000',
'GMT':'-0000'}}, regex=True)
corpus['datetime'] = pd.to_datetime(corpus['date'].str.strip(), format="%I%M %p %z %a %b %d %Y", utc=True)
corpus
###Output
_____no_output_____
###Markdown
Lowercase all text, strip extra whitespace and "forecaster" from author column
###Code
# document text
corpus['body'] = corpus['body'].str.strip()
corpus['body'] = corpus['body'].str.lower()
# author info
corpus['author'] = corpus['author'].str.lower()
corpus = corpus.replace({'author':{'forecaster':''}}, regex=True)
corpus['author'] = corpus['author'].str.strip()
# remove weather forecast links
corpus['body'] = corpus['body'].str.replace(r'http\S+|www\.\S+', '', case=False, regex=True)
corpus['body'] = corpus['body'].str.replace('awips header', '')
corpus['body'] = corpus['body'].str.replace('wmo header', '')
###Output
_____no_output_____
###Markdown
Create a column of storm strength at time of writing
###Code
corpus['positions'] = corpus['positions'].str.strip().str.replace(' ', ' ')
corpus['mph'] = corpus['positions'].str.split().str[6]
corpus['category'] = (['TD' if x<=38
else 'TS' if 39<=x<=73
else '1' if 74<=x<=95
else '2' if 96<=x<=110
else '3' if 111<=x<=129
else '4' if 130<=x<=156
else '5'
for x in corpus['mph'].astype('int')])
corpus
###Output
_____no_output_____
###Markdown
Adding a geometry column for storm position
###Code
corpus['lat'] = corpus['positions'].str.split(' ').str[2].str.strip().str.replace('N','')
corpus['lon'] = corpus['positions'].str.split(' ').str[3].str.strip().str.replace('W','')
# manual edits due to odd errors
corpus.at[4460,'lon']=9.3
corpus.at[916,'lon']=8.6
corpus.at[261,'lon']=6.9
corpus.at[210,'lon']=9.5
corpus_geo = geopandas.GeoDataFrame(
corpus, crs = CRS("WGS84"), geometry = geopandas.points_from_xy(corpus.lon, corpus.lat))
corpus
corpus.to_csv('corpus_clean.csv', index=False)
###Output
_____no_output_____
###Markdown
Data Source: https://www.uci.org/mountain-bike/results
###Code
import requests
import json
import os
URL_BASE = 'https://dataride.uci.ch/iframe/'
# All competitions
URL_COMPETITIONS = URL_BASE + 'Competitions/'
# Races in a competition
URL_RACES = URL_BASE + 'Races/'
# Events in a race
URL_EVENTS = URL_BASE + 'Events/'
# Results for event
URL_RESULTS = URL_BASE + 'Results/'
DICIPLINE_ID_MOUNTAIN_BIKE = '7'
RACE_TYPE_ID_DOWNHILL = '19'
RACE_TYPE_ID_ENDURO = '122'
RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC = '92'
SEASON_ID_YEAR_MAP = {
2020: '129',
2019: '128',
2018: '123',
2017: '22',
2016: '12',
2015: '4',
2014: '102',
2013: '103',
2012: '104',
2011: '105',
2010: '106',
2009: '107',
}
COMPETITION_CLASS_CODE_WORLD_CHAMPS = 'CM'
COMPETITION_CLASS_CODE_WORLD_CUP = 'CDM'
COMPETITION_CLASS_CODE_ENDURO_WORLD_SERIES = '3'
CATEGORY_CODE_MEN_ELITE = 'Men Elite'
CATEGORY_CODE_WOMEN_ELITE = 'Women Elite'
RACE_TYPE_CODE_DHI = 'DHI'
RACE_TYPE_CODE_ENDURO = 'END'
RACE_TYPE_CODE_XCO = 'XCO'
RACE_TYPE_ID_TO_CODE_MAP = {
RACE_TYPE_ID_DOWNHILL: RACE_TYPE_CODE_DHI,
RACE_TYPE_ID_ENDURO: RACE_TYPE_CODE_ENDURO,
RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC: RACE_TYPE_CODE_XCO
}
os.mkdir('../data')
def getCompetitions(race_type_id: str, year: int):
request_body = {
"disciplineId": DICIPLINE_ID_MOUNTAIN_BIKE,
"take":"400",
"skip":"0",
"page":"1",
"pageSize":"400",
"sort": [{"field": "StartDate", "dir": "desc"}],
"filter": {
"filters": [
{"field": "RaceTypeId", "value": race_type_id},
{"field": "SeasonId", "value": SEASON_ID_YEAR_MAP[year]}
]
}
}
response = requests.post(URL_COMPETITIONS, json=request_body).json()
if len(response['data']) < response['total']:
print('DID NOT GET ALL COMPETITIONS')
return response['data']
def getRaces(competition_id: str):
request_body = {
"disciplineId": DICIPLINE_ID_MOUNTAIN_BIKE,
"competitionId": competition_id,
"take":"400",
"skip":"0",
"page":"1",
"pageSize":"400"
}
response = requests.post(URL_RACES, json=request_body).json()
if len(response['data']) < response['total']:
print('DID NOT GET ALL RACES')
return response['data']
def getEvents(race_id: str):
request_body = {
"disciplineId": DICIPLINE_ID_MOUNTAIN_BIKE,
"raceId": race_id
}
return requests.post(URL_EVENTS, json=request_body).json()
def getResults(event_id: str):
request_body = {
"disciplineId": DICIPLINE_ID_MOUNTAIN_BIKE,
"eventId": event_id,
"take":"400",
"skip":"0",
"page":"1",
"pageSize":"400"
}
response = requests.post(URL_RESULTS, json=request_body).json()
if len(response['data']) < response['total']:
print('DID NOT GET ALL RESULTS')
return response['data']
###Output
_____no_output_____
###Markdown
For each year, retrieve the competitions for each of our chosen mountain bike disciplines (downhill, enduro and cross country olympic)
###Code
competitions = {
RACE_TYPE_ID_DOWNHILL: {},
RACE_TYPE_ID_ENDURO: {},
RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC: {}
}
for year in SEASON_ID_YEAR_MAP:
competitions[RACE_TYPE_ID_DOWNHILL][year] = getCompetitions(RACE_TYPE_ID_DOWNHILL, year)
competitions[RACE_TYPE_ID_ENDURO][year] = getCompetitions(RACE_TYPE_ID_ENDURO, year)
competitions[RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC][year] = getCompetitions(RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC, year)
competitions_json = json.dumps(competitions)
with open("../data/competitions.json","w") as f:
f.write(competitions_json)
with open('../data/competitions.json') as f:
competitions = json.load(f)
###Output
_____no_output_____
###Markdown
Filter the competitions:For downhill and XCO we filter for only world cup and world championship competitions.For enduro we filter for EWS competitions.
###Code
filtered_competitions = {
RACE_TYPE_ID_DOWNHILL: {},
RACE_TYPE_ID_ENDURO: {},
RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC: {}
}
for race_type in filtered_competitions:
for year in competitions[race_type]:
filtered_competitions[race_type][year] = []
for competition in competitions[race_type][year]:
if race_type in [RACE_TYPE_ID_DOWNHILL, RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC]:
if competition['ClassCode'] in [COMPETITION_CLASS_CODE_WORLD_CHAMPS, COMPETITION_CLASS_CODE_WORLD_CUP]:
filtered_competitions[race_type][year].append(competition)
if race_type == RACE_TYPE_ID_ENDURO:
if competition['ClassCode'] == COMPETITION_CLASS_CODE_ENDURO_WORLD_SERIES:
filtered_competitions[RACE_TYPE_ID_ENDURO][year].append(competition)
###Output
_____no_output_____
###Markdown
For each filtered competition, fetch the races
###Code
for race_type in filtered_competitions:
for year in filtered_competitions[race_type]:
for competition in filtered_competitions[race_type][year]:
races = getRaces(competition['CompetitionId'])
competition['races'] = races
competitions_with_races_json = json.dumps(filtered_competitions)
with open("../data/competitions_with_races.json","w") as f:
f.write(competitions_with_races_json)
with open('../data/competitions_with_races.json') as f:
competitions_with_races = json.load(f)
###Output
_____no_output_____
###Markdown
For each race (excluding qualifying), fetch the events
###Code
for race_type in competitions_with_races:
for year in competitions_with_races[race_type]:
for competition in competitions_with_races[race_type][year]:
for race in competition['races']:
race['events'] = {}
for category_code in [CATEGORY_CODE_MEN_ELITE, CATEGORY_CODE_WOMEN_ELITE]:
if race['CategoryCode'] == category_code and race['RaceTypeCode'] == RACE_TYPE_ID_TO_CODE_MAP[race_type]:
if (race_type == RACE_TYPE_ID_DOWNHILL and 'qualifying' not in race['RaceName'].lower()) or race_type == RACE_TYPE_ID_CROSS_COUNTRY_OLYMPIC or race_type == RACE_TYPE_ID_ENDURO:
events = getEvents(race['Id'])
if (len(events) > 1):
print('MORE THAN ONE EVENT')
race['events'][category_code] = events[0]
competitions_with_races_and_events_json = json.dumps(competitions_with_races)
with open("../data/competitions_with_races_and_events.json","w") as f:
f.write(competitions_with_races_and_events_json)
with open('../data/competitions_with_races_and_events.json') as f:
competitions_with_races_and_events = json.load(f)
###Output
_____no_output_____
###Markdown
For each event, fetch the results
###Code
for race_type in competitions_with_races_and_events:
for year in competitions_with_races_and_events[race_type]:
for competition in competitions_with_races_and_events[race_type][year]:
for race in competition['races']:
for category_code in [CATEGORY_CODE_MEN_ELITE, CATEGORY_CODE_WOMEN_ELITE]:
if category_code in race['events']:
event_id = race['events'][category_code]['EventId']
results = getResults(event_id)
race['events'][category_code]['results'] = results
competitions_with_races_and_events_and_results_json = json.dumps(competitions_with_races_and_events)
with open("../data/competitions_with_races_and_events_and_results.json","w") as f:
f.write(competitions_with_races_and_events_and_results_json)
###Output
_____no_output_____
###Markdown
Data CollectionThis notebook illustrates the collection of the 'Statcast_data.csv' data file. It details the code built on the pybaseball library, in addition to metadata about the data itself. The Statcast data is collected thanks in part to James LeDoux and company's Python library pybaseball. The link to the official GitHub page is here: https://github.com/jldbc/pybaseball. This package scrapes Baseball Reference, Baseball Savant, and FanGraphs, all websites that house statistical and baseball-related information. Specifically for this notebook, the package retrieves Statcast data (detailed in the Proposal document) at the individual pitch level. The data will be collected on the following terms:
1. Identify the classes in our target suitable for overall analysis. In Statcast terms, the classes will be "called_strike", "ball", and "blocked_ball".
2. Order pitchers by the number of pitches thrown in the 2018 regular season. That is done below in the pitchers list object.
3. To get an even sample of pitches from each pitcher and a variety of pitchers, select the top 400 pitchers in our ordering and collect 350 pitches each. This is chosen because our 400th-ranked pitcher, Ken Giles, threw 351 pitches last year. Thus, to ensure an even amount between all pitchers, each pitcher will have 350 pitches in the final dataset. The data will be collected from the entire 2018 regular season, which started on March 29 and ended on September 30.
4. Select appropriate features that can only be measured during the duration of a pitch. The duration, or timeline, of a pitch is defined as the moment the pitcher releases the baseball from his hand to the moment the catcher receives the ball. Thus, features about hitting the ball, or any information after a pitch has been thrown, are excluded. The only such feature considered will be the target, which is the result of the pitch.

Logical executionThe logic of the data collection is based on the pybaseball functionality:
1. Grab a unique identification label for each pitcher, to be used in collecting his respective data.
2. Pull the data from Statcast through pybaseball, based on that unique identification, resulting in a pandas dataframe. This dataframe will be a random sample of 350 pitches thrown in the 2018 regular season by the particular pitcher.
3. Instantiate a dataframe by performing step 2 above. Then loop through all of the pitchers and append their respective data to the instantiated dataframe. This will result in our final dataframe. For reference, the last pitcher will be Ken Giles.
4. Save that dataframe as a csv file for future use.

(Note from the author: The logic is not necessarily elegant, but it gets the job done. However, there are some hiccups. Due to random minor bugs and errors that crept up while looping through the pitcher names, not all 400 pitchers ended up in the dataframe. If a particular pitcher could have disrupted the loop, that pitcher was simply bypassed. This execution resulted in 368 pitchers in the final dataframe, still an ample amount.)

Let's begin the process now.
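As a concrete illustration of steps 1 and 2 for a single pitcher (a sketch using one example name; the loop further below does this for every pitcher in the list):

```python
from pybaseball import playerid_lookup, statcast_pitcher

# step 1: look up the unique MLBAM id (last name first)
player = playerid_lookup('scherzer', 'max')
player_id = player['key_mlbam'].iloc[0]

# step 2: every pitch he threw in the 2018 regular season, as a pandas dataframe
pitches = statcast_pitcher('2018-03-29', '2018-09-30', player_id=player_id)
print(pitches.shape)
```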
###Code
#import dependencies
import pybaseball
import pandas as pd
import numpy as np
from pybaseball import statcast_pitcher
from pybaseball import playerid_lookup
import pathlib
PITCHER_NAMES = pathlib.Path.cwd().parent / 'references' / 'pitcher_names.txt'
DATA_FOLDER = pathlib.Path.cwd().parent / 'data'
#set up a few constants
#number of pitches
SAMPLE_SIZE = 350
#classes of the target variable
TARGET_CLASSES = ['ball', 'called_strike', 'blocked_ball']
#resulting features we want
FEATURES_TO_KEEP = ['player_name', 'p_throws', 'pitch_name', 'release_speed','release_spin_rate',
'release_pos_x', 'release_pos_y',
'release_pos_z', 'pfx_x', 'pfx_z', 'vx0','vy0', 'vz0',
'ax', 'ay', 'az', 'sz_top', 'sz_bot',
'release_extension','description']
PITCHER_NAMES.parents[1]
def read_pitchers(file):
'''
# read in pitcher_names.txt file,
# split the file into list of list,
# where each individual list has two elements, the first and last names, respectively
'''
with open(file) as f:
names = f.read().split(',')
for name in names:
if '\n' in name:
names = [name.replace('\n', '') for name in names]
split_names = [name.split(' ') for name in names]
print(f' Number of Pitchers: {len(names)}')
return split_names
###Output
_____no_output_____
###Markdown
Using pybaseball Now begin the execution of the loop. This goes through steps 1-4 in the logical execution portion above. We'll use a few constraints:- collect 350 pitches from each pitcher so that there is balance between pitchers- draw only from the top 400 pitchers (ranked by pitches thrown) so that there is still a variety of pitchers
###Code
names_temp = read_pitchers(PITCHER_NAMES)
long_names = []
for p in names_temp:
if len(p) >= 3:
#print(p)
long_names.append(p)
lance = long_names[0]
fname, lname = lance[0], " ".join(lance[1:])
print(fname)
print(lname)
def collect_statcast(sample_size, target, features, pitcher_names):
"""TODO"""
#loop through all the names
pitchers = pd.DataFrame(columns = features)
for fname, lname in pitcher_names[:2]:
#grap the unique identifier of the pitcher
player = playerid_lookup(lname, fname)
#to avoid any possible errors, execute following try statement:
# grab the unique identifier value
# get all available data in time frame
# filter data to only have appropriate targets, defined above
# append particular pitcher to 'master' dataframe
#if any of these steps fail, particularly the grabbing of 'ID'
#pass on to next pitcher
try:
ID = player['key_mlbam'].iloc[player['key_mlbam'].argmax()]
df = statcast_pitcher('2018-03-29', '2018-09-30', player_id = ID)
df = df[df['description'].isin(target)].sample(sample_size, random_state=2019)
data = df[features]
pitchers = pitchers.append(data, ignore_index=True)
except ValueError:
pass
return pitchers
def convert_to_csv(data):
'''
Write the collected pitch data to data/raw/Statcast_data.csv
'''
data.to_csv(DATA_FOLDER / 'raw' / 'Statcast_data.csv')
#convert_to_csv(pitchers)
def main():
names = read_pitchers(PITCHER_NAMES)
pitchers = collect_statcast(SAMPLE_SIZE, TARGET_CLASSES, FEATURES_TO_KEEP, names)
convert_to_csv(pitchers)
#main()
###Output
_____no_output_____ |
.ipynb_checkpoints/Homework_3-checkpoint.ipynb | ###Markdown
Homework assignment 3These problem sets focus on using the Beautiful Soup library to scrape web pages. Problem Set 1: Basic scrapingI've made a web page for you to scrape. It's available [here](http://static.decontextualize.com/widgets2016.html). The page concerns the catalog of a famous [widget](http://en.wikipedia.org/wiki/Widget) company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called `html_str` that contains the HTML source code of the page, and a variable `document` that stores a Beautiful Soup object.
###Code
from bs4 import BeautifulSoup
from urllib.request import urlopen
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")
###Output
_____no_output_____
###Markdown
Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of `<h3>` tags contained in `widgets2016.html`.
###Code
h3_tag = document.find_all('h3')
for item in h3_tag:
print(item.string)
###Output
Forensic Widgets
Wondrous widgets
Mood widgets
Hallowed widgets
###Markdown
Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the "Widget Catalog" header.
###Code
telephone = document.find('a',{'class':'tel'})
for item in telephone:
print (item.string)
###Output
212-555-9912
###Markdown
In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, `widget_names` should evaluate to a list that looks like this (though not necessarily in this order):```Skinner WidgetWidget For FurtivenessWidget For StrawmanJittery WidgetSilver WidgetDivided WidgetManicurist WidgetInfinite WidgetYellow-Tipped WidgetUnshakable WidgetSelf-Knowledge WidgetWidget For Cinema```
###Code
widget_names = document.find_all('td',{'class':'wname'})
for item in widget_names:
print (item.string)
###Output
Skinner Widget
Widget For Furtiveness
Widget For Strawman
Jittery Widget
Silver Widget
Divided Widget
Manicurist Widget
Infinite Widget
Yellow-Tipped Widget
Unshakable Widget
Self-Knowledge Widget
Widget For Cinema
###Markdown
Problem set 2: Widget dictionariesFor this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called `widgets`. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be `partno`, `wname`, `price`, and `quantity`, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:```[{'partno': 'C1-9476', 'price': '$2.70', 'quantity': u'512', 'wname': 'Skinner Widget'}, {'partno': 'JDJ-32/V', 'price': '$9.36', 'quantity': '967', 'wname': u'Widget For Furtiveness'}, ...several items omitted... {'partno': '5B-941/F', 'price': '$13.26', 'quantity': '919', 'wname': 'Widget For Cinema'}]```And this expression: widgets[5]['partno'] ... should evaluate to: LH-74/O
###Code
widgets = []
# your code here
winfo = document.find_all('tr')
for item in winfo:
partno = item.find('td',{'class': 'partno'})
price = item.find('td',{'class': 'price'})
quantity = item.find('td',{'class': 'quantity'})
wname = item.find('td',{'class': 'wname'})
widget_map={}
widget_map['partno']=partno.string
widget_map['price']=price.string
widget_map['quantity']=quantity.string
widget_map['wname']=wname.string
widgets.append(widget_map)
# end your code
widgets
###Output
_____no_output_____
###Markdown
In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for `price` and `quantity` in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this: [{'partno': 'C1-9476', 'price': 2.7, 'quantity': 512, 'widgetname': 'Skinner Widget'}, {'partno': 'JDJ-32/V', 'price': 9.36, 'quantity': 967, 'widgetname': 'Widget For Furtiveness'}, ... some items omitted ... {'partno': '5B-941/F', 'price': 13.26, 'quantity': 919, 'widgetname': 'Widget For Cinema'}](Hint: Use the `float()` and `int()` functions. You may need to use string slices to convert the `price` field to a floating-point number.)
###Code
widgets = []
# your code here
winfo = document.find_all('tr')
for item in winfo:
partno = item.find('td',{'class': 'partno'})
price = item.find('td',{'class': 'price'})
quantity = item.find('td',{'class': 'quantity'})
wname = item.find('td',{'class': 'wname'})
widget_map={}
widget_map['partno']=partno.string
widget_map['price']=float(price.string[1:])
widget_map['quantity']=int(quantity.string)
widget_map['wname']=wname.string
widgets.append(widget_map)
# end your code
widgets
###Output
_____no_output_____
###Markdown
Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the `widgets` list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.Expected output: `7928`
###Code
sum_quantity = 0
for quantity in widgets:
sum_quantity = sum_quantity + quantity['quantity']
print (sum_quantity)
###Output
7928
###Markdown
In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.Expected output:```Widget For FurtivenessJittery WidgetSilver WidgetInfinite WidgetWidget For Cinema```
###Code
for item in widgets:
if item['price'] > 9.30:
print (item['wname'])
###Output
Widget For Furtiveness
Jittery Widget
Silver Widget
Infinite Widget
Widget For Cinema
###Markdown
Problem set 3: Sibling rivalriesIn the following problem set, you will yet again be working with the data in `widgets2016.html`. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's `.find_next_sibling()` method. Here's some information about that method, cribbed from the notes:Often, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using `.find()` and `.find_all()`, and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called `example_html`):
###Code
example_html = """
<h2>Camembert</h2>
<p>A soft cheese made in the Camembert region of France.</p>
<h2>Cheddar</h2>
<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>
"""
###Output
_____no_output_____
###Markdown
If our task was to create a dictionary that maps the name of the cheese to the description that follows in the `<p>` tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a `.find_next_sibling()` method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:
###Code
example_doc = BeautifulSoup(example_html, "html.parser")
cheese_dict = {}
for h2_tag in example_doc.find_all('h2'):
cheese_name = h2_tag.string
cheese_desc_tag = h2_tag.find_next_sibling('p')
cheese_dict[cheese_name] = cheese_desc_tag.string
cheese_dict
###Output
_____no_output_____
###Markdown
With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the `.find_next_sibling()` method, to print the part numbers of the widgets that are in the table *just beneath* the header "Hallowed Widgets."Expected output:```MZ-556/BQV-730T1-97315B-941/F```
###Code
# do it again
for h3_tag in document.find_all('h3'):
if h3_tag.string == 'Hallowed widgets':
hallowed_table = h3_tag.find_next_sibling('table')
for partno_tag in hallowed_table.find_all('td', {'class': 'partno'}):
print(partno_tag.string)
###Output
_____no_output_____
###Markdown
Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!In the cell below, I've created a variable `category_counts` and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are "categories" of widgets (e.g., the contents of the `<h3>` tags on the page: "Forensic Widgets", "Mood widgets", "Hallowed Widgets") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary `category_counts` should look like this:```{'Forensic Widgets': 3, 'Hallowed widgets': 4, 'Mood widgets': 2, 'Wondrous widgets': 3}```
###Code
category_counts = {}
# your code here
for h3_tag in document.find_all('h3'):
category_table = h3_tag.find_next_sibling('table')
category_counts[h3_tag.string] = len(category_table.find_all('td', {'class': 'wname'}))
# end your code
category_counts
###Output
_____no_output_____ |
uitwerkingen/Schema-uitwerkingen.ipynb | ###Markdown
Schema: schemas in MongoDB Validator: a (partial) schema for collection documentsEvery document in a MongoDB collection has its own *structure*: field names and their associated values (types). This great freedom is not convenient when you want to be able to query a collection: for that you need to know which names and values the various documents use. This works better if the documents share a certain (minimal) common structure. With a *validator* you can describe a *minimal* structure for the documents in a collection. MongoDB uses this validator when a document is added or modified. If the document does not satisfy the validator's description, it is not added. You can specify the validator when the collection is created. You can also add it later, with the db command `collMod` for modifying the properties of a collection. SchemaWe call the structure of the documents in a collection a *schema*. In a MongoDB collection schema you decide yourself which part of the structure is fixed, and where documents may differ.> In a SQL database the (physical) schema describes the *complete structure* of the database: the tables, and the structure of each table (names and types of the columns). All rows (records) in a table have the same structure. Initializations
###Code
import os
import re
import pandas as pd
import numpy as np
from IPython.core.display import display, HTML
import pymongo
os.environ["PATH"]=os.environ["PATH"] + ":/usr/local/bin"
pd.set_option('max_colwidth',160)
userline = !echo $USER
username = userline[0]
dbname = username + "-demodb"
print("Database name: " + dbname)
from pymongo import MongoClient
print('Mongo version', pymongo.__version__)
client = MongoClient('localhost', 27017)
db = client[dbname]
contacts = db.contacts
contacts.drop()
os.system('mongoimport -d ' + dbname + ' -c contacts adressen.json')
###Output
Database name: eelco-demodb
Mongo version 3.11.0
###Markdown
Counterexample: inserting an unrelated documentMongoDB collections initially have no structure (schema). This means that you can insert arbitrary documents, as we demonstrate here:
###Code
contacts.insert_one({"kleur": "groen", "prijs": 400, "beschrijving": "fiets"})
list(contacts.find())
###Output
_____no_output_____
###Markdown
This is of course not the intention. If the database is tightly coupled to a single application, this will not happen that quickly. But a database is often used by several applications: you then want to prevent such problems. Moreover, you want to know which fields (properties) can be used in the documents of a particular collection, such as `contacts` or `agenda`. Searching for documents by *type*In the following query we use not the value of a field, but its type. This will come in handy later when defining a schema (validator).
###Code
list(contacts.find({"kleur": {"$type": "string"}}))
###Output
_____no_output_____
###Markdown
Validating documentsUsing a *validator*, MongoDB checks, when a document is added to or modified in a collection, whether that document satisfies the rules of that collection. You can view a validator as a query expression with which you filter all "valid" documents in the database. We can set the validator of a collection with the db command `collMod`. Defining the validatorAs a minimal requirement for the documents in the `contacts` collection, we demand that there is at least a `name` field (property), and an `email` or a `tel` field. We describe this with the following schema:
###Code
contact_schema = {"name": {"$type": "string"},
"$or": [{"email": {"$type": "string"}},
{"tel": {"$type": "string"}}
]}
###Output
_____no_output_____
###Markdown
We test this schema by searching for the documents that do satisfy it:
###Code
list(contacts.find(contact_schema))
###Output
_____no_output_____
###Markdown
Finding non-valid documentsNext we check which documents do *not* satisfy the validator query. For this we use the `$nor` operator with a list of sub-queries; in our case the list is only 1 long. (There is no top-level `$not`, but this way works too.)
###Code
list(contacts.find({"$nor":[contact_schema]}))
###Output
_____no_output_____
###Markdown
Adding the validator to the collectionWe add this schema as the *validator* schema for the collection `contacts`.> You can define the validator when the collection is initialized, but you can also change it afterwards, as we do here.
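For reference, the validator can also be attached when a collection is first created (a sketch with a throwaway collection name; this notebook uses `collMod` below instead):

```python
# sketch: pass the validator at creation time instead of via collMod
db.create_collection("contacts_validated", validator=contact_schema)
```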
###Code
db.command("collMod", "contacts", validator=contact_schema)
###Output
_____no_output_____
###Markdown
Example: inserting a valid documentInserting a document that satisfies these rules:
###Code
contacts.insert_one({"name": "Henk de Vries", "tel": "06 3333 8765"})
###Output
_____no_output_____
###Markdown
Example: inserting a non-valid documentInserting a document that does *not* satisfy these rules (because of a wrong choice for the "name" field).> This produces an error message; later we give a way to handle this more conveniently in a program.
###Code
contacts.insert_one({"naam": "Anne de Boer", "tel": "06 1234 8855"})
###Output
_____no_output_____
###Markdown
It is more convenient to catch such errors in the program itself. Python offers this possibility with the exception mechanism, see the example below:
###Code
try:
contacts.insert_one({"naam": "Anne de Boer", "tel": "06 1234 8855"})
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
###Output
Document not inserted: Document failed validation, full error: {'index': 0, 'code': 121, 'errmsg': 'Document failed validation'}
###Markdown
Finding non-valid documentsIf you change the validator afterwards, the collection can still contain documents that do not satisfy this new validator. You have to check this yourself, and adjust the data if necessary.> It is wise to do this after every change of a schema, otherwise you run the risk of an error message on an `update` of a non-valid document.
###Code
list(contacts.find({"$nor": [contact_schema]}))
###Output
_____no_output_____
###Markdown
Exercise* define the schema `contact_schema` so that a document contains, besides the name and a telephone number or an email address, *also* a physical address. This physical address has (at least) the property `city`.> tip: to search for a field `b` that is part of a field `a`, use the notation `"a.b": ...`.
###Code
contact_schema = {"name": {"$type": "string"},
"$or": [{"email": {"$type": "string"}},
{"tel": {"$type": "string"}}
],
"address.city": {"$type": "string"}
}
###Output
_____no_output_____
###Markdown
* find all documents that do *not* satisfy this schema.
###Code
list(contacts.find({"$nor": [contact_schema]}))
list(contacts.find(contact_schema))
###Output
_____no_output_____
###Markdown
* (re)define the collection validator with this new schema.
###Code
db.command("collMod", "contacts", validator=contact_schema)
###Output
_____no_output_____
###Markdown
Demonstrate that the schema works correctly when adding the following document.
> Check for yourself whether this document satisfies the schema. What result do you expect?
> If necessary, adjust the document and run the cell again.
###Code
person = {"name": "Henk de Vries",
"email": "[email protected]",
"address": {"straat": "Kastanjelaan 31", "plaats": "Almere"}}
try:
contacts.insert_one(person)
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
###Output
Document not inserted: Document failed validation, full error: {'index': 0, 'code': 121, 'errmsg': 'Document failed validation'}
###Markdown
Exercise
We only want to allow addresses with (at least) `street`, `city` and `postcode`.
* Redefine `contact_schema` so that all these fields are included in it as strings.
> *Note*: with regular expressions you can describe even more precisely what a postcode may look like, but we leave that out of scope here; `string` is sufficient.
Redefine the validator with this new schema.
###Code
contact_schema = {"name": {"$type": "string"},
"$or": [{"email": {"$type": "string"}},
{"tel": {"$type": "string"}}
],
"address.street": {"$type": "string"},
"address.city": {"$type": "string"},
"address.postcode": {"$type": "string"}
}
###Output
_____no_output_____
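###Markdown
As an aside to the note above: if you did want to constrain the postcode format, the query notation also accepts a `$regex` condition next to `$type`. A sketch; the pattern for a Dutch postcode is an assumption for illustration only:
###Code
# illustrative only: the same schema, but with a regex pattern added for the postcode field
contact_schema_strict = dict(contact_schema)
contact_schema_strict["address.postcode"] = {"$type": "string", "$regex": "^[0-9]{4} ?[A-Z]{2}$"}
list(contacts.find({"$nor": [contact_schema_strict]}))
###Output
_____no_output_____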
###Markdown
* update the validator of `contacts`
###Code
db.command("collMod", "contacts", validator=contact_schema)
###Output
_____no_output_____
###Markdown
* Give an example of an insert of a document that satisfies this validator.
###Code
person = {"name": "Margreet Braaksma",
"email": "[email protected]",
"address": {"street": "Planetenstraat 42", "city": "Zierikzee","postcode": "1023 AB"}}
try:
contacts.insert_one(person)
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
###Output
insert OK
###Markdown
* Give an example of an insert of a document that does *not* satisfy this validator.
###Code
person = {"name": "Margreet Braaksma",
"email": "[email protected]",
"address": {"street": "Planetenstraat 42", "city": "Zierikzee"}}
try:
contacts.insert_one(person)
except pymongo.errors.WriteError as s:
print("Document not inserted: " + str(s))
else:
print("insert OK")
###Output
Document not inserted: Document failed validation, full error: {'index': 0, 'code': 121, 'errmsg': 'Document failed validation'}
###Markdown
Remarks
* for describing a validator schema, MongoDB offers two possibilities:
  * the original MongoDB query notation, as used above;
  * JSON Schema, an (internet/IETF) draft standard for JSON documents (see https://json-schema.org, https://json-schema.org/latest/json-schema-core.html, and https://json-schema.org/understanding-json-schema/index.html).
JSON schema
Below we give, without further comments, the original validation schema, with `name` as a required field and with the choice between `tel` and `email` as required fields.
JSON Schema is nowadays the preferred notation for validation in MongoDB. With it you can specify quite precisely what documents must look like, including the structure of optional parts. You can also use JSON Schema in normal query commands (`find`).
> `anyOf` stands for "or", with a list of alternatives.
###Code
schema = {"type": "object",
"required": ["name"],
"properties": {
"name": {"type": "string"}
},
"anyOf": [
{"properties": {"email": {"anyOf": [{"type": "string"},
{"type": "array",
"items": {"type": "string"}}
]}},
"required": ["email"]},
{"properties": {"tel": {"type": "string"}},
"required": ["tel"]}
]
}
list(contacts.find({"$jsonSchema": schema}))
list(contacts.find({"$jsonSchema": {"not": schema}}))
###Output
_____no_output_____
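###Markdown
To actually use this notation for validation (rather than only in `find`), the JSON schema is wrapped in `$jsonSchema` when setting the validator, analogous to the `collMod` call used earlier; a sketch:
###Code
db.command("collMod", "contacts", validator={"$jsonSchema": schema})
###Output
_____no_output_____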
###Markdown
---(End of this Jupyter notebook.)
###Code
list(db.list_collections())  # pymongo equivalent of the mongo-shell call db.getCollectionInfos()
###Output
_____no_output_____ |
Vit and Mixer/tpu_vit_mlp_mixer.ipynb | ###Markdown
Pre-processing
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
!pip install einops
!pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.9-cp37-cp37m-linux_x86_64.whl
!pip install timm
!pip install pydicom
!unzip ./drive/MyDrive/torch_project/medi/mlpmixer/chexnet/rsna-pneumonia-detection-challenge.zip
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")
import pydicom
import os
from os import listdir
from os.path import isfile, join
import glob, pylab
from torch import nn
from torch import Tensor
from PIL import Image
import torchvision.transforms as transforms
from torchvision.transforms import Compose, Resize, ToTensor
from torchvision import datasets, transforms, models
import torchvision
from einops import rearrange, reduce, repeat
from einops.layers.torch import Rearrange, Reduce
from torchsummary import summary
import torch
import torch.nn.functional as F
import torch_xla
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
import torch_xla.distributed.parallel_loader as pl
import timm
import gc
import time
import random
from datetime import datetime
from tqdm.notebook import tqdm
from sklearn import model_selection, metrics
from sklearn.metrics import f1_score
# Image examples
train_images_dir = './stage_2_train_images/'
train_images = [f for f in listdir(train_images_dir) if isfile(join(train_images_dir, f))]
test_images_dir = './stage_2_test_images/'
test_images = [f for f in listdir(test_images_dir) if isfile(join(test_images_dir, f))]
print('5 Training images', train_images[:5]) # Print the first 5
print('Number of train images:', len(train_images))
print('Number of test images:', len(test_images))
train_labels = pd.read_csv('./stage_2_train_labels.csv')
train_labels.head()
# Number of positive targets
print(round((8964 / (8964 + 20025)) * 100, 2), '% of the examples are positive')
pd.DataFrame(train_labels.groupby('Target')['patientId'].count())
# Distribution of Target in Training Set
plt.style.use('ggplot')
plot = train_labels.groupby('Target') \
.count()['patientId'] \
.plot(kind='bar', figsize=(10,4), rot=0)
# XLA/TPU settings: train in bfloat16 and cap the tensor allocator size
os.environ["XLA_USE_BF16"] = "1"
os.environ["XLA_TENSOR_ALLOCATOR_MAXSIZE"] = "100000000"
def seed_everything(seed):
"""
Seeds basic parameters for reproductibility of results
Arguments:
seed {int} -- Number of the seed
"""
random.seed(seed)
os.environ["PYTHONHASHSEED"] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Pneumonia opacities can occur in several regions, so patientId has duplicates; keep one row per patient
temp = train_labels.drop_duplicates(['patientId'])
# Script to prepare combined dataset
# Class 0: Normal
# Class 1: Pneumonia
seed_everything(1)
DATA_PATH = './drive/My Drive/torch_project/medi/mlpmixer/chexnet/chest_jpg/'
df_val = temp.sample(n=int(temp.shape[0]*0.3))
df_train=temp.drop(index=df_val.index)
df_test = df_val.sample(n=int(temp.shape[0]*0.1))
df_val=df_val.drop(index=df_test.index)
df_train.shape, df_val.shape, df_test.shape
# model specific global variables
IMG_SIZE = 224
BATCH_SIZE = 16
LR = 2e-05
GAMMA = 0.7
N_EPOCHS = 10
DATA_DIR="./drive/My Drive/torch_project/medi/mlpmixer"
VIT_PATH = (
"./drive/My Drive/torch_project/medi/mlpmixer/jx_vit_base_p16_224-80ecf9dd.pth"
)
MLP_PATH = "./drive/My Drive/torch_project/medi/mlpmixer/jx_mixer_b16_224-76587d61.pth"
class pneumonia_dataset(torch.utils.data.Dataset):
"""
Helper Class to create the pytorch dataset
"""
def __init__(self, df, data_path=DATA_PATH, transforms=None):
super().__init__()
self.df_data = df.values
self.data_path = data_path
self.transforms = transforms
#self.mode = mode
#self.data_dir = "train" if mode == "train" else "val"
def __len__(self):
return len(self.df_data)
def __getitem__(self, index):
img_name, _, _, _, _, label = self.df_data[index]
img_path = os.path.join(self.data_path, img_name+'.jpg')
img = Image.open(img_path).convert("RGB")
if self.transforms is not None:
image = self.transforms(img)
return image, label
# create image augmentations
transforms_train = transforms.Compose(
[
transforms.Resize((IMG_SIZE, IMG_SIZE)),
transforms.RandomHorizontalFlip(p=0.3),
transforms.RandomVerticalFlip(p=0.3),
transforms.RandomResizedCrop(IMG_SIZE),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
]
)
transforms_val = transforms.Compose(
[
transforms.Resize((IMG_SIZE, IMG_SIZE)),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
]
)
###Output
_____no_output_____
###Markdown
ViT Pre-trained
###Code
class ViT_MLP_Base16(nn.Module):
def __init__(self, n_classes, pretrained=False, vit=True):
super(ViT_MLP_Base16, self).__init__()
if vit :
self.model = timm.create_model("vit_base_patch16_224", pretrained=False, in_chans=3)
else :
self.model = timm.create_model("gmixer_24_224", pretrained=False, in_chans=3)
# self.model.norm = nn.LayerNorm((768,), eps=1e-5, elementwise_affine=True)
if pretrained:
if vit :
self.model.load_state_dict(torch.load(VIT_PATH))
else :
self.model.load_state_dict(torch.load('./gmixer_24_224_raa-7daf7ae6.pth'))
self.model.head = nn.Linear(self.model.head.in_features, n_classes)
def forward(self, x):
x = self.model(x)
return x
def train_one_epoch(self, train_loader, criterion, optimizer, device):
# keep track of training loss
epoch_loss = 0.0
epoch_accuracy = 0.0
###################
# train the model #
###################
self.model.train()
for i, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if device.type == "cuda":
data, target = data.cuda(), target.cuda()
elif device.type == "xla":
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = self.forward(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# Calculate Accuracy
accuracy = (output.argmax(dim=1) == target).float().mean()
# update training loss and accuracy
epoch_loss += loss
epoch_accuracy += accuracy
# perform a single optimization step (parameter update)
if device.type == "xla":
xm.optimizer_step(optimizer)
if i % 20 == 0:
xm.master_print(f"\tBATCH {i+1}/{len(train_loader)} - LOSS: {loss}")
else:
optimizer.step()
return epoch_loss / len(train_loader), epoch_accuracy / len(train_loader)
def validate_one_epoch(self, valid_loader, criterion, device):
# keep track of validation loss
valid_loss = 0.0
valid_accuracy = 0.0
valid_f1 = 0.0
######################
# validate the model #
######################
self.model.eval()
for data, target in valid_loader:
# move tensors to GPU if CUDA is available
if device.type == "cuda":
data, target = data.cuda(), target.cuda()
elif device.type == "xla":
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
with torch.no_grad():
# forward pass: compute predicted outputs by passing inputs to the model
output = self.model(data)
# calculate the batch loss
loss = criterion(output, target)
# Calculate Accuracy
accuracy = (output.argmax(dim=1) == target).float().mean()
# update average validation loss and accuracy
valid_loss += loss
valid_accuracy += accuracy
valid_f1 += f1_score(output.argmax(dim=1).cpu().numpy(), target.cpu().numpy(), average='macro')
return valid_loss / len(valid_loader), valid_accuracy / len(valid_loader), valid_f1/ len(valid_loader)
def fit_tpu(
model, epochs, device, criterion, optimizer, train_loader, valid_loader=None
):
valid_loss_min = np.Inf # track change in validation loss
# keeping track of losses as it happen
train_losses = []
valid_losses = []
train_accs = []
valid_accs = []
valid_f1s = []
for epoch in range(1, epochs + 1):
gc.collect()
para_train_loader = pl.ParallelLoader(train_loader, [device])
xm.master_print(f"{'='*50}")
xm.master_print(f"EPOCH {epoch} - TRAINING...")
train_loss, train_acc = model.train_one_epoch(
para_train_loader.per_device_loader(device), criterion, optimizer, device
)
xm.master_print(
f"\n\t[TRAIN] EPOCH {epoch} - LOSS: {train_loss}, ACCURACY: {train_acc}\n"
)
train_losses.append(train_loss)
train_accs.append(train_acc)
gc.collect()
if valid_loader is not None:
gc.collect()
para_valid_loader = pl.ParallelLoader(valid_loader, [device])
xm.master_print(f"EPOCH {epoch} - VALIDATING...")
valid_loss, valid_acc, valid_f1 = model.validate_one_epoch(
para_valid_loader.per_device_loader(device), criterion, device
)
xm.master_print(f"\t[VALID] LOSS: {valid_loss}, ACCURACY: {valid_acc}, F1: {valid_f1}\n")
valid_losses.append(valid_loss)
valid_accs.append(valid_acc)
            valid_f1s.append(valid_f1)
gc.collect()
# save model if validation loss has decreased
if valid_loss <= valid_loss_min and epoch != 1:
xm.master_print(
"Validation loss decreased ({:.4f} --> {:.4f}). Saving model ...".format(
valid_loss_min, valid_loss
)
)
#xm.save(model.state_dict(), f'{DATA_DIR}/checkpoint/best_model.pth')
valid_loss_min = valid_loss
    return {
        "train_loss": train_losses,
        "valid_losses": valid_losses,
        "train_acc": train_accs,
        "valid_acc": valid_accs,
        "valid_f1": valid_f1s
    }
model = ViT_MLP_Base16(n_classes=2, pretrained=True, vit=True)  # the constructor takes a vit flag, not mode
def _run():
train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True,
)
valid_sampler = torch.utils.data.distributed.DistributedSampler(
valid_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
drop_last=True,
num_workers=8,
)
valid_loader = torch.utils.data.DataLoader(
dataset=valid_dataset,
batch_size=BATCH_SIZE,
sampler=valid_sampler,
drop_last=True,
num_workers=8,
)
criterion = nn.CrossEntropyLoss()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = xm.xla_device()
model.to(device)
lr = LR * xm.xrt_world_size()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
start_time = datetime.now()
xm.master_print(f"Start Time: {start_time}")
logs = fit_tpu(
model=model,
epochs=N_EPOCHS,
device=device,
criterion=criterion,
optimizer=optimizer,
train_loader=train_loader,
valid_loader=valid_loader,
)
xm.master_print(f"Execution time: {datetime.now() - start_time}")
xm.master_print("Saving Model")
xm.save(
model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
)
# Start training processes
def _mp_fn(rank, flags):
torch.set_default_tensor_type("torch.FloatTensor")
    a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# load model
vit_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/vit_checkpoint/vit_best.pth'
vit_checkpoint = torch.load(vit_checkpoint_path)
model.load_state_dict(vit_checkpoint)
def predict_scores(model, raw_data, device) :
model.to(device)
model.eval()
test_dataset = pneumonia_dataset(raw_data, transforms=transforms_val)
test_sampler = torch.utils.data.distributed.DistributedSampler(
test_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
test_loader = torch.utils.data.DataLoader(
dataset=test_dataset,
batch_size=BATCH_SIZE,
sampler=test_sampler,
drop_last=True,
num_workers=8,
)
para_test_loader = pl.ParallelLoader(test_loader, [device])
test_accuracy = 0.0
test_f1 = 0.0
for data, target in para_test_loader.per_device_loader(device):
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
with torch.no_grad():
output = model(data)
accuracy = (output.argmax(dim=1) == target).float().mean()
test_accuracy += accuracy
test_f1 += f1_score(output.argmax(dim=1).cpu().numpy(), target.cpu().numpy(), average='macro')
return test_accuracy / len(test_loader), test_f1/ len(test_loader)
device = xm.xla_device()
predict_scores(model, df_test, device)
###Output
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
cpuset_checked))
###Markdown
MLP Mixer Pre-trained
###Code
!wget https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/gmixer_24_224_raa-7daf7ae6.pth
model = ViT_MLP_Base16(n_classes=2, pretrained=True, vit=False)
def _run():
train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True,
)
valid_sampler = torch.utils.data.distributed.DistributedSampler(
valid_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
drop_last=True,
num_workers=8,
)
valid_loader = torch.utils.data.DataLoader(
dataset=valid_dataset,
batch_size=BATCH_SIZE,
sampler=valid_sampler,
drop_last=True,
num_workers=8,
)
criterion = nn.CrossEntropyLoss()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = xm.xla_device()
model.to(device)
lr = LR * xm.xrt_world_size()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
start_time = datetime.now()
xm.master_print(f"Start Time: {start_time}")
logs = fit_tpu(
model=model,
epochs=N_EPOCHS,
device=device,
criterion=criterion,
optimizer=optimizer,
train_loader=train_loader,
valid_loader=valid_loader,
)
xm.master_print(f"Execution time: {datetime.now() - start_time}")
xm.master_print("Saving Model")
xm.save(
model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
)
# Start training processes
def _mp_fn(rank, flags):
torch.set_default_tensor_type("torch.FloatTensor")
a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# load model
mlp_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/checkpoint/mlp_pre.pth'
mlp_checkpoint = torch.load(mlp_checkpoint_path)
model.load_state_dict(mlp_checkpoint)
predict_scores(model, df_test, xm.xla_device())
###Output
_____no_output_____
###Markdown
ViT
###Code
class PatchEmbedding(nn.Module):
def __init__(self, in_channels: int = 3, patch_size: int = 16, emb_size: int = 768, img_size: int = 224):
self.patch_size = patch_size
super().__init__()
self.projection = nn.Sequential(
# using a conv layer instead of a linear one -> performance gains
nn.Conv2d(in_channels, emb_size, kernel_size=patch_size, stride=patch_size),
Rearrange('b e (h) (w) -> b (h w) e'),
)
self.cls_token = nn.Parameter(torch.randn(1,1, emb_size))
self.positions = nn.Parameter(torch.randn((img_size // patch_size) **2 + 1, emb_size))
def forward(self, x: Tensor) -> Tensor:
b, _, _, _ = x.shape
x = self.projection(x)
cls_tokens = repeat(self.cls_token, '() n e -> b n e', b=b)
# prepend the cls token to the input
x = torch.cat([cls_tokens, x], dim=1)
# add position embedding
x += self.positions
return x
class MultiHeadAttention(nn.Module):
def __init__(self, emb_size: int = 768, num_heads: int = 8, dropout: float = 0):
super().__init__()
self.emb_size = emb_size
self.num_heads = num_heads
# fuse the queries, keys and values in one matrix
self.qkv = nn.Linear(emb_size, emb_size * 3)
self.att_drop = nn.Dropout(dropout)
self.projection = nn.Linear(emb_size, emb_size)
def forward(self, x : Tensor, mask: Tensor = None) -> Tensor:
# split keys, queries and values in num_heads
qkv = rearrange(self.qkv(x), "b n (h d qkv) -> (qkv) b h n d", h=self.num_heads, qkv=3)
queries, keys, values = qkv[0], qkv[1], qkv[2]
# sum up over the last axis
energy = torch.einsum('bhqd, bhkd -> bhqk', queries, keys) # batch, num_heads, query_len, key_len
if mask is not None:
fill_value = torch.finfo(torch.float32).min
            energy = energy.masked_fill(~mask, fill_value)  # masked_fill returns a new tensor; keep it
scaling = self.emb_size ** (1/2)
        att = F.softmax(energy / scaling, dim=-1)  # scale the attention logits before the softmax
att = self.att_drop(att)
# sum up over the third axis
out = torch.einsum('bhal, bhlv -> bhav ', att, values)
out = rearrange(out, "b h n d -> b n (h d)")
out = self.projection(out)
return out
class ResidualAdd(nn.Module):
def __init__(self, fn):
super().__init__()
self.fn = fn
def forward(self, x, **kwargs):
res = x
x = self.fn(x, **kwargs)
x += res
return x
class FeedForwardBlock(nn.Sequential):
def __init__(self, emb_size: int, expansion: int = 4, drop_p: float = 0.):
super().__init__(
nn.Linear(emb_size, expansion * emb_size),
nn.GELU(),
nn.Dropout(drop_p),
nn.Linear(expansion * emb_size, emb_size),
)
class TransformerEncoderBlock(nn.Sequential):
def __init__(self,
emb_size: int = 768,
drop_p: float = 0.,
forward_expansion: int = 4,
forward_drop_p: float = 0.,
** kwargs):
super().__init__(
ResidualAdd(nn.Sequential(
nn.LayerNorm(emb_size),
MultiHeadAttention(emb_size, **kwargs),
nn.Dropout(drop_p)
)),
ResidualAdd(nn.Sequential(
nn.LayerNorm(emb_size),
FeedForwardBlock(
emb_size, expansion=forward_expansion, drop_p=forward_drop_p),
nn.Dropout(drop_p)
)
))
class TransformerEncoder(nn.Sequential):
def __init__(self, depth: int = 12, **kwargs):
super().__init__(*[TransformerEncoderBlock(**kwargs) for _ in range(depth)])
class ClassificationHead(nn.Sequential):
def __init__(self, emb_size: int = 768, n_classes: int = 4):
super().__init__(
Reduce('b n e -> b e', reduction='mean'),
nn.LayerNorm(emb_size),
nn.Linear(emb_size, n_classes))
class ViT(nn.Sequential):
def __init__(self,
in_channels: int = 3,
patch_size: int = 16,
emb_size: int = 768,
img_size: int = 224,
depth: int = 12,
n_classes: int = 2,
**kwargs):
super().__init__(
PatchEmbedding(in_channels, patch_size, emb_size, img_size),
TransformerEncoder(depth, emb_size=emb_size, **kwargs),
ClassificationHead(emb_size, n_classes)
)
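# Quick sanity check of the custom ViT assembled above (illustrative only): a dummy batch of one
# 224x224 RGB image should produce logits of shape (1, n_classes), i.e. torch.Size([1, 2]) here.
print(ViT(n_classes=2)(torch.randn(1, 3, 224, 224)).shape)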
class ViT_MLP_CUSTOM(nn.Module):
def __init__(self, n_classes, pretrained=False, vit=True):
super(ViT_MLP_CUSTOM, self).__init__()
if vit :
self.model = ViT()
else :
self.model = MlpMixer(n_classes, 12, 16, 768, 384, 3072)
if pretrained:
if vit :
self.model.load_state_dict(torch.load(VIT_PATH))
else :
self.model.load_state_dict(torch.load('./gmixer_24_224_raa-7daf7ae6.pth'))
#self.model.head = nn.Linear(self.model.head.in_features, n_classes)
def forward(self, x):
x = self.model(x)
return x
def train_one_epoch(self, train_loader, criterion, optimizer, device):
# keep track of training loss
epoch_loss = 0.0
epoch_accuracy = 0.0
###################
# train the model #
###################
self.model.train()
for i, (data, target) in enumerate(train_loader):
# move tensors to GPU if CUDA is available
if device.type == "cuda":
data, target = data.cuda(), target.cuda()
elif device.type == "xla":
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = self.forward(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# Calculate Accuracy
accuracy = (output.argmax(dim=1) == target).float().mean()
# update training loss and accuracy
epoch_loss += loss
epoch_accuracy += accuracy
# perform a single optimization step (parameter update)
if device.type == "xla":
xm.optimizer_step(optimizer)
if i % 20 == 0:
xm.master_print(f"\tBATCH {i+1}/{len(train_loader)} - LOSS: {loss}")
else:
optimizer.step()
return epoch_loss / len(train_loader), epoch_accuracy / len(train_loader)
def validate_one_epoch(self, valid_loader, criterion, device):
# keep track of validation loss
valid_loss = 0.0
valid_accuracy = 0.0
valid_f1 = 0.0
######################
# validate the model #
######################
self.model.eval()
for data, target in valid_loader:
# move tensors to GPU if CUDA is available
if device.type == "cuda":
data, target = data.cuda(), target.cuda()
elif device.type == "xla":
data = data.to(device, dtype=torch.float32)
target = target.to(device, dtype=torch.int64)
with torch.no_grad():
# forward pass: compute predicted outputs by passing inputs to the model
output = self.model(data)
# calculate the batch loss
loss = criterion(output, target)
# Calculate Accuracy
accuracy = (output.argmax(dim=1) == target).float().mean()
# update average validation loss and accuracy
valid_loss += loss
valid_accuracy += accuracy
valid_f1 += f1_score(output.argmax(dim=1).cpu().numpy(), target.cpu().numpy(), average='macro')
return valid_loss / len(valid_loader), valid_accuracy / len(valid_loader), valid_f1/ len(valid_loader)
def fit_tpu(
model, epochs, device, criterion, optimizer, train_loader, valid_loader=None
):
valid_loss_min = np.Inf # track change in validation loss
# keeping track of losses as it happen
train_losses = []
valid_losses = []
train_accs = []
valid_accs = []
valid_f1s = []
for epoch in range(1, epochs + 1):
gc.collect()
para_train_loader = pl.ParallelLoader(train_loader, [device])
xm.master_print(f"{'='*50}")
xm.master_print(f"EPOCH {epoch} - TRAINING...")
train_loss, train_acc = model.train_one_epoch(
para_train_loader.per_device_loader(device), criterion, optimizer, device
)
xm.master_print(
f"\n\t[TRAIN] EPOCH {epoch} - LOSS: {train_loss}, ACCURACY: {train_acc}\n"
)
train_losses.append(train_loss)
train_accs.append(train_acc)
gc.collect()
if valid_loader is not None:
gc.collect()
para_valid_loader = pl.ParallelLoader(valid_loader, [device])
xm.master_print(f"EPOCH {epoch} - VALIDATING...")
valid_loss, valid_acc, valid_f1 = model.validate_one_epoch(
para_valid_loader.per_device_loader(device), criterion, device
)
xm.master_print(f"\t[VALID] LOSS: {valid_loss}, ACCURACY: {valid_acc}, F1: {valid_f1}\n")
valid_losses.append(valid_loss)
valid_accs.append(valid_acc)
            valid_f1s.append(valid_f1)
gc.collect()
# save model if validation loss has decreased
if valid_loss <= valid_loss_min and epoch != 1:
xm.master_print(
"Validation loss decreased ({:.4f} --> {:.4f}). Saving model ...".format(
valid_loss_min, valid_loss
)
)
#xm.save(model.state_dict(), f'{DATA_DIR}/checkpoint/best_model.pth')
valid_loss_min = valid_loss
    return {
        "train_loss": train_losses,
        "valid_losses": valid_losses,
        "train_acc": train_accs,
        "valid_acc": valid_accs,
        "valid_f1": valid_f1s
    }
model = ViT_MLP_CUSTOM(n_classes=2, pretrained=False, vit=True)
def _run():
train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True,
)
valid_sampler = torch.utils.data.distributed.DistributedSampler(
valid_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
drop_last=True,
num_workers=8,
)
valid_loader = torch.utils.data.DataLoader(
dataset=valid_dataset,
batch_size=BATCH_SIZE,
sampler=valid_sampler,
drop_last=True,
num_workers=8,
)
criterion = nn.CrossEntropyLoss()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = xm.xla_device()
model.to(device)
lr = LR * xm.xrt_world_size()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
start_time = datetime.now()
xm.master_print(f"Start Time: {start_time}")
logs = fit_tpu(
model=model,
epochs=N_EPOCHS,
device=device,
criterion=criterion,
optimizer=optimizer,
train_loader=train_loader,
valid_loader=valid_loader,
)
xm.master_print(f"Execution time: {datetime.now() - start_time}")
xm.master_print("Saving Model")
xm.save(
model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
)
# Start training processes
def _mp_fn(rank, flags):
torch.set_default_tensor_type("torch.FloatTensor")
a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# load model
vit_custom_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/checkpoint/vit_custom.pth'
vit_custom_checkpoint = torch.load(vit_custom_checkpoint_path)
model.load_state_dict(vit_custom_checkpoint)
predict_scores(model, df_test, xm.xla_device())
###Output
_____no_output_____
###Markdown
MLP Mixer
###Code
class MlpBlock(nn.Module):
def __init__(self, hidden_dim, mlp_dim):
super(MlpBlock, self).__init__()
self.mlp = nn.Sequential(
nn.Linear(hidden_dim, mlp_dim),
nn.GELU(),
nn.Linear(mlp_dim, hidden_dim)
)
def forward(self, x):
return self.mlp(x)
class MixerBlock(nn.Module):
def __init__(self, num_tokens, hidden_dim, tokens_mlp_dim, channels_mlp_dim):
super(MixerBlock, self).__init__()
self.ln_token = nn.LayerNorm(hidden_dim)
self.token_mix = MlpBlock(num_tokens, tokens_mlp_dim)
self.ln_channel = nn.LayerNorm(hidden_dim)
self.channel_mix = MlpBlock(hidden_dim, channels_mlp_dim)
def forward(self, x):
out = self.ln_token(x).transpose(1, 2)
x = x + self.token_mix(out).transpose(1, 2)
out = self.ln_channel(x)
x = x + self.channel_mix(out)
return x
class MlpMixer(nn.Module):
def __init__(self, num_classes, num_blocks, patch_size, hidden_dim, tokens_mlp_dim, channels_mlp_dim, image_size=224):
super(MlpMixer, self).__init__()
num_tokens = (image_size // patch_size)**2
self.patch_emb = nn.Conv2d(3, hidden_dim, kernel_size=patch_size, stride=patch_size, bias=False)
self.mlp = nn.Sequential(*[MixerBlock(num_tokens, hidden_dim, tokens_mlp_dim, channels_mlp_dim) for _ in range(num_blocks)])
self.ln = nn.LayerNorm(hidden_dim)
self.fc = nn.Linear(hidden_dim, num_classes)
def forward(self, x):
x = self.patch_emb(x)
x = x.flatten(2).transpose(1, 2)
x = self.mlp(x)
x = self.ln(x)
x = x.mean(dim=1)
x = self.fc(x)
return x
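# Quick sanity check of the MlpMixer above (illustrative only): with patch size 16 on a 224x224 image
# there are 196 tokens, and the output logits should have shape (1, num_classes) = torch.Size([1, 2]).
print(MlpMixer(2, 12, 16, 768, 384, 3072)(torch.randn(1, 3, 224, 224)).shape)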
model = ViT_MLP_CUSTOM(n_classes=2, pretrained=False, vit=False)
def _run():
train_dataset = pneumonia_dataset(df_train, transforms=transforms_train)
valid_dataset = pneumonia_dataset(df_val, transforms=transforms_val)
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=True,
)
valid_sampler = torch.utils.data.distributed.DistributedSampler(
valid_dataset,
num_replicas=xm.xrt_world_size(),
rank=xm.get_ordinal(),
shuffle=False,
)
train_loader = torch.utils.data.DataLoader(
dataset=train_dataset,
batch_size=BATCH_SIZE,
sampler=train_sampler,
drop_last=True,
num_workers=8,
)
valid_loader = torch.utils.data.DataLoader(
dataset=valid_dataset,
batch_size=BATCH_SIZE,
sampler=valid_sampler,
drop_last=True,
num_workers=8,
)
criterion = nn.CrossEntropyLoss()
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = xm.xla_device()
model.to(device)
lr = LR * xm.xrt_world_size()
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
xm.master_print(f"INITIALIZING TRAINING ON {xm.xrt_world_size()} TPU CORES")
start_time = datetime.now()
xm.master_print(f"Start Time: {start_time}")
logs = fit_tpu(
model=model,
epochs=N_EPOCHS,
device=device,
criterion=criterion,
optimizer=optimizer,
train_loader=train_loader,
valid_loader=valid_loader,
)
xm.master_print(f"Execution time: {datetime.now() - start_time}")
xm.master_print("Saving Model")
xm.save(
model.state_dict(), f'{DATA_DIR}/checkpoint/model_5e_{datetime.now().strftime("%Y%m%d-%H%M")}.pth'
)
# Start training processes
def _mp_fn(rank, flags):
torch.set_default_tensor_type("torch.FloatTensor")
a = _run()
# _run()
FLAGS = {}
xmp.spawn(_mp_fn, args=(FLAGS,), nprocs=8, start_method="fork")
# load model
mlp_custom_checkpoint_path = './drive/My Drive/torch_project/medi/mlpmixer/checkpoint/mlp_custom.pth'
mlp_custom_checkpoint = torch.load(mlp_custom_checkpoint_path)
model.load_state_dict(mlp_custom_checkpoint)
predict_scores(model, df_test, xm.xla_device())
###Output
_____no_output_____ |
2_correlation_analysis/0_examine_data.ipynb | ###Markdown
Examine data
This notebook examines the expression data that will be used in the network analysis.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import pandas as pd
import plotnine as pn
import seaborn as sns
import matplotlib.pyplot as plt
import umap
import random
import numpy as np
from scripts import paths
# Load expression data
pao1_compendium_filename = paths.PAO1_COMPENDIUM
pa14_compendium_filename = paths.PA14_COMPENDIUM
pao1_compendium = pd.read_csv(pao1_compendium_filename, sep="\t", header=0, index_col=0)
pa14_compendium = pd.read_csv(pa14_compendium_filename, sep="\t", header=0, index_col=0)
###Output
_____no_output_____
###Markdown
Visualize distribution of expression data
###Code
# Random PAO1 genes
random_pao1_ids = random.sample(list(pao1_compendium.columns), 4)
sns.pairplot(pao1_compendium[random_pao1_ids])
plt.suptitle("Random set of genes (PAO1)", y=1.05)
# Try removing outlier samples
pao1_compendium_tmp = pao1_compendium[pao1_compendium["PA1337"] < 200]
# Co-operonic PAO1 genes
# pao1_co_operonic_ids = ["PA0001", "PA0002", "PA0003", "PA0004"]
# pao1_co_operonic_ids = ["PA0054","PA0055", "PA0056"]
pao1_co_operonic_ids = ["PA1335", "PA1336", "PA1337"]
sns.pairplot(pao1_compendium_tmp[pao1_co_operonic_ids])
plt.suptitle("Co-operonic set of genes (PAO1)", y=1.05)
# Houskeeping PAO1 gene that we would expect a consistently high expression across samples
# which doesn't have that peak at 0
sns.displot(pao1_compendium["PA1805"])
# Random PA14 gene
random_pa14_ids = random.sample(list(pa14_compendium.columns), 4)
sns.pairplot(pa14_compendium[random_pa14_ids])
plt.suptitle("Random set of genes (PA14)", y=1.05)
###Output
_____no_output_____ |
notebooks/Fig5AB.ipynb | ###Markdown
The following lines need to be used if the data from the downloaded dataset should be used. The location of the ``Data`` folder needs to be specified by the parameter ``DATA_FOLDER_PATH`` in the file ``input_params.json``. If you want to analyse your own dataset, you need to set the variable ``file_path`` to the folder where the simulation is located. Importantly, this folder should contain exactly one simulation.
###Code
model = 'LDDR' # options 'LDDR' or 'LDDR_titration'
indx = 0 # selects the growth rate; 0 corresponds to a doubling time of 2 h
file_path_input_params_json = '../input_params.json'
input_param_dict = mainClass.extract_variables_from_input_params_json(file_path_input_params_json)
root_path = input_param_dict["DATA_FOLDER_PATH"]
simulation_location = 'fig_5/time_traces/'+model
file_path = os.path.join(root_path, simulation_location)
print('file_path', file_path)
parameter_path = os.path.join(file_path, 'parameter_set.csv')
print('parameter_path', parameter_path)
###Output
file_path /home/berger/Documents/Arbeit/PhD/data/UltrasensitivityCombined/NatCom/fig_5/time_traces/LDDR
parameter_path /home/berger/Documents/Arbeit/PhD/data/UltrasensitivityCombined/NatCom/fig_5/time_traces/LDDR/parameter_set.csv
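###Markdown
A minimal sketch of what `input_params.json` is assumed to contain for the cell above; only the `DATA_FOLDER_PATH` key is read here, and the path is a placeholder to be replaced by the location of the downloaded ``Data`` folder:
###Code
# illustrative example only: write a minimal input_params.json with the single key used above
import json
example_params = {"DATA_FOLDER_PATH": "/path/to/Data"}  # placeholder path
with open("input_params_example.json", "w") as f:
    json.dump(example_params, f, indent=4)
###Output
_____no_output_____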
###Markdown
Make data frame from time traces
###Code
data_frame = makeDataframe.make_dataframe(file_path)
data_frame = data_frame.sort_values(by=['rate_growth'])
time_traces_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[indx], key='dataset_time_traces')
v_init_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[indx], key='dataset_init_events')
v_init = v_init_data_frame.iloc[-1]['v_init_per_ori']
v_init_per_ori = v_init_data_frame.iloc[-1]['v_init_per_ori']
t_init_list = v_init_data_frame['t_init'].to_numpy()
v_d_data_frame = pd.read_hdf(data_frame['path_dataset'].iloc[indx], key='dataset_div_events')
data_frame
time = np.array(time_traces_data_frame["time"])
volume = np.array(time_traces_data_frame["volume"])
n_ori = np.array(time_traces_data_frame["n_ori"])
active_fraction = np.array(time_traces_data_frame["active_fraction"])
free_conc = np.array(time_traces_data_frame["free_conc"])
print(time.size)
cycle_0 = 8
cycle_f = 11
t_0 = time[volume==v_d_data_frame['v_b'][cycle_0]]
indx_0 = np.where(time==t_0)[0][0]
t_f = time[volume==v_d_data_frame['v_b'][cycle_f]]
indx_f = np.where(time==t_f)[0][0]+10
print(indx_0, indx_f)
n_ori_cut = n_ori[indx_0:indx_f]
time_cut = time[indx_0:indx_f]
volume_cut = volume[indx_0:indx_f]
active_fraction_cut = active_fraction[indx_0:indx_f]
free_conc_cut = free_conc[indx_0:indx_f]
t_init_list_cut_1 = t_init_list[t_init_list>t_0]
t_init_list_cut = t_init_list_cut_1[t_init_list_cut_1<t_f]
t_b = t_init_list + data_frame.iloc[indx]['t_CD']
t_b_cut_1 = t_b[t_b<t_f]
t_b_cut = t_b_cut_1[t_b_cut_1>t_0]
print(t_init_list_cut, t_b_cut)
###Output
100000
20420 26431
[21.418 23.419 25.419] [22.418 24.419 26.419]
###Markdown
Color definitions
###Code
pinkish_red = (247 / 255, 109 / 255, 109 / 255)
green = (0 / 255, 133 / 255, 86 / 255)
dark_blue = (36 / 255, 49 / 255, 94 / 255)
light_blue = (168 / 255, 209 / 255, 231 / 255)
darker_light_blue = (112 / 255, 157 / 255, 182 / 255)
blue = (55 / 255, 71 / 255, 133 / 255)
yellow = (247 / 255, 233 / 255, 160 / 255)
###Output
_____no_output_____
###Markdown
Plot three figures
###Code
label_list = [r'$V(t)$', r'$[D]_{\rm T, f}(t)$', r'$f(t)$', r'$[D]_{\rm ATP, f}(t)$']
x_axes_list = [time_cut, time_cut, time_cut, time_cut]
y_axes_list = [volume_cut, free_conc_cut, active_fraction_cut, free_conc_cut * active_fraction_cut]
color_list = [green, dark_blue, darker_light_blue, pinkish_red]
fig, ax = plt.subplots(4, figsize=(3.2,4))
plt.xlabel(r'time [$\tau_{\rm d}$]')
y_min_list = [0,0,0,0]
y_max_list = [1, 1.2, 1.2, 1.2]
doubling_time = 1/data_frame.iloc[indx]['doubling_rate']
print(1/doubling_time)
print('number of titration sites per origin:', data_frame.iloc[indx]['n_c_max_0'])
for item in range(0, len(label_list)):
ax[item].set_ylabel(label_list[item])
ax[item].plot(x_axes_list[item], y_axes_list[item], color=color_list[item])
ax[item].set_ylim(ymin=0)
ax[item].tick_params(
axis='x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom=False, # ticks along the bottom edge are off
top=False, # ticks along the top edge are off
labelbottom=False) # labels along the bottom edge are off
ax[item].spines["top"].set_visible(False)
ax[item].spines["right"].set_visible(False)
ax[item].margins(0)
for t_div in t_b_cut:
ax[item].axvline(x=t_div,
ymin=y_min_list[item],
ymax=y_max_list[item],
c="black",
zorder=0,
linewidth=0.8,
clip_on=False)
for t_init in t_init_list_cut:
ax[item].axvline(x=t_init,
ymin=y_min_list[item],
ymax=y_max_list[item],
c="black",
zorder=0,
linewidth=0.8,
linestyle='--',
clip_on=False)
if indx==0:
ax[0].set_yticks([0, v_init])
ax[0].set_yticklabels(['0',r'$v^\ast$'])
ax[0].get_yticklabels()[1].set_color(green)
ax[0].axhline(y=v_init, color=green, linestyle='--')
if indx==1:
ax[0].set_yticks([0, v_init, 2*v_init])
ax[0].set_yticklabels(['0',r'$v^\ast$',r'$2 \, v^\ast$'])
ax[0].get_yticklabels()[1].set_color(green)
ax[0].get_yticklabels()[2].set_color(green)
ax[0].axhline(y=v_init, color=green, linestyle='--')
ax[0].axhline(y=2 * v_init, color=green, linestyle='--')
if indx==2:
ax[0].set_yticks([0, 2 * v_init, 4*v_init])
ax[0].set_yticklabels(['0',r'$2 \, v^\ast$',r'$4 \, v^\ast$'])
ax[0].get_yticklabels()[1].set_color(green)
ax[0].get_yticklabels()[2].set_color(green)
ax[0].axhline(y=2 * v_init, color=green, linestyle='--')
ax[0].axhline(y=4 * v_init, color=green, linestyle='--')
# ax[0].set_yticks([0, v_init_per_ori, 2*v_init_per_ori, 4*v_init_per_ori])
# # ax[0].set(ylim=(0, v_init+0.01))
# ax[0].set_yticklabels(['0',r'$v^\ast$',r'$2 \,v^\ast$',r'$4 \, v^\ast$'])
# ax[0].get_yticklabels()[1].set_color(green)
# ax[0].get_yticklabels()[2].set_color(green)
# ax[0].get_yticklabels()[3].set_color(green)
# ax[0].axhline(y=v_init, color=green, linestyle='--')
ax[1].axhline(y=data_frame.iloc[0]['michaelis_const_initiator'], color=color_list[1], linestyle='--')
ax[1].set_yticks([0, data_frame.iloc[0]['michaelis_const_initiator']])
ax[1].set_yticklabels([0, r'$K_{\rm D}$'])
ax[1].get_yticklabels()[1].set_color(color_list[1])
ax[1].set(ylim=(0,data_frame.iloc[0]['michaelis_const_initiator']*1.5))
# ax[2].axhline(y=data_frame.iloc[0]['frac_init'], color=pinkish_red, linestyle='--')
ax[2].set_yticks([0, 0.5, 1])
ax[2].set_yticklabels(['0', '0.5', '1'])
ax[3].set_yticks([0, data_frame.iloc[0]['critical_free_active_conc']])
ax[3].set_yticklabels(['0',r'$[D]_{\rm ATP, f}^\ast$'])
ax[3].get_yticklabels()[1].set_color(color_list[3])
if model == 'LDDR':
ax[3].axhline(y=data_frame.iloc[0]['init_conc'], color=color_list[3], linestyle='--')
else:
ax[3].axhline(y=data_frame.iloc[0]['critical_free_active_conc'], color=color_list[3], linestyle='--')
ax[3].tick_params(bottom=True, labelbottom=True)
ax[3].tick_params(axis='x', colors='black')
ax[3].set_xticks([time_cut[0],
time_cut[0]+ doubling_time,
time_cut[0]+ 2*doubling_time,
time_cut[0]+ 3*doubling_time
])
ax[3].set_xticklabels(['0', '1', '2', '3'])
plt.savefig(file_path + '/S11_'+model+'_'+str(indx)+'.pdf', format='pdf',bbox_inches='tight')
###Output
0.5
number of titration sites per origin: 0.0
|
notebooks/md3-preparing-and-cleansing-data.ipynb | ###Markdown
Moneyball Project: UEFA Euro 2020 Fantasy Football
Passion project to leverage data-driven decision making for team selection in [UEFA Euro 2020 Fantasy Football](https://gaming.uefa.com/en/uefaeuro2020fantasyfootball/overview)
Data Preparation and Cleansing
-----------------------------
Purpose
Initial exploration of the available datasets, aggregating and merging them into a dataframe for further exploration.
Author
[Christian Wibisono](https://github.com/christianwbsn)
1. Import Library
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import json
import re
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_row',50)
import difflib
from tqdm import tqdm
from nltk import everygrams
DATA_DIR = "../data"
###Output
_____no_output_____
###Markdown
2. Common Function
###Code
def camel_to_snake(name):
name = re.sub(" ", "", name)
name = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
return re.sub('([a-z0-9])([A-Z])', r'\1_\2', name).lower()
def extract_date(date):
return pd.Series([date.year, date.month, date.day])
def euro_fantasy_score(df):
# Not covered by dataset
# Common - Goal from outside the box 2 points
# Common - Winning a penalty 2 points
# Common - Conceding a penalty -1 points
# Common - Own Goal -2 points
# common
score = 1
if df["min"] >= 60:
score += 1
if df["assists"] > 0:
score += (df["assists"] * 3)
if df["penalty_kick_miss"] > 0:
score -= (df["penalty_kick_miss"] * 2)
if df["yellow_cards"] > 0:
score -= 1
if df["red_cards"] > 0:
score -= 3
# position specific
if df["position"] == "F":
score += (df["goals"] * 4)
if df["position"] == "M":
score += (df["goals"] * 5)
if df["min"] >= 60 and df["clean_sheet"] > 0:
score += 1
if df["position"] == "D":
score += (df["goals"] * 6)
if df["min"] >= 60 and df["clean_sheet"] > 0:
score += 4
score -= (df['goals_allowed'] // 2)
if df["position"] == "GK":
score += (df["goals"] * 6)
score += (df["penalty_kick_saved"] * 5)
if df["min"] >= 60 and df["clean_sheet"] > 0:
score += 4
score += (df["saves"] // 3)
score -= (df["goals_allowed"] // 2)
return score
###Output
_____no_output_____
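###Markdown
As a quick sanity check of the scoring function above, it can be applied to a single hypothetical per-match record; the column values below are made up for illustration only:
###Code
# hypothetical record: a midfielder playing 90 minutes with 1 goal, 1 assist and a clean sheet
example_row = {"min": 90, "assists": 1, "penalty_kick_miss": 0, "yellow_cards": 0, "red_cards": 0,
               "position": "M", "goals": 1, "clean_sheet": 1, "goals_allowed": 0,
               "penalty_kick_saved": 0, "saves": 0}
print(euro_fantasy_score(example_row))  # 1 (appearance) + 1 (60+ min) + 3 (assist) + 5 (goal) + 1 (clean sheet) = 11
###Output
_____no_output_____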
###Markdown
3. Dataset Exploration
3.1 Main Dataset
###Code
## Using this dataset as SSOT for player name and team name
main_df = pd.read_csv("{}/interim/md_1_df.csv".format(DATA_DIR))
main_df["date"] = pd.to_datetime(main_df["date"])
###Output
_____no_output_____
###Markdown
3.2 Euro 2020 Dataset
3.2.1 Players
3.2.1.1 Appending last matchday data
###Code
with open('{}/raw/euro-2020/players_1.json'.format(DATA_DIR))as f:
data = json.load(f)
players = data["data"]["value"]["playerList"]
old_players_df = pd.json_normalize(players)
old_players_df.rename(camel_to_snake, axis=1, inplace=True)
with open('{}/raw/euro-2020/players_2.json'.format(DATA_DIR))as f:
data = json.load(f)
players = data["data"]["value"]["playerList"]
players_df = pd.json_normalize(players)
players_df.rename(camel_to_snake, axis=1, inplace=True)
players_df = players_df[players_df["trained"]!='']
players_df = pd.merge(players_df, old_players_df[["p_f_name", "g_s", "assist", "y_c", "r_c", "p_m"]],
on="p_f_name", suffixes=("", "_last_md"))
players_df["g_s"] = players_df["g_s"] - players_df["g_s_last_md"]
players_df["assist"] = players_df["assist"] - players_df["assist_last_md"]
players_df["y_c"] = players_df["y_c"] - players_df["y_c_last_md"]
players_df["r_c"] = players_df["r_c"] - players_df["r_c_last_md"]
players_df["p_m"] = players_df["p_m"] - players_df["p_m_last_md"]
players_df = players_df.drop(["g_s_last_md", "assist_last_md", "y_c_last_md", "r_c_last_md", "p_m_last_md"], axis=1)
players_df["date"] = players_df["current_matches_list"].apply(lambda x: x[0]["matchDate"])
players_df["opponent_name"] = players_df["current_matches_list"].apply(lambda x: x[0]["vsTSCode"])
all_players_name = main_df["player"].unique()
def get_closest_match(name):
# return closest match for join operation
return ''.join(list(difflib.get_close_matches(name, all_players_name, n=1, cutoff=0.7)))
players_df["closest_match"] = players_df["p_f_name"].apply(get_closest_match)
players_df["player"] = players_df.apply(lambda x: x["closest_match"] if x["closest_match"] != "" else x["p_f_name"], axis=1)
players_df["date"] = pd.to_datetime(players_df["date"])
players_df[["year", "month", "day"]] = players_df["date"].apply(extract_date)
main_df.head()
players_df.head()
with open('{}/raw/euro-2020/fixtures.json'.format(DATA_DIR))as f:
data = json.load(f)
fixtures = data["data"]["value"][1]["match"]
fixtures_df = pd.json_normalize(fixtures)
fixtures_df["atName"] = fixtures_df["atName"].apply(lambda x: x.strip())
fixtures_df["htName"] = fixtures_df["htName"].apply(lambda x: x.strip())
def heuristic_minutes_played(df):
if (df["g_s"] == 0) and (df["assist"] == 0) and (df["y_c"] == 0) and (df["r_c"] == 0) and (df["last_gd_points"] == 2):
return 90
elif df["last_gd_points"] >= 2:
return 90
elif df["last_gd_points"] == 0:
return 0
else:
return 59
players_df["min"] = players_df.apply(heuristic_minutes_played, axis=1)
players_df.rename(columns={"t_name": "team_name", "g_s": "goals", "assist": "assists",
"y_c": "yellow_cards", "r_c" : "red_cards", "last_gd_points": "points",
"p_m": "penalty_kick_miss"}, inplace=True)
players_df["league_name"] = "European Championship 2020"
fixtures_df.head()
fixtures_df[fixtures_df["htName"] == "Belgium"]["htScore"]
def goals_allowed(df):
if df["team_name"] in fixtures_df["htName"].values:
return int(fixtures_df[fixtures_df["htName"] == df["team_name"]].reset_index()["atScore"])
else:
return int(fixtures_df[fixtures_df["atName"] == df["team_name"]].reset_index()["htScore"])
players_df["goals_allowed"] = players_df.apply(goals_allowed, axis=1)
players_df["clean_sheet"] = players_df["goals_allowed"].apply(lambda x: 1 if x == 0 else 0)
players_df["game_started"] = players_df["min"].apply(lambda x: 1 if x >= 60 else 0)
players_df.shape
# if players have multiple position choose the most common position
position = main_df.groupby("player").agg(position=('position', "first")).to_dict()["position"]
players_df["position"] = players_df["player"].apply(lambda x: position[x]
if x in position.keys() else "")
players_df.shape
players_df
def update_data(df):
main_df_columns = ["player", "date", "league_name", "game_started",
"team_name", "opponent_name", "position", "goals_allowed", "clean_sheet",
"year", "month", "day", "min", "goals", "assists",
"penalty_kick_miss","yellow_cards", "red_cards", "saves", "points"]
return df[main_df_columns]
new_train = update_data(players_df)
main_df = pd.concat([main_df, new_train])
main_df = main_df.fillna(0)
main_df.to_csv("{}/interim/md_2_df.csv".format(DATA_DIR), index=False)
###Output
_____no_output_____
###Markdown
3.2.1.2 Generating test data
###Code
with open('{}/raw/euro-2020/players_3.json'.format(DATA_DIR))as f:
data = json.load(f)
players = data["data"]["value"]["playerList"]
players_df = pd.json_normalize(players)
players_df.rename(camel_to_snake, axis=1, inplace=True)
players_df = players_df[players_df["trained"]!='']
players_df.head()
players_df["date"] = players_df["upcoming_matches_list"].apply(lambda x: x[0]["matchDate"])
players_df["opponent_name"] = players_df["upcoming_matches_list"].apply(lambda x: x[0]["vsTSCode"])
all_players_name = main_df["player"].unique()
def get_closest_match(name):
# return closest match for join operation
return ''.join(list(difflib.get_close_matches(name, all_players_name, n=1, cutoff=0.7)))
players_df["closest_match"] = players_df["p_f_name"].apply(get_closest_match)
players_df["player"] = players_df.apply(lambda x: x["closest_match"] if x["closest_match"] != "" else x["p_f_name"], axis=1)
players_df["date"] = pd.to_datetime(players_df["date"])
players_df[["year", "month", "day"]] = players_df["date"].apply(extract_date)
main_df.head()
players_df.head()
players_df.rename(columns={"t_name": "team_name"}, inplace=True)
players_df["league_name"] = "European Championship 2020"
players_df.shape
# if players have multiple position choose the most common position
position = main_df.groupby("player").agg(position=('position',
lambda x: x.value_counts().sort_index().sort_values(ascending=False).index[0])).to_dict()["position"]
players_df["position"] = players_df["player"].apply(lambda x: position[x]
if x in position.keys() else "")
players_df.shape
def generate_test_data(df):
main_df_columns = ["player", "date", "league_name",
"team_name", "opponent_name", "position",
"year", "month", "day"]
return df[main_df_columns]
test = generate_test_data(players_df)
main_df = pd.concat([main_df, test])
main_df = pd.merge(main_df, players_df[["player", "value", "skill"]], on=["player"], how="left")
def get_agg_before(df):
merged_df = df.copy()
merged_df = pd.merge(merged_df, df, on=["player", "team_name"])
merged_df = merged_df[merged_df['date_y'] < merged_df["date_x"]]
merged_df["is_scoring"] = merged_df["goals_y"].apply(lambda x: 1 if x > 0 else 0)
merged_df["is_assisting"] = merged_df["assists_y"].apply(lambda x: 1 if x > 0 else 0)
merged_df_1 = merged_df.groupby(["player", "team_name", "date_x"]).agg(
prev_mean_points=("points_y", "mean"),
prev_mean_goals=("goals_y", "median"),
prev_mean_assists=("assists_y", "mean"),
prev_max_points=("points_y", "max"),
prev_std_points=("points_y", "std"),
prev_std_goals=("goals_y", "std"),
prev_std_assists=("assists_y", "std"),
prev_median_min=("min_y", "median"),
prev_ratio_starter=("game_started_y", "mean"),
count_played=("date_y","nunique"),
goal_consistency=("is_scoring", "mean"),
assist_consistency=("is_assisting", "mean"),
clean_sheet_consistency=("clean_sheet_y", "mean")
)
merged_df_1 = merged_df_1.reset_index()
merged_df_1.rename(columns={"date_x": "date"}, inplace=True)
merged_df_2 = merged_df.groupby(["team_name", "date_x"]).agg(count_team_played=("date_y", "nunique"))
merged_df_2 = merged_df_2.reset_index()
merged_df_2.rename(columns={"date_x": "date"}, inplace=True)
merged_df_3 = merged_df[merged_df["opponent_name_x"] == merged_df["opponent_name_y"]]
merged_df_3 = merged_df_3.groupby(["player", "team_name", "date_x"]).agg(prev_max_goal_to_specific_opp=("goals_y", "max"),
prev_max_points_to_specific_opp=("points_y", "max"),
prev_mean_points_to_specific_opp=("points_y", "mean"))
merged_df_3 = merged_df_3.reset_index()
merged_df_3.rename(columns={"date_x": "date", "opponent_name_y": "opponent_name"}, inplace=True)
merged_df = pd.merge(merged_df_1, merged_df_2, on=["team_name", "date"], how="left")
merged_df = pd.merge(merged_df, merged_df_3, on=["player", "team_name", "date"], how="left")
merged_df["prev_ratio_played"] = merged_df["count_played"] / merged_df["count_team_played"]
return merged_df
agg = get_agg_before(main_df)
main_df.head()
main_df = main_df.sort_values(["player", "date"])
main_df['last_md_points'] = main_df.groupby("player")["points"].shift()
main_df['last_md_goals'] = main_df.groupby("player")["goals"].shift()
main_df['last_md_assists'] = main_df.groupby("player")["assists"].shift()
main_df = main_df.drop(["goals", "assists", "shots", "shots_on_goal", "crosses", "fouls_drawn",
"fouls_committed", "tackles_won", "interceptions", "yellow_cards", "red_cards",
"penalty_kick_miss", "clean_sheet", "goals_allowed", "accurate_passes",
"shots_assisted", "shootout_goals", "shootout_misses", "game_started", "saves", "wins",
"penalty_kick_saved", "shootout_saves"], axis=1)
main_df = pd.merge(main_df, agg, how="left", on=["player", "team_name", "date"])
main_df.columns
players_df.to_csv("{}/interim/fantasy_euro.csv".format(DATA_DIR), index=False)
main_df
###Output
_____no_output_____
###Markdown
3.3 National Team FIFA Rank Dataset
###Code
main_df["date"].describe()
fifa_rank = pd.read_csv("{}/raw/historical-match-and-rank/fifa_ranking-2021-05-27.csv".format(DATA_DIR))
CUTOFF_DATE = "2018-01-01"
fifa_rank = fifa_rank[fifa_rank["rank_date"] > CUTOFF_DATE]
fifa_rank = fifa_rank[["country_full", "rank", "total_points", "rank_date"]]
fifa_rank["rank_date"] = pd.to_datetime(fifa_rank["rank_date"])
fifa_rank = fifa_rank.sort_values(by=["country_full", "rank_date"])
# get fifa rank closest to the match date
df_with_rank = pd.merge(main_df[["team_name", "date"]], fifa_rank, how="left", left_on="team_name", right_on="country_full")
df_with_rank["time_diff"] = df_with_rank.apply(lambda x: (x['date']-x['rank_date']).total_seconds(), axis=1)
df_with_rank = df_with_rank[df_with_rank["time_diff"] > 0] # filter out rank after match
df_with_rank = df_with_rank.sort_values(by=["team_name", "time_diff"], ascending=False)
df_with_rank = df_with_rank.groupby(["team_name", "date"]).agg(prev_team_highest_rank=("rank", "min"),
team_rank=("rank", "last"),
team_total_points=("total_points", "last")).reset_index()
main_df = pd.merge(main_df, df_with_rank, how="left", on=["team_name", "date"])
# get fifa rank closest to the match date
df_with_rank = pd.merge(main_df[["opponent_name", "date"]], fifa_rank, how="left", left_on="opponent_name", right_on="country_full")
df_with_rank["time_diff"] = df_with_rank.apply(lambda x: (x['date']-x['rank_date']).total_seconds(), axis=1)
df_with_rank = df_with_rank[df_with_rank["time_diff"] > 0] # filter out rank after match
df_with_rank = df_with_rank.sort_values(by=["opponent_name", "time_diff"], ascending=False)
df_with_rank = df_with_rank.groupby(["opponent_name", "date"]).agg(prev_opponent_highest_rank=("rank", "min"),
opponent_rank=("rank", "last"),
opponent_total_points=("total_points", "last")).reset_index()
main_df = pd.merge(main_df, df_with_rank, how="left", on=["opponent_name", "date"])
main_df.head()
main_df.to_csv("{}/interim/main.csv".format(DATA_DIR), index=False)
historical_matches = pd.read_csv("{}/raw/historical-match-and-rank/international-footbal-match.csv".format(DATA_DIR))
historical_matches["date"] = pd.to_datetime(historical_matches["date"])
historical_matches = historical_matches[historical_matches["date"] > "2010-01-01"]
historical_matches["match"] = historical_matches["home_team"] + ',' + historical_matches['away_team']
historical_matches["match"] = historical_matches["match"].apply(lambda x: ' '.join(sorted(x.split(","))))
def get_match_result(df):
if df["home_score"] > df["away_score"]:
return df["home_team"]
elif df["away_score"] > df["home_score"]:
return df["away_team"]
else:
return "Draw"
historical_matches["result"] = historical_matches.apply(get_match_result, axis=1)
historical_matches["margin"] = historical_matches.apply(lambda x: abs(x["home_score"] - x["away_score"]), axis=1)
historical_matches.head()
def get_all_historical_matches(df, team, opp, date):
name_tuple = ' '.join(sorted([team, opp]))
hist = df[(df['match'] == name_tuple) & (df["date"] < date)]
hth = hist["result"].value_counts()
team_win, opp_win, draw = 0, 0, 0
if "Draw" in hth.keys():
draw = hth["Draw"]
if team in hth.keys():
team_win = hth[team]
if opp in hth.keys():
opp_win = hth[opp]
max_margin = hist["margin"].max()
team_score = hist[hist['home_team'] == team]["home_score"].sum() + hist[hist['away_team'] == team]["away_score"].sum()
opp_score = hist[hist['home_team'] == opp]["home_score"].sum() + hist[hist['away_team'] == opp]["away_score"].sum()
return pd.Series([team_win, opp_win, draw, team_score, opp_score, max_margin])
main_df[["hth_team_win", "hth_opp_win", "hth_draw", "hth_team_score", "hth_opp_score", "htt_max_margin"]] = main_df.apply(lambda x: get_all_historical_matches(historical_matches, x["team_name"], x["opponent_name"], x["date"]), axis=1)
###Output
_____no_output_____
###Markdown
3.4 Transfermarkt Dataset 3.4.1 National Team Level
###Code
euro = pd.read_excel("{}/raw/transfermarkt/transfermarkt-market-value.xlsx".format(DATA_DIR), sheet_name=0)
nations_league = pd.read_excel("{}/raw/transfermarkt/transfermarkt-market-value.xlsx".format(DATA_DIR), sheet_name=1)
euro_qual = pd.read_excel("{}/raw/transfermarkt/transfermarkt-market-value.xlsx".format(DATA_DIR), sheet_name=2)
wc_euro_qual = pd.read_excel("{}/raw/transfermarkt/transfermarkt-market-value.xlsx".format(DATA_DIR), sheet_name=3)
nations_league["league_name"] = "UEFA Nations League"
euro_qual["league_name"] = "European Championship Qualifiers"
wc_euro_qual["league_name"] = "European World Cup Qualifiers"
euro["league_name"] = "European Championship 2020"
euro = euro.drop(["EURO participations"], axis=1)
euro.rename(columns={"Average Age": "Age"}, inplace=True)
def preprocess_market_value(text):
# strip the euro sign, then parse the number and its unit suffix ("bn", "m", "Th.")
text = re.sub("€", "", text)
match = re.search(r"(\d+(?:\.\d+)?)", text)
val = float(match.group())
unit = text[match.end():]
if unit == "bn":
val *= 1e9
elif unit == "m":
val *= 1e6
elif unit == "Th.":
val *= 1e3
return val
mv_df = pd.concat([nations_league, euro_qual, wc_euro_qual, euro])
mv_df["market_value"] = mv_df["Market Value"].apply(preprocess_market_value)
mv_df["mean_market_value"] = mv_df["Average Market Value"].apply(preprocess_market_value)
mv_df = mv_df.drop_duplicates(subset=["Club", "league_name"], keep="first")
mv_df = mv_df[["Club", "league_name", "Age", "market_value", "mean_market_value"]]
mv_df.rename(columns={"Club" : "team_name", "Age": "mean_squad_age"}, inplace=True)
main_df = pd.merge(main_df, mv_df, how="left", on=["team_name", "league_name"])
main_df.rename(columns={"mean_squad_age" : "team_mean_squad_age",
"mean_market_value": "team_mean_market_value",
"market_value" : "team_market_value"
}, inplace=True)
mv_df.rename(columns={"team_name" : "opponent_name"}, inplace=True)
main_df = pd.merge(main_df, mv_df, how="left", on=["opponent_name", "league_name"])
main_df.rename(columns={"mean_squad_age" : "opponent_mean_squad_age",
"mean_market_value": "opponent_mean_market_value",
"market_value" : "opponent_market_value"
}, inplace=True)
###Output
_____no_output_____
###Markdown
3.6 FIFA Dataset
###Code
fifa_21 = pd.read_csv("{}/raw/fifa/fifa-players_21.csv".format(DATA_DIR))
fifa_20 = pd.read_csv("{}/raw/fifa/players_20.csv".format(DATA_DIR))
fifa_19 = pd.read_csv("{}/raw/fifa/players_19.csv".format(DATA_DIR))
fifa_18 = pd.read_csv("{}/raw/fifa/players_18.csv".format(DATA_DIR))
fifa_21["nationality"] = fifa_21["nationality"].apply(lambda x: x.strip())
fifa_20["nationality"] = fifa_20["nationality"].apply(lambda x: x.strip())
fifa_19["nationality"] = fifa_19["nationality"].apply(lambda x: x.strip())
fifa_18["nationality"] = fifa_18["nationality"].apply(lambda x: x.strip())
fifa_21 = fifa_21[fifa_21['nationality'].isin(main_df['team_name'].unique())]
fifa_20 = fifa_20[fifa_20['nationality'].isin(main_df['team_name'].unique())]
fifa_19 = fifa_19[fifa_19['nationality'].isin(main_df['team_name'].unique())]
fifa_18 = fifa_18[fifa_18['nationality'].isin(main_df['team_name'].unique())]
fifa_21['len_name'] = fifa_21["long_name"].apply(lambda x: len(x.split(" ")))
fifa_21['len_short_name'] = fifa_21["short_name"].apply(lambda x: len(x.split(" ")))
fifa_21['min_char_in_name'] = fifa_21['long_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_21['min_char_in_short_name'] = fifa_21['short_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_20['len_name'] = fifa_20["long_name"].apply(lambda x: len(x.split(" ")))
fifa_20['len_short_name'] = fifa_20["short_name"].apply(lambda x: len(x.split(" ")))
fifa_20['min_char_in_name'] = fifa_20['long_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_20['min_char_in_short_name'] = fifa_20['short_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_19['len_name'] = fifa_19["long_name"].apply(lambda x: len(x.split(" ")))
fifa_19['len_short_name'] = fifa_19["short_name"].apply(lambda x: len(x.split(" ")))
fifa_19['min_char_in_name'] = fifa_19['long_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_19['min_char_in_short_name'] = fifa_19['short_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_18['len_name'] = fifa_18["long_name"].apply(lambda x: len(x.split(" ")))
fifa_18['len_short_name'] = fifa_18["short_name"].apply(lambda x: len(x.split(" ")))
fifa_18['min_char_in_name'] = fifa_18['long_name'].apply(lambda x: min(len(y) for y in x.split()))
fifa_18['min_char_in_short_name'] = fifa_18['short_name'].apply(lambda x: min(len(y) for y in x.split()))
def join_tuple_string(strings_tuple):
return ' '.join(strings_tuple)
def create_unigram_bigram_trigram_quadgram(text, x):
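# Build every n-gram of length 2..x from the name tokens; for names with more than
# 2 tokens, also add the "first token + last token" combination as an extra candidate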
token_list = text.split(" ")
tuple_gram = list(everygrams(token_list, 2, x))
result = map(join_tuple_string, tuple_gram)
if x > 2:
return list(result) + [' '.join(token_list[::len(token_list)-1])]
return list(result)
def calculate_closest_token(df):
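# Return the n-gram of the long name that difflib judges closest to the short name
# (empty string if nothing is close enough); used to align FIFA names with the fantasy data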
everygram = create_unigram_bigram_trigram_quadgram(df["long_name"], df['len_name'])
closest = difflib.get_close_matches(df["short_name"], everygram, n=1)
return ''.join(closest)
fifa_21['closest_match'] = fifa_21.apply(calculate_closest_token, axis=1)
fifa_20['closest_match'] = fifa_20.apply(calculate_closest_token, axis=1)
fifa_19['closest_match'] = fifa_19.apply(calculate_closest_token, axis=1)
fifa_18['closest_match'] = fifa_18.apply(calculate_closest_token, axis=1)
name_mapping = {
'Aleksandar Dragović': 'Aleksandar Dragovic',
'Aleš Matějů': 'Ales Mateju',
'Alex Král': 'Alex Kral',
'Anatoliy Trubin': 'Anatolii Trubin',
'András Schäfer': 'Andras Schafer',
'Dean Cornelius': 'Andreas Cornelius',
'Andrej Kramarić': 'Andrej Kramaric',
'Ante Rebić': 'Ante Rebic',
'Bartosz Bereszyński': 'Bartosz Bereszynski',
'Bećir Omeragić': 'Becir Omeragic',
'Bogdan Mykhaylychenko': 'Bogdan Mykhaylichenko',
'Borna Barišić': 'Borna Barisic',
'B. Embolo': 'Breel Embolo',
'Bruno Petković': 'Bruno Petkovic',
'Burak Yılmaz': 'Burak Yilmaz',
'Che Adams': 'Che Adams',
'D.Rice': "Declan Rice",
'Christian Günter': 'Chris Gunter',
'Liam Craig Gordon': 'Craig Gordon',
'Azpilicueta': 'César Azpilicueta',
'Anga Dedryck Boyata': 'Dedryck Boyata',
'Davor Lovren': 'Dejan Lovren',
'Lemi Zakaria': 'Denis Zakaria',
'Diego Javier Llorente': 'Diego Llorente',
'Dmitriy Barinov': 'Dimitri Barinov',
'Domagoj Bradarić': 'Domagoj Bradaric',
'Dominik Livaković': 'Dominik Livakovic',
'van de Beek': 'Donny van de Beek',
'Dorukhan Toköz': 'Dorukhan Tokoz',
'Duje Ćaleta-Car': 'Duje Caleta-Car',
'Dušan Kuciak': 'Dusan Kuciak',
'Miklós Sigér': 'Dávid Miklós Sigér',
'Eray Ervin Cömert': 'Eray Cömert',
'Frederik Rønnow': 'Frederik Rönnow',
'Georgiy Bushchan': 'Georgi Bushchan',
'Georgiy Dzhikiya': 'Georgi Dzhikiya',
'Glen Adjei Kamara': 'Glen Kamara',
'Greg Taylor': 'Greg Taylor',
'Hakan Çalhanoğlu': 'Hakan Calhanoglu',
'Hakan Calhanoglu':'Hakan Calhanoglu',
'Haris Seferović': 'Haris Seferovic',
'İlkay Gündoğan': 'Ilkay Gündogan',
'İrfan Can Kahveci': 'Irfan Kahveci',
'Ivan Perišić': 'Ivan Perisic',
'Jakub Holúbek': 'Jakub Holubek',
'Jamal Musiala': 'Jamal Musiala',
'Alexander Lawrence': 'James Alexander Lawrence',
'Jan Bořil': 'Jan Boril',
'Jens Jønsson': 'Jens Jonsson',
'Jere Juhani Uronen': 'Jere Uronen',
'Jiří Pavlenka': 'Jirí Pavlenka',
'Joakim Mæhle': 'Joakim Maehle',
'Joseff Morrell': 'Joe Morrell',
'Jordi Alba Ramos': 'Jordi Alba',
'Josip Juranović': 'Josip Juranovic',
'Palhinha': 'João Palhinha',
'Jérémy Doku': 'Jéremy Doku',
'Kamil Jóźwiak': 'Kamil Jozwiak',
'Karol Świderski': 'Karol Swiderski',
'Stefan Ristovski': 'Stefan Spirovski',
'Kurt Happy Zouma': 'Kurt Zouma',
'Lasse Schøne': 'Lasse Schöne',
'Lovre Kalinić': 'Lovre Kalinic',
'Lucas Hernández Pi': 'Lucas Hernández',
'Luka Modrić': 'Luka Modric',
'Lukáš Haraslín': 'Lukas Haraslin',
'Lukáš Masopust': 'Lukas Masopust',
'Łukasz Fabiański': 'Lukasz Fabianski',
'Lukáš Hrádecký': 'Lukás Hrádecky',
'Manuel Viana': 'Manuel Akanji',
'Marcelo Brozović': 'Marcelo Brozovic',
'Marcus Danielsson': 'Marcus Danielson',
'Marek Hamšík': 'Marek Hamsik',
'Marko Arnautović': 'Marko Arnautovic',
'Martin Dúbravka': 'Martin Dubravka',
'Matěj Vydra': 'Matej Vydra',
'Mateo Kovačić': 'Mateo Kovacic',
'Matúš Bero': 'Matús Bero',
'Michael Krmenčík': 'Michal Krmencik',
'Michael Gurski': 'Michal Duris',
'Michał Helik': 'Michal Helik',
'Carl Mikael Lustig': 'Mikael Lustig',
'Oyarzabal': 'Mikel Oyarzabal',
'Milan Škriniar': 'Milan Skriniar',
'Mile Svilar': 'Mile Skoric',
'Mislav Oršić': 'Mislav Orsic',
'M. Kean': 'Moise Kean',
'Mykola Matvienko': 'Mykola Matvyenko',
'Nemanja Nikolić': 'Nemanja Nikolics',
'N. Hämäläinen': 'Niko Hämäläinen',
'Nikola Vlašić': 'Nikola Vlasic',
'Nélson Cabral Semedo': 'Nélson Semedo',
'Okay Yokuşlu': 'Okay Yokuslu',
'Aleksandr Zhirov': 'Oleksandr Zubkov',
'Ondřej Čelůstka': 'Ondrej Celustka',
'Ondřej Kúdela': 'Ondrej Kudela',
'Orkun Kökçü': 'Orkun Kökcü',
'O. Kabak': 'Ozan Kabak',
'Patrik Hrošovský': 'Patrik Hrosovsky',
'Pavel Kadeřábek': 'Pavel Kaderábek',
'Petr Ševčík': 'Petr Sevcik',
'Philip Foden': 'Phil Foden',
'Leo Bengtsson': 'Pierre Bengtsson',
'Piotr Zieliński': 'Piotr Zielinski',
'Przemysław Frankowski': 'Przemyslaw Frankowski',
'Przemysław Płacheta': 'Przemyslaw Placheta',
'Raphaël Varane': 'Raphael Varane',
'Renato Júnior Luz Sanches': 'Renato Sanches',
'Róbert Boženík': 'Robert Bozenik',
'Ruslan Malinovskyi': 'Ruslan Malinovskiy',
'Ryan Jiro Gravenberch': 'Ryan Gravenberch',
'Saša Kalajdžić': 'Sasa Kalajdzic',
'Sergiy Kryvtsov': 'Serhii Kryvtsov',
'Šime Vrsaljko': 'Sime Vrsaljko',
'Tamás Cseri': 'Tamas Cseri',
'Taylan Antalyalı': 'Taylan Antalyali',
'Tomáš Pekhart': 'Tomas Pekhart',
'Tomáš Souček': 'Tomas Soucek',
'Tomáš Suslov': 'Tomas Suslov',
'Tomasz Kędziora': 'Tomasz Kedziora',
'Thomas Holmes': 'Tomás Holes',
'Tomáš Vaclík': 'Tomás Vaclik',
'Uğurcan Çakır': 'Ugurcan Çakir',
'Umut Meraş': 'Umut Meras',
'Cengiz Umut Meraş': 'Umut Meras',
'Vitaliy Mykolenko': 'Vitalii Mykolenko',
'Vladimír Coufal': 'Vladimir Coufal',
'Vladimír Darida': 'Vladimir Darida',
'William Silva de Carvalho': 'William Carvalho',
'Yuriy Zhirkov': 'Yuri Zhirkov',
'Yusuf Yazıcı': 'Yusuf Yazici',
'Çağlar Söyüncü': 'Çaglar Söyüncü',
'C. Eriksen': "Christian Eriksen",
'Alexander Walke': 'Alexander Isak',
'Aleksandr Sobolev': 'Alexander Sobolev',
'Antonín Barák': 'Antonin Barak',
'Benjamin Cabango': 'Ben Cabango',
'Bogdan Mykhaylichenko': 'Bogdan Mykhaylichenko',
'Borna Barisic': 'Borna Barisic',
'Mikael Lustig': 'Carl Mikael Lustig',
'Che Adams': 'Che Adams',
'Chris Gunter': 'Chris Gunter',
'Christian Gentner': 'Christian Günter',
'Daniel Avramovski': 'Daniel Avramovski',
'Declan Rice': 'Declan Rice',
'Dejan Kulusevski': 'Dejan Kulusevski',
'Diogo José': 'Diogo Jota',
'Domagoj Vida': 'Domagoj Vida',
'Dominik Livakovic': 'Dominik Livakovic',
'Dylan Levitt': 'Dylan Levitt',
'Dávid Sigér': 'Dávid Sigér',
'Eduard Sobol': 'Eduard Sobol',
'Eljif Elmas': 'Eljif Elmas',
'Eric García Martret': 'Eric García',
'Ethan Ampadu': 'Ethan Ampadu',
'Ferhan Hasani': 'Ferhan Hasani',
'Filip Helander': 'Filip Holender',
'Greg Taylor': 'Greg Taylor',
'Halil Dervişoğlu': 'Halil Dervisoglu',
'Irfan Kahveci': 'Irfan Can Kahveci',
'Ivan Trickovski': 'Ivan Trickovski',
'Jakub Świerczok': 'Jakub Swierczok',
'Jamal Musiala': 'Jamal Musiala',
'James Lawrence': 'Jamie Lawrence',
'Jens-Lys Cajuste': 'Jens Cajuste',
'Josip Juranovic': 'Josip Juranovic',
'Jude Bellingham': 'Jude Bellingham',
'Kacper Trelowski': 'Kacper Kozlowski',
'Kamil Piątkowski': 'Kamil Piatkowski',
'Leo Väisänen': 'Leo Väisänen',
'Łukasz Skorupski': 'Lukasz Skorupski',
'Lukáš Provod': 'Lukáš Provod',
'Lyndon Dykes': 'Lyndon Dykes',
'Magomed Ozdoev': 'Magomed Ozdoev',
'Mário Fernandes': 'Mario Fernandes',
'Mehmet Zeki Çelik': 'Mehmet Zeki Çelik',
'Merih Demiral': 'Merih Demiral',
'Mert Müldür': 'Mert Müldür',
'Paweł Dawidowicz': 'Pawel Dawidowicz',
'Petr Sevcik': 'Petr Sevcik',
'Pyry Soiri': 'Pyry Soiri',
'Rabbi Matondo': 'Rabbi Matondo',
'Rıdvan Yılmaz': 'Ridvan Yilmaz',
'Robert Bozenik': 'Robert Bozenik',
'Robert Sanchez': 'Robert Sánchez',
'Serhiy Sydorchuk': 'Serhiy Sydorchuk',
'Tamas Cseri': 'Tamas Cseri',
'Tomáš Kalas': 'Tomas Kalas',
'Tomás Holes': 'Tomás Holes',
'Tomáš Koubek': 'Tomáš Souček',
'Ugurcan Çakir': 'Ugurcan Cakir',
'Vitalii Mykolenko': 'Vitaliy Mykolenko',
'Vladimir Coufal': 'Vladimir Coufal',
'Vladimir Darida': 'Vladimír Darida',
'Vlatko Stojanovski': 'Vlatko Stojanovski',
'Wojciech Szczęsny': 'Wojciech Szczesny',
'Simon Thorup Kjær': "Simon Kjaer",
'Simon Kjær': "Simon Kjaer",
"Simon Kjær": "Simon Kjaer",
'Ádám Lang': 'Ádám Lang',
'Luís Gayà': 'José Gayá',
'João Félix Sequeira': 'João Félix',
'De Gea':'David de Gea',
'Ferrán Torres': 'Ferran Torres',
'Mehmet Çelik': 'Mehmet Zeki Çelik',
'Can Kahveci': 'Irfan Can Kahveci',
'Mert Günok': 'Fehmi Mert Günok',
'J. Stryger Larsen': 'Jens Stryger Larsen',
'Jens Larsen': 'Jens Stryger Larsen',
'José Guerreiro': 'Raphael Guerreiro',
'D. Sow': 'Djibril Sow',
'Ben Yedder': 'Wissam Ben Yedder',
'Lukás Hrádecky': 'Lukas Hradecky',
'Mikael Lustig' : 'Carl Mikael Lustig',
'Thiago':'Thiago Alcántara',
'Vladimír Darida' : "Vladimír Darida",
'Tomáš Hubočan': 'Tomas Hubocan',
'Anga Boyata': 'Dedryck Boyata',
'Ilkay Gündogan': 'İlkay Gündoğan',
"Morata":'Álvaro Morata',
"I. Perišić" :"Ivan Perišić",
"Andrew Robertson": "Andy Robertson",
"Peter McLaughlin": "Jon McLaughlin",
"Iván Rodríguez": "Ricardo Rodríguez",
"Landry Mvogo":"Yvon Mvogo",
"Alexander Granlund": "Albin Granlund"
}
def heuristic_match(df):
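# Decide which name variant to use as the matching key: keep the short name when it is a
# single token, when no close n-gram was found, or when all of its tokens are reasonably
# long; otherwise prefer the closest n-gram, falling back to the full long name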
if df["len_short_name"] == 1:
return df["short_name"]
if len(df["closest_match"].split()) == 0:
return df["short_name"]
elif df["min_char_in_short_name"] >= 4:
return df["short_name"]
elif df["len_name"] > 3:
return df["closest_match"]
elif df["min_char_in_name"] >= 3:
return df["closest_match"]
else:
return df["long_name"]
fifa_21["player"] = fifa_21.apply(heuristic_match, axis=1)
fifa_20["player"] = fifa_20.apply(heuristic_match, axis=1)
fifa_19["player"] = fifa_19.apply(heuristic_match, axis=1)
fifa_18["player"] = fifa_18.apply(heuristic_match, axis=1)
def map_name(name):
global name_mapping
if name in name_mapping.keys():
return name_mapping[name]
else:
return name
fifa_21['player'] = fifa_21['player'].apply(map_name)
fifa_20['player'] = fifa_20['player'].apply(map_name)
fifa_19['player'] = fifa_19['player'].apply(map_name)
fifa_18['player'] = fifa_18['player'].apply(map_name)
col = ["player", "nationality", "work_rate", "age", "height_cm", "weight_kg", "league_rank", "overall", "potential", "wage_eur",
"international_reputation", "pace", "shooting", "passing", "dribbling", "defending", 'nation_position', 'nation_jersey_number',
"physic", "attacking_crossing", "attacking_finishing", "attacking_heading_accuracy", "attacking_short_passing",
"attacking_volleys", "skill_dribbling", "skill_curve", "skill_fk_accuracy", "skill_long_passing", "skill_ball_control",
"movement_acceleration", "movement_sprint_speed", "movement_agility", "movement_reactions", "movement_balance", "power_shot_power",
"power_jumping", "power_stamina","power_strength", "power_long_shots", "mentality_aggression", "mentality_interceptions",
"mentality_positioning", "mentality_vision", "mentality_penalties", "mentality_composure", "defending_standing_tackle",
"defending_sliding_tackle", "goalkeeping_diving", "goalkeeping_handling", "goalkeeping_kicking", "goalkeeping_positioning", "goalkeeping_reflexes"]
fifa_21 = fifa_21[col]
fifa_20 = fifa_20[col]
fifa_19 = fifa_19[col]
fifa_18 = fifa_18[col]
fifa_21.rename(columns={"nationality": "team_name"}, inplace=True)
fifa_20.rename(columns={"nationality": "team_name"}, inplace=True)
fifa_19.rename(columns={"nationality": "team_name"}, inplace=True)
fifa_18.rename(columns={"nationality": "team_name"}, inplace=True)
fifa_21 = fifa_21.drop_duplicates(subset=["player", "team_name"], keep="first")
fifa_20 = fifa_20.drop_duplicates(subset=["player", "team_name"], keep="first")
fifa_19 = fifa_19.drop_duplicates(subset=["player", "team_name"], keep="first")
fifa_18 = fifa_18.drop_duplicates(subset=["player", "team_name"], keep="first")
fifa_21["year"] = 2021
fifa_20["year"] = 2020
fifa_19["year"] = 2019
fifa_18["year"] = 2018
fifa = pd.concat([fifa_21, fifa_20, fifa_19, fifa_18])
main_df = pd.merge(main_df, fifa, how="left", on=["player", "team_name", "year"])
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
main_df.head()
main_df["diff_team_points"] = main_df['team_total_points'] - main_df['opponent_total_points']
main_df["diff_team_ranking"]= main_df['team_rank'] - main_df['opponent_rank']
main_df["diff_team_market_value"] = main_df['team_market_value'] - main_df['opponent_market_value']
main_df["diff_team_mean_market_value"] = main_df['team_mean_market_value'] - main_df['opponent_mean_market_value']
main_df["diff_team_mean_squad_age"] = main_df['team_mean_squad_age'] - main_df['opponent_mean_squad_age']
main_df["diff_team_ranking"]= main_df['team_rank'] - main_df['opponent_rank']
main_df["is_senior"] = main_df["age"] > main_df["team_mean_squad_age"]
main_df["is_imbalanced"]= main_df['diff_team_ranking'].apply(lambda x: abs(x) > 10)
main_df["gap_to_potential"] = main_df["potential"] - main_df["overall"]
main_df["roi"] = main_df["points"] / main_df["value"]
main_df["more_likely_to_win"] = (main_df["hth_team_win"] - main_df["hth_opp_win"]) >= 2
main_df["work_rate"] = main_df['work_rate'].fillna("")
main_df[["attacking_work_rate", "defending_work_rate"]] = main_df["work_rate"].apply(lambda x: pd.Series(x.split("/")))
main_df = main_df.drop(["work_rate"], axis=1)
main_df.drop_duplicates(subset=["player", "date"], inplace=True)
main_df.to_csv("{}/processed/dataset_md3.csv".format(DATA_DIR), index=False)
###Output
_____no_output_____ |
Day_009_HW.ipynb | ###Markdown
Examining and handling outliers. Why do outliers occur? Common causes: * unknown values filled in arbitrarily (by convention), e.g. ages recorded as 0 or 999 * possible recording errors / typos / systematic errors, e.g. a book showing 1000 copies sold in a single order [Assignment goal] - following the hints and guidance below, examine possible outliers in several different ways [Key points] - screen the raw data for columns that may contain outliers (In[3], Out[3]) - plot the empirical cumulative distribution function (ECDF) of the target values and compare it with the CDF of a normal distribution to confirm whether outliers are present (In[6], Out[6], In[7], Out[7])
###Code
# Import the required packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
app_train = pd.read_csv('application_train.csv')
app_train.head()
app_train.dtypes.value_counts()
###Output
_____no_output_____
###Markdown
Refer to the column descriptions in HomeCredit_columns_description.csv, then pick and list three columns that you think may contain outliers and explain the possible reasons.
###Code
# First select the numeric columns
numeric_columns = list(app_train.columns[list((app_train.dtypes == 'int64') | (app_train.dtypes == 'float64'))])
print("{} columns with number type".format(len(numeric_columns)))
# Then drop the columns that only take 2 values (usually 0/1)
numeric_columns = list(app_train[numeric_columns].columns[list(app_train[numeric_columns].apply(lambda x:len(x.unique())!=2 ))])
print("{} numeric columns without bool type".format(len(numeric_columns)))
# Inspect the value range of these columns
for col in numeric_columns:
sns.boxplot(y = col, data = app_train)
plt.show()
# Judging from the plots above, at least these three columns look a bit suspicious
# AMT_INCOME_TOTAL
# REGION_POPULATION_RELATIVE
# OBS_60_CNT_SOCIAL_CIRCLE
###Output
_____no_output_____
###Markdown
Hints: Empirical Cumulative Density Plot, [ECDF](https://zh.wikipedia.org/wiki/%E7%BB%8F%E9%AA%8C%E5%88%86%E5%B8%83%E5%87%BD%E6%95%B0), [ECDF with Python](https://stackoverflow.com/questions/14006520/ecdf-in-python-without-step-function)
###Code
# The maximum is far away from the mean and the median
#print(app_train['AMT_INCOME_TOTAL'].describe())
# Plot the empirical cumulative distribution (ECDF)
"""
YOUR CODE HERE
"""
value_counts_df = app_train['AMT_INCOME_TOTAL'].value_counts()
#print(value_counts_df)
sorted_df = value_counts_df.sort_index()
#print(sorted_df)
cdf = sorted_df.cumsum()
#print(cdf)
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min(), cdf.index.max() * 1.05]) # limit the displayed range
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
# Change the scale so that the ECDF can be inspected properly
plt.bar(np.log(list(cdf.index)), cdf/cdf.max())
plt.xlabel('Value (log-scale)')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
print(app_train['AMT_INCOME_TOTAL'].value_counts().sort_index(ascending = False))
###Output
_____no_output_____
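###Markdown
(Added note, not part of the original assignment.) The ECDF can also be computed directly by sorting the raw values instead of going through value_counts; a minimal sketch, reusing the income column analysed above:
###Code
def ecdf(series):
    """Return sorted values and the cumulative fraction of observations <= each value."""
    x = np.sort(series.dropna().values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y
x, y = ecdf(app_train['AMT_INCOME_TOTAL'])
plt.plot(x, y, drawstyle='steps-post')
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.show()
###Output
_____no_output_____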
###Markdown
Supplement: the ECDF of a normal distribution
###Code
# The maximum falls outside the bulk of the distribution
print(app_train['REGION_POPULATION_RELATIVE'].describe())
# Plot the empirical cumulative distribution (ECDF)
"""
Your Code Here
"""
cdf = app_train['REGION_POPULATION_RELATIVE'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['REGION_POPULATION_RELATIVE'].hist()
plt.show()
print(app_train['REGION_POPULATION_RELATIVE'].value_counts().sort_index(ascending = False))
# For this column, data falling outside the main distribution is not really anomalous; it only means the company has fewer branches in the somewhat busier areas,
# so region population relative is dense at the small values but sparse at the large ones
# The maximum falls outside the bulk of the distribution
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].describe())
# Plot the empirical cumulative distribution (ECDF)
cdf = app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min() * 0.95, cdf.index.max() * 1.05])
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['OBS_60_CNT_SOCIAL_CIRCLE'].hist()
plt.show()
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index(ascending = False))
###Output
count 306490.000000
mean 1.405292
std 2.379803
min 0.000000
25% 0.000000
50% 0.000000
75% 2.000000
max 344.000000
Name: OBS_60_CNT_SOCIAL_CIRCLE, dtype: float64
###Markdown
Note: when the histogram looks like the one above (only a single bar, with the x-axis stretching so far that a large blank area is left on the right), it means there are values on the right-hand side but they are very rare. In that case, consider using value_counts to locate those values.
###Code
# Temporarily remove some extreme values and draw the histogram again
# Select data points with OBS_60_CNT_SOCIAL_CIRCLE below 20 for plotting
"""
Your Code Here
"""
loc_a = list(app_train['OBS_60_CNT_SOCIAL_CIRCLE'] < 20)
loc_b = ['OBS_60_CNT_SOCIAL_CIRCLE']
app_train.loc[loc_a, loc_b].hist()
plt.show()
###Output
_____no_output_____
###Markdown
Examining and handling outliers. Why do outliers occur? Common causes: * unknown values filled in arbitrarily (by convention), e.g. ages recorded as 0 or 999 * possible recording errors / typos / systematic errors, e.g. a book showing 1000 copies sold in a single order [Assignment goal] - following the hints and guidance below, examine possible outliers in several different ways [Key points] - screen the raw data for columns that may contain outliers (In[3], Out[3]) - plot the empirical cumulative distribution function (ECDF) of the target values and compare it with the CDF of a normal distribution to confirm whether outliers are present (In[6], Out[6], In[7], Out[7])
###Code
# Import the required packages
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Set data_path
# dir_data = './data'
# f_app = os.path.join(dir_data, 'application_train.csv')
# print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv('application_train.csv')
app_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 307511 entries, 0 to 307510
Columns: 122 entries, SK_ID_CURR to AMT_REQ_CREDIT_BUREAU_YEAR
dtypes: float64(65), int64(41), object(16)
memory usage: 286.2+ MB
###Markdown
Refer to the column descriptions in HomeCredit_columns_description.csv, then pick and list three columns that you think may contain outliers and explain the possible reasons.
###Code
# First select the numeric columns
"""
YOUR CODE HERE, fill correct data types (for example str, float, int, ...)
"""
dtype_select = [np.dtype('float64'), np.dtype('int64')]
numeric_columns = list(app_train.columns[list(app_train.dtypes.isin(dtype_select))])
# Then drop the columns that only take 2 values (usually 0/1)
numeric_columns = list(app_train[numeric_columns].columns[list(app_train[numeric_columns].apply(lambda x:len(x.unique())!=2 ))])
# print("Numbers of remain columns" % len(numeric_columns) )
# Inspect the value range of these columns
for col in numeric_columns:
"""
Your CODE HERE, make the box plot
"""
fig,axes = plt.subplots()
app_train[col].plot(kind='box',ax=axes)
plt.show()
# Judging from the plots above, at least these three columns look a bit suspicious
# AMT_INCOME_TOTAL
# REGION_POPULATION_RELATIVE
# OBS_60_CNT_SOCIAL_CIRCLE
# Other columns that also look suspicious
# OBS_30_CNT_SOCIAL_CIRCLE
# DEF_30_CNT_SOCIAL_CIRCLE
# DEF_60_CNT_SOCIAL_CIRCLE
# AMT_REQ_CREDIT_BUREAU_QRT
###Output
_____no_output_____
###Markdown
Hints: Empirical Cumulative Density Plot, [ECDF](https://zh.wikipedia.org/wiki/%E7%BB%8F%E9%AA%8C%E5%88%86%E5%B8%83%E5%87%BD%E6%95%B0), [ECDF with Python](https://stackoverflow.com/questions/14006520/ecdf-in-python-without-step-function)
###Code
# The maximum is far away from the mean and the median
print(app_train['AMT_INCOME_TOTAL'].describe())
# Plot the empirical cumulative distribution (ECDF)
"""
YOUR CODE HERE
"""
cdf = app_train['AMT_INCOME_TOTAL'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min(), cdf.index.max() * 1.05]) # limit the displayed range
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
# Change the scale so that the ECDF can be inspected properly
plt.plot(np.log(list(cdf.index)), cdf/cdf.max())
plt.xlabel('Value (log-scale)')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
###Output
count 3.075110e+05
mean 1.687979e+05
std 2.371231e+05
min 2.565000e+04
25% 1.125000e+05
50% 1.471500e+05
75% 2.025000e+05
max 1.170000e+08
Name: AMT_INCOME_TOTAL, dtype: float64
###Markdown
Supplement: the ECDF of a normal distribution
###Code
# The maximum falls outside the bulk of the distribution
print(app_train['REGION_POPULATION_RELATIVE'].describe())
# Plot the empirical cumulative distribution (ECDF)
"""
Your Code Here
"""
cdf = app_train['REGION_POPULATION_RELATIVE'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['REGION_POPULATION_RELATIVE'].hist()
plt.show()
app_train['REGION_POPULATION_RELATIVE'].value_counts()
# For this column, data falling outside the main distribution is not really anomalous; it only means the company has fewer branches in the somewhat busier areas,
# so region population relative is dense at the small values but sparse at the large ones
# The maximum falls outside the bulk of the distribution
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].describe())
# Plot the empirical cumulative distribution (ECDF)
# Your Code Here
cdf = app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index().cumsum()
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min() * 0.95, cdf.index.max() * 1.05])
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['OBS_60_CNT_SOCIAL_CIRCLE'].hist()
plt.show()
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index(ascending = False))
###Output
count 306490.000000
mean 1.405292
std 2.379803
min 0.000000
25% 0.000000
50% 0.000000
75% 2.000000
max 344.000000
Name: OBS_60_CNT_SOCIAL_CIRCLE, dtype: float64
###Markdown
Note: when the histogram looks like the one above (only a single bar, with the x-axis stretching so far that a large blank area is left on the right), it means there are values on the right-hand side but they are very rare. In that case, consider using value_counts to locate those values.
###Code
# Temporarily remove some extreme values and draw the histogram again
# Select data points with OBS_60_CNT_SOCIAL_CIRCLE below 20 for plotting
"""
Your Code Here
"""
loc_a = app_train['OBS_60_CNT_SOCIAL_CIRCLE'] <= 20
loc_b = 'OBS_60_CNT_SOCIAL_CIRCLE'
app_train.loc[loc_a, loc_b].hist()
plt.show()
cdf = app_train['OBS_60_CNT_SOCIAL_CIRCLE']
cdf.max()
###Output
_____no_output_____ |
workflow/RGI10.ipynb | ###Markdown
RGI10 (Asia North). F. Maussion & S. Galos, June-December 2021
###Code
import pandas as pd
import geopandas as gpd
import subprocess
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import seaborn as sns
import numpy as np
from utils import mkdir, submission_summary, needs_size_filter, size_filter, plot_map, plot_date_hist, open_zip_shapefile
import os
###Output
_____no_output_____
###Markdown
Files and storage paths
###Code
# Region of interest
reg = 10
# go down from rgi7_scripts/workflow
data_dir = '../../rgi7_data/'
# Level 2 GLIMS files
l2_dir = os.path.join(data_dir, 'l2_sel_reg_tars')
# Output directories
output_dir = mkdir(os.path.join(data_dir, 'l3_rgi7a'))
output_dir_tar = mkdir(os.path.join(data_dir, 'l3_rgi7a_tar'))
# RGI v6 file for comparison later
rgi6_reg_file = os.path.join(data_dir, 'l0_RGIv6', '10_rgi60_NorthAsia.zip')
# Specific to this region: boxes where data has to be selected differently
support_dir = os.path.join(data_dir, 'l0_support_data')
# OK path to file
box_file = os.path.join(support_dir, 'rgi10_boxes.zip')
###Output
_____no_output_____
###Markdown
Load the input data
###Code
# Read L2 files
shp = gpd.read_file('tar://' + l2_dir + f'/RGI{reg:02d}.tar.gz/RGI{reg:02d}/RGI{reg:02d}.shp')
###Output
_____no_output_____
###Markdown
List of submissions
###Code
sdf, df_cat = submission_summary(shp)
sdf
###Output
_____no_output_____
###Markdown
- 636 is RGI6
- 698 is GAMDAMv2 - we use it
- 726 is a mapping of a few remaining nominal glaciers on three De Long Islands
- 743 is an update of the Barr inventory for Kamchatka
###Code
# # Optional: write out selection in intermediate shape files for manual GIS review
# tmp_output_dir = mkdir(os.path.join(data_dir, 'l0_tmp_data', f'rgi{reg:02d}_inventories'))
# tmp_output_dir_tar = mkdir(os.path.join(data_dir, 'l0_tmp_data'))
# for subid in shp.subm_id.unique():
# s_loc = shp.loc[shp.subm_id == subid]
# s_loc.to_file(tmp_output_dir + f'/subm_{int(subid):03d}.shp')
# print('Taring...')
# print(subprocess.run(['tar', '-zcvf', f'{tmp_output_dir_tar}/rgi{reg:02d}_inventories.tar.gz', '-C',
# os.path.join(data_dir, 'l0_tmp_data'), f'rgi{reg:02d}_inventories']))
###Output
_____no_output_____
###Markdown
Outline selection
###Code
glims_rgi = shp.loc[shp.subm_id.isin([636])].copy()
glims_rgi['is_rgi6'] = True
all_others = shp.loc[shp.subm_id.isin([698, 726, 743])].copy()
all_others['is_rgi6'] = False
# Preselected areas to remove
box = open_zip_shapefile(support_dir + '/rgi10_boxes.zip')
# Remove the new regions from rgi
rp = glims_rgi.representative_point()
rp = rp.to_frame('geometry')
rp['orig_index'] = glims_rgi.index
difference = gpd.overlay(rp, box, how='difference')
glims_rgi = glims_rgi.loc[difference['orig_index']].copy()
# Size filter?
needs_size_filter(glims_rgi), needs_size_filter(all_others)
print(len(all_others))
all_others = size_filter(all_others)
print(len(all_others))
rgi7 = pd.concat([glims_rgi, all_others])
###Output
_____no_output_____
###Markdown
Some sanity checks
###Code
sdf, df_class = submission_summary(rgi7)
df_class
# Check the orphaned rock outcrops
orphan_f = os.path.join(data_dir, 'l1_orphan_interiors', f'RGI{reg:02d}', f'RGI{reg:02d}.shp')
if os.path.exists(orphan_f):
orphan_f = gpd.read_file(orphan_f)
check = np.isin(rgi7.subm_id.unique(), orphan_f.subm_id.unique())
if np.any(check):
print(f'Orphan rock outcrops detected in subm_id {rgi7.subm_id.unique()[check]}')
orphan_f['area'] = orphan_f.to_crs({'proj':'cea'}).area
###Output
_____no_output_____
###Markdown
Plots
###Code
plot_map(rgi7, reg, figsize=(22, 10), linewidth=3, loc='upper center')
plot_map(rgi7, reg, figsize=(22, 10), linewidth=3, loc='upper center', is_rgi6=True)
plot_date_hist(rgi7, reg)
###Output
_____no_output_____
###Markdown
Text for github
###Code
fgh = sdf.T
fgh
print(fgh.to_markdown(headers=np.append(['subm_id'], fgh.columns)))
###Output
| subm_id | 636 | 698 | 726 | 743 |
|:--------------|:-----------------------------------------------------------------------|:-------|:-----------|:------------------------------|
| N | 1646 | 2984 | 12 | 2471 |
| A | 394.0 | 1245.2 | 73.0 | 926.7 |
| analysts | Cogley, Earl, Gardner, Raup | Sakai | Kochtitzky | Barr, Khromova, Paul, Rastner |
| submitters | Cogley | Sakai | Kochtitzky | Paul |
| release_date | 2015 | 2018 | 2021 | 2021 |
| geog_area | Randolph Glacier Inventory; Umbrella RC for merging the RGI into GLIMS | Asia | Canada | Various (GlobGlacier) |
| src_date_mode | 2013 | 2002 | 1999 | 2000 |
| src_date_min | 1999 | 1996 | 1999 | 2000 |
| src_date_max | 2013 | 2008 | 2000 | 2016 |
###Markdown
Write out and tar
###Code
dd = mkdir(f'{output_dir}/RGI{reg:02d}/', reset=True)
print('Writing...')
rgi7.to_file(dd + f'RGI{reg:02d}.shp')
print('Taring...')
print(subprocess.run(['tar', '-zcvf', f'{output_dir_tar}/RGI{reg:02d}.tar.gz', '-C', output_dir, f'RGI{reg:02d}']))
###Output
Writing...
Taring...
CompletedProcess(args=['tar', '-zcvf', '../../rgi7_data/l3_rgi7a_tar/RGI10.tar.gz', '-C', '../../rgi7_data/l3_rgi7a', 'RGI10'], returncode=0)
###Markdown
Consistency check with RGI6
###Code
rgi6 = open_zip_shapefile(rgi6_reg_file)
len(rgi7), len(rgi6)
###Output
_____no_output_____
###Markdown
Test the areas:
###Code
rgi6['area'] = rgi6.to_crs({'proj':'cea'}).area
print('Area RGI7a (km2)', rgi7['area'].sum() * 1e-6)
print('Area RGI6 (km2)', rgi6['area'].sum() * 1e-6)
print('diff areas RGI6 - RGI7 computed by us (km2)', (rgi6['area'].sum() - rgi7['area'].sum()) * 1e-6)
# Remove the ids
rp = rgi6.representative_point()
rp = rp.to_frame('geometry')
rp['orig_index'] = rgi6.index
difference = gpd.overlay(rp, box, how='difference')
rgi6_old = rgi6.loc[difference['orig_index']].copy()
difference = gpd.overlay(rp, box, how='intersection')
rgi6_new = rgi6.loc[difference['orig_index']].copy()
assert len(rgi6_new) + len(rgi6_old) == len(rgi6)
print(f'N1 = {len(rgi6_old)} , N2 = {len(glims_rgi)}')
print('Area RGI7 (km2)', glims_rgi['area'].sum() * 1e-6)
print('Area RGI6 (km2)', rgi6_old['area'].sum() * 1e-6)
print('diff', (rgi6_old['area'].sum() - glims_rgi['area'].sum()) * 1e-6)
###Output
N1 = 1646 , N2 = 1646
Area RGI7 (km2) 394.0329721890614
Area RGI6 (km2) 394.0330784789445
diff 0.00010628988313674926
###Markdown
RGI-07: Region 10 (Asia North). F. Maussion & S. Galos, June 2021
###Code
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
import subprocess
import os
from utils import mkdir
###Output
_____no_output_____
###Markdown
Files and storage paths
###Code
# Region of interest
reg = 10
# go down from rgi7_scripts/workflow
data_dir = '../../rgi7_data/'
# Level 2 GLIMS files
l2_dir = os.path.join(data_dir, 'l2_sel_reg_tars')
# Output directories
output_dir = mkdir(os.path.join(data_dir, 'l3_rgi7a'))
output_dir_tar = mkdir(os.path.join(data_dir, 'l3_rgi7a_tar'))
# RGI v6 file for comparison later
rgi6_reg_file = os.path.join(data_dir, 'l0_RGIv6', '10_rgi60_NorthAsia.zip')
# Specific to this region: boxes where data has to be selected differently
support_dir = os.path.join(data_dir, 'l0_support_data')
# Option 1: selection by S. Galos (exchange almost all glaciers in Kamchatka, with a few exceptions where RGI seems to be better,...
#... plus keep most of the small RGI6 glaciers not covered by Barr)
#box_type = 'RGI07_R10_Barr_sel'
# Option 2: replace all Kamchatka glaciers with the Barr data
box_type = 'RGI07_R10_Barr_all'
# OK path to file
box_file = os.path.join(support_dir, f'{box_type}.tar.gz')
###Output
_____no_output_____
###Markdown
Load the input data
###Code
# Read L2 files
shp = gpd.read_file('tar://' + l2_dir + f'/RGI{reg:02d}.tar.gz/RGI{reg:02d}/RGI{reg:02d}.shp')
###Output
_____no_output_____
###Markdown
Apply selection criteria to create the RGI7 data subset Step 1: extract RGI6 from GLIMS data and do a check
###Code
#...extract RGI06 from GLIMS based on 'geog_area'
RGI_ss = shp.loc[shp['geog_area']=='Randolph Glacier Inventory; Umbrella RC for merging the RGI into GLIMS']
###Output
_____no_output_____
###Markdown
load reference data (here RGI6) to enable comparison
###Code
# Just to know the name of the file to open from zip
import zipfile
with zipfile.ZipFile(rgi6_reg_file, "r") as z:
for f in z.filelist:
if '.shp' in f.filename:
fname = f.filename
# load reference data
ref_odf = gpd.read_file('zip://' + rgi6_reg_file + '/' + fname)
###Output
_____no_output_____
###Markdown
Number of elements (differences do not necessarily depict major problems)
###Code
print('Number of glaciers in new RGI subset:', len(RGI_ss))
print('Number of glaciers in reference data:', len(ref_odf))
print('Difference:', len(RGI_ss)-len(ref_odf))
###Output
_____no_output_____
###Markdown
Check for duplicate glacier IDs
###Code
print ('number of glaciers without unique id in RGI06:', len(ref_odf)-len(ref_odf['GLIMSId'].unique()))
print ('number of glaciers without unique id in RGI06 from GLIMS data base:', len(RGI_ss)-len(RGI_ss['glac_id'].unique()))
###Output
_____no_output_____
###Markdown
Check for 'nominal glaciers' in the RGI6 original data and delete them from the new RGI subset from GLIMS if they are in there. See https://github.com/GLIMS-RGI/glims_issue_tracker/issues/6 for context.
###Code
# how many nominals in RGI06 (identifiable via 'Status' attribute in RGI 06)
nom = ref_odf.loc[ref_odf.Status == 2].copy()
len(nom)
# drop nominal glaciers from new RGI subset
RGI_ss = (RGI_ss.loc[~RGI_ss['glac_id'].isin(nom['GLIMSId'])]).copy()
###Output
_____no_output_____
###Markdown
Total area
###Code
# add an area field to RGI_ss and reference data
RGI_ss['area'] = RGI_ss.to_crs({'proj':'cea'}).area
ref_odf['area'] = ref_odf.to_crs({'proj':'cea'}).area
nom['area'] = nom.to_crs({'proj':'cea'}).area
# print and compare area values
Area_RGI = RGI_ss['area'].sum() * 1e-6
print('Area RGI [km²]:', Area_RGI)
Area_ref = ref_odf['area'].sum() * 1e-6
print('Area Ref [km²]:', Area_ref)
Area_nom = nom['area'].sum() * 1e-6
print('Area Nom [km²]:', Area_nom)
d = (Area_RGI + Area_nom - Area_ref) * 1e-6
print('Area difference [km²]:',d)
###Output
_____no_output_____
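###Markdown
Added sketch (not part of the original notebook): one way to follow up on the count difference discussed below is a simple ID anti-join between the two datasets, using the `glac_id` / `GLIMSId` attributes already used above.
###Code
# Outlines whose GLIMS IDs appear in only one of the two datasets (sketch)
ids_glims = set(RGI_ss['glac_id'])
ids_rgi6 = set(ref_odf['GLIMSId'])
print('In GLIMS subset but not in RGI6:', len(ids_glims - ids_rgi6))
print('In RGI6 but not in GLIMS subset:', len(ids_rgi6 - ids_glims))
###Output
_____no_output_____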
###Markdown
Result of the check (RGI from the GLIMS global database vs. the RGI06 original): the number of individual glaciers differs by 9, but the total areas of the two files differ by only 1527 m² for the whole region! The difference in the number of individual glaciers cannot be clearly explained, but the fact that the total areas are "equal" and that an overlay test shows no cases of lost glaciers leads to the assumption that it is a merging issue and hence of minor relevance, as 'all' glacierized areas are covered. See https://github.com/GLIMS-RGI/glims_issue_tracker/issues/5 for context. TODO: find these glaciers as has been done for RGI01 and 13, 14, 15. If RGI07 should be equal to RGI06, stop here; else start the refinement and introduce the Barr data for Kamchatka.
###Code
# extract data by Barr from GLIMS data which is subm_id 716
Barr = shp.loc[shp['subm_id']== 716]
## load a shapefile containing polygons which define areas uncovered by newly created RGI07 file
# OPTION A
# load a shapefile indicating the areas where glacier outlines of RGI06 shall be replaced by data by Barr (decided by steph)
# OPTION B
# load a shapefile indicating the areas where glacier outlines of RGI06 shall be replaced by data by Barr (replace whole region)
# See above to select the one you want
RA = gpd.read_file('tar://' + box_file + f'/{box_type}/{box_type}.shp')
# do an overlay of Barr data (subm_id 716) and the shapefile above to drop all glaciers outside
Barr_ov = gpd.overlay(Barr, RA , how='intersection')
# do an overlay of RGI06 (subm_id 636) and the shapefile above to drop all glaciers inside
RGI_ss_ov = gpd.overlay(RGI_ss, RA , how='difference')
# combine the two selections and thereby create RGI07-reg10
RGI07_reg10 = RGI_ss_ov.append(Barr_ov, sort=True)
# add a column with the geometry area to enable comparison with RGI6
RGI07_reg10['area'] = RGI07_reg10.to_crs({'proj':'cea'}).area
# print and compare area values
Area_RGI07_reg10 = RGI07_reg10['area'].sum() * 1e-6
print('Area RGI07 [km²]:', Area_RGI07_reg10)
Area_ref = ref_odf['area'].sum() * 1e-6
print('Area RGI06 [km²]:', Area_ref)
d = (Area_RGI07_reg10 - Area_ref)
print('Area difference [km²]:',d)
dn = d + Area_nom
print('Area difference considering dropped nominals [km²]:',dn)
###Output
_____no_output_____
###Markdown
Write out and tar
###Code
dd = mkdir(f'{output_dir}/RGI{reg:02d}/', reset=True)
print('Writing...')
RGI07_reg10.to_file(dd + f'RGI{reg:02d}.shp')
print('Taring...')
print(subprocess.run(['tar', '-zcvf', f'{output_dir_tar}/RGI{reg:02d}.tar.gz', '-C', output_dir, f'RGI{reg:02d}']))
###Output
_____no_output_____ |
EHR_Claims/GBT/Comp_D_No_GBT_EHR_Claims.ipynb | ###Markdown
General Population
###Code
best_clf = xgBoost(co_train_gpop, out_train_comp_gpop)
print("Train gpop", file = open('comp_no_gbt_ehr_claims.out', 'a'))
cross_val(co_train_gpop, out_train_comp_gpop)
print()
print("Test gpop", file = open('comp_no_gbt_ehr_claims.out', 'a'))
scores(co_validation_gpop, out_validation_comp_gpop)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
###Markdown
High Continuity
###Code
best_clf = xgBoost(co_train_high, out_train_comp_high)
print("Train high", file = open('comp_no_gbt_ehr_claims.out', 'a'))
cross_val(co_train_high, out_train_comp_high)
print()
print("Test high", file = open('comp_no_gbt_ehr_claims.out', 'a'))
scores(co_validation_high, out_validation_comp_high)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
###Markdown
Low Continuity
###Code
best_clf = xgBoost(co_train_low, out_train_comp_low)
print("Train low", file = open('comp_no_gbt_ehr_claims.out', 'a'))
cross_val(co_train_low, out_train_comp_low)
print()
print("Test low", file = open('comp_no_gbt_ehr_claims.out', 'a'))
scores(co_validation_low, out_validation_comp_low)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
|
site/en/r1/tutorials/non-ml/pdes.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setup. A few imports you'll need.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
#Import libraries for simulation
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
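###Markdown
(Added note.) As mentioned above, the same code also works outside a notebook with a regular session; a minimal sketch of that pattern, shown only as a comment so the interactive session created above stays in charge:
###Code
# Regular-session sketch for a standalone .py file (not executed here):
# with tf.Session() as sess:
#     sess.run(tf.global_variables_initializer())
#     sess.run(step, {eps: 0.03, damping: 0.04})  # using the ops defined later in this notebook
###Output
_____no_output_____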
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDE. Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
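(Added note, not from the original text.) The update rules in the next cell are an explicit discretization of a damped wave equation: the surface height $u$ evolves as $\frac{\partial^2 u}{\partial t^2} = \nabla^2 u - d\,\frac{\partial u}{\partial t}$, stepped with $u \leftarrow u + \epsilon\, u_t$ and $u_t \leftarrow u_t + \epsilon\,(\nabla^2 u - d\, u_t)$, where $d$ is the damping and $\epsilon$ the time resolution.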
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulation. This is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setup. A few imports you'll need.
###Code
#Import libraries for simulation
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDE. Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulation. This is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setup. A few imports you'll need.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
#Import libraries for simulation
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDE. Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulation. This is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setup. A few imports you'll need.
###Code
#Import libraries for simulation
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDE. Our pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
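###Markdown
The two update rules above are one explicit Euler time step of a damped wave equation: the surface height U advances with the velocity Ut, and the velocity changes according to the Laplacian of U minus a damping term. The cell below is a NumPy-only sketch of the same stepping scheme on a tiny 1-D grid (an illustrative addition with an assumed grid size; it only mimics the TensorFlow version above).
###Code
import numpy as np

def step_1d(u, ut, eps=0.03, damping=0.04):
    """One explicit step of u_tt = laplace(u) - damping * u_t on a 1-D grid."""
    lap = np.zeros_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]   # 3-point discrete Laplacian
    u_next = u + eps * ut
    ut_next = ut + eps * (lap - damping * ut)
    return u_next, ut_next

u_demo = np.zeros(11, dtype=np.float32)
ut_demo = np.zeros(11, dtype=np.float32)
u_demo[5] = 1.0                                  # a single raindrop in the middle
for _ in range(200):
    u_demo, ut_demo = step_1d(u_demo, ut_demo)
print(u_demo.round(3))                           # the disturbance spreads outward and decays
###Output
_____no_output_____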
###Markdown
Run the simulation. This is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setupA few imports you'll need.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
#Import libraries for simulation
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDEOur pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulationThis is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setupA few imports you'll need.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
#Import libraries for simulation
import tensorflow as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDEOur pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulationThis is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setup. A few imports you'll need.
###Code
#Import libraries for simulation
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDEOur pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulationThis is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Partial Differential Equations Run in Google Colab View source on GitHub TensorFlow isn't just for machine learning. Here you will use TensorFlow to simulate the behavior of a [partial differential equation](https://en.wikipedia.org/wiki/Partial_differential_equation). You'll simulate the surface of a square pond as a few raindrops land on it. Basic setupA few imports you'll need.
###Code
#Import libraries for simulation
import tensorflow.compat.v1 as tf
import numpy as np
#Imports for visualization
import PIL.Image
from io import BytesIO
from IPython.display import clear_output, Image, display
###Output
_____no_output_____
###Markdown
A function for displaying the state of the pond's surface as an image.
###Code
def DisplayArray(a, fmt='jpeg', rng=[0,1]):
"""Display an array as a picture."""
a = (a - rng[0])/float(rng[1] - rng[0])*255
a = np.uint8(np.clip(a, 0, 255))
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
clear_output(wait = True)
display(Image(data=f.getvalue()))
###Output
_____no_output_____
###Markdown
Here you start an interactive TensorFlow session for convenience in playing around. A regular session would work as well if you were doing this in an executable .py file.
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Computational convenience functions
###Code
def make_kernel(a):
"""Transform a 2D array into a convolution kernel"""
a = np.asarray(a)
a = a.reshape(list(a.shape) + [1,1])
return tf.constant(a, dtype=1)
def simple_conv(x, k):
"""A simplified 2D convolution operation"""
x = tf.expand_dims(tf.expand_dims(x, 0), -1)
y = tf.nn.depthwise_conv2d(x, k, [1, 1, 1, 1], padding='SAME')
return y[0, :, :, 0]
def laplace(x):
"""Compute the 2D laplacian of an array"""
laplace_k = make_kernel([[0.5, 1.0, 0.5],
[1.0, -6., 1.0],
[0.5, 1.0, 0.5]])
return simple_conv(x, laplace_k)
###Output
_____no_output_____
###Markdown
Define the PDEOur pond is a perfect 500 x 500 square, as is the case for most ponds found in nature.
###Code
N = 500
###Output
_____no_output_____
###Markdown
Here you create a pond and hit it with some rain drops.
###Code
# Initial Conditions -- some rain drops hit a pond
# Set everything to zero
u_init = np.zeros([N, N], dtype=np.float32)
ut_init = np.zeros([N, N], dtype=np.float32)
# Some rain drops hit a pond at random points
for n in range(40):
a,b = np.random.randint(0, N, 2)
u_init[a,b] = np.random.uniform()
DisplayArray(u_init, rng=[-0.1, 0.1])
###Output
_____no_output_____
###Markdown
Now you specify the details of the differential equation.
###Code
# Parameters:
# eps -- time resolution
# damping -- wave damping
eps = tf.placeholder(tf.float32, shape=())
damping = tf.placeholder(tf.float32, shape=())
# Create variables for simulation state
U = tf.Variable(u_init)
Ut = tf.Variable(ut_init)
# Discretized PDE update rules
U_ = U + eps * Ut
Ut_ = Ut + eps * (laplace(U) - damping * Ut)
# Operation to update the state
step = tf.group(
U.assign(U_),
Ut.assign(Ut_))
###Output
_____no_output_____
###Markdown
Run the simulationThis is where it gets fun -- running time forward with a simple for loop.
###Code
# Initialize state to initial conditions
tf.global_variables_initializer().run()
# Run 1000 steps of PDE
for i in range(1000):
# Step simulation
step.run({eps: 0.03, damping: 0.04})
# Show final image
DisplayArray(U.eval(), rng=[-0.1, 0.1])
###Output
_____no_output_____ |
07 - Work with Compute.ipynb | ###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
from azureml.data.datapath import DataPath
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
Dataset.File.upload_directory(src_dir='data',
target=DataPath(default_ds, 'diabetes-data/')
)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
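###Markdown
If you want to sanity-check the registered data before training, the following cell is an optional sketch (it assumes the dataset registered above) that loads the tabular dataset into a pandas dataframe and shows the first few rows.
###Code
from azureml.core import Dataset

# Retrieve the registered dataset by name and preview the data
diabetes_check = Dataset.get_by_name(ws, name='diabetes dataset')
diabetes_check.to_pandas_dataframe().head()
###Output
_____no_output_____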
###Markdown
Create a training script. Run the following two cells to create: 1. A folder for a new experiment; 2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note: files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment. When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**. You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies. Run the following cell to create a Conda specification file named *experiment_env.yml* in the same folder as this notebook.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment. The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit this; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
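###Markdown
If you want local copies of those outputs, the cell below is an optional sketch that downloads the trained model file from the run (the file name assumes the script above, which saved the model as outputs/diabetes_model.pkl).
###Code
import os

# Download an output file from the run to a local folder
download_folder = 'downloaded-files'
os.makedirs(download_folder, exist_ok=True)
run.download_file(name='outputs/diabetes_model.pkl',
                  output_file_path=os.path.join(download_folder, 'diabetes_model.pkl'))
print(os.listdir(download_folder))
###Output
_____no_output_____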
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*). With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note: files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context. Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments. In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
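###Markdown
Any environment returned by the listing above can be retrieved by name with Environment.get. The cell below is an illustrative sketch; the curated environment name it uses is an assumption and may not exist in your workspace, so substitute one of the names printed above.
###Code
from azureml.core import Environment

env_name = "AzureML-Minimal"   # assumed example name - replace with one from the list above
curated_env = Environment.get(workspace=ws, name=env_name)
print(curated_env.name)
print(curated_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____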
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments). Create a compute cluster. In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them. You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
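###Markdown
If you're unsure which VM size to request (see the note that follows), the cell below is an optional sketch that lists some of the VM sizes supported for Azure Machine Learning compute in your workspace's region.
###Code
from azureml.core.compute import AmlCompute

# Each entry is a dictionary describing a supported VM size (name, vCPUs, memory, GPUs, ...)
for vm_size in AmlCompute.supported_vmsizes(workspace=ws)[:10]:
    print(vm_size)
###Output
_____no_output_____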
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota. Run an experiment on remote compute. Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **⚫** to **◯**, the code has finished running. After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
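###Markdown
As an optional clean-up step (a sketch, not part of the original lab), you can scale the cluster down to zero idle nodes, or delete it entirely, once you've finished experimenting so that it doesn't incur unnecessary compute charges.
###Code
# Scale the cluster down so idle nodes are released
training_cluster.update(min_nodes=0)

# Or remove the cluster completely (uncomment to delete)
# training_cluster.delete()
###Output
_____no_output_____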
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspace
To get started with the work in this notebook, first connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set the regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Plot the ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note: files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, such as the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, so that your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (if you don't, Azure ML will install it automatically, but you'll see a warning in the log).
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits the experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it, but it's included here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required. You can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained with **scikit-learn** and the ROC chart image generated with **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having defined an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
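###Markdown
If you later need to pin a script to a specific version of the registered environment, you can retrieve it by name and version. The cell below is a minimal sketch (not part of the original lab); the version number shown is illustrative.
###Code
from azureml.core import Environment

# Get the latest registered version of the environment
latest_env = Environment.get(workspace=ws, name='diabetes-experiment-env')
print(latest_env.name, 'version', latest_env.version)

# Or pin to a specific registered version (the value 1 is just an example)
pinned_env = Environment.get(workspace=ws, name='diabetes-experiment-env', version=1)
###Output
_____no_output_____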
###Markdown
The environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. To see an example of that reuse, let's create a folder and script that train a diabetes model using a different algorithm.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# Load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Plot the ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by the model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# Note: files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time, because a decision tree classifier doesn't require one).
###Code
# Get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly, because a matching environment was cached from the previous run and doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script's execution context.
Let's look at the metrics and outputs from this experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments.
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
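###Markdown
As a sketch (not part of the original lab), you could also assign one of these curated environments to a script run instead of defining your own. The curated environment name below is an assumption - pick one from the list printed above that includes the packages your script needs (for the diabetes script that means scikit-learn, pandas, and matplotlib).
###Code
from azureml.core import Environment, ScriptRunConfig

# Fetch a curated environment by name (the name is an example - confirm it appears in the list above)
curated_env = Environment.get(workspace=ws, name='AzureML-Tutorial')

# Use it in a script configuration exactly like a custom environment
curated_config = ScriptRunConfig(source_directory=experiment_folder,
                                 script='diabetes_training.py',
                                 arguments=['--input-data', diabetes_ds.as_named_input('training_data')],
                                 environment=curated_env)
###Output
_____no_output_____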
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient for a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only while you're using them.
You can create a compute cluster in [Azure Machine Learning Studio](https://ml.azure.com) or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name and, if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
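###Markdown
The provisioning configuration above only sets the VM size and the maximum number of nodes. As a sketch (not part of the original lab), you can also control how the cluster scales down when idle; the values below are illustrative.
###Code
from azureml.core.compute import AmlCompute

# Example scale settings: allow the cluster to shrink to 0 nodes after 30 idle minutes
scaled_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
                                                      min_nodes=0,
                                                      max_nodes=2,
                                                      idle_seconds_before_scaledown=1800)
###Output
_____no_output_____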
###Markdown
Run an experiment on a remote compute target
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you just created.
> **Note**: The experiment will take quite a lot longer this time, because a container image with the conda environment must be built, and the cluster nodes must be started and the image deployed before the script can run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute could reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check the status of the compute in the widget above or in [Azure Machine Learning Studio](https://ml.azure.com). You can also check the compute status using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
It will take a while before the status changes from *steady* to *resizing*, so now might be a good time for a short break. To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
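###Markdown
As a final sketch (not part of the original lab), you can retrieve the registered model by name elsewhere - for example to download the .pkl file for local testing. The target folder name below is illustrative.
###Code
from azureml.core import Model

# Get the latest registered version of the model
diabetes_model = Model(ws, 'diabetes_model')
print(diabetes_model.name, 'version', diabetes_model.version)

# Download the model file to a local folder (the folder name is just an example)
local_path = diabetes_model.download(target_dir='downloaded_model', exist_ok=True)
print('Downloaded to', local_path)
###Output
_____no_output_____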
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Before you start
If you haven't already done so, you must install the latest version of the **azureml-sdk** and **azureml-widgets** packages before running this notebook. To do this, run the cell below and then ***restart the kernel*** before running the subsequent cells.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspace
With the latest version of the SDK installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
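###Markdown
As an aside (a hedged sketch, not part of the original lab), an equivalent environment can also be created from a conda specification file rather than building the dependencies in code - handy if you already maintain one. The file name below is an assumption.
###Code
from azureml.core import Environment

# Create an environment from a conda specification file (assumes ./environment.yml exists)
file_env = Environment.from_conda_specification(name='diabetes-file-env',
                                                file_path='./environment.yml')
print(file_env.name, 'defined from file.')
###Output
_____no_output_____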
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
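###Markdown
If you're not sure which VM size to request, the following sketch (not part of the original lab) lists the VM sizes supported for Azure ML compute in your workspace's region, so you can confirm that *STANDARD_DS11_V2* (or an alternative) is available.
###Code
from azureml.core.compute import AmlCompute

# Each entry is a dictionary describing a supported VM size (name, cores, memory, GPUs, ...)
for vm_size in AmlCompute.supported_vmsizes(workspace=ws):
    print(vm_size)
###Output
_____no_output_____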
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
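###Markdown
You can also adjust the cluster's scale settings after it has been created. The following is a hedged sketch (not part of the original lab); the values are illustrative and help avoid paying for idle nodes.
###Code
# Allow the cluster to scale down to zero nodes after 20 idle minutes
training_cluster.update(min_nodes=0,
                        max_nodes=2,
                        idle_seconds_before_scaledown=1200)
###Output
_____no_output_____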
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
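###Markdown
If you want to inspect one of those files locally - for example a driver log - a sketch like the following (not part of the original lab) downloads it from the run. The file name is based on the log names typically listed for these runs and may differ for yours.
###Code
# Download one of the run's files to the local working directory
run.download_file(name='azureml-logs/70_driver_log.txt',
                  output_file_path='70_driver_log.txt')
###Output
_____no_output_____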
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config(".\\Working_Files\\config.json")
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.24.0 to work with AML
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
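###Markdown
To sanity-check the registered dataset before training, a small sketch like this (not part of the original lab) pulls a few rows into a pandas dataframe.
###Code
# Retrieve the registered dataset and preview the first few rows
diabetes_preview = ws.datasets.get('diabetes dataset')
print(diabetes_preview.take(5).to_pandas_dataframe())
###Output
_____no_output_____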
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
#diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568520112794097
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616134461_29ddbe01/ROC_1616134926.png
ROC_1616134926.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/15924_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8988888888888888
AUC 0.884901225534219
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616134947_867332ef/ROC_1616134964.png
ROC_1616134964.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/4396_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
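###Markdown
Since both models were trained in the same *mslearn-train-diabetes* experiment, a quick sketch like the following (not part of the original lab) iterates over the experiment's runs so you can compare their logged metrics.
###Code
from azureml.core import Experiment

# List recent runs of the experiment along with their status and logged metrics
diabetes_experiment = Experiment(workspace=ws, name='mslearn-train-diabetes')
for past_run in diabetes_experiment.get_runs():
    print(past_run.id, past_run.get_status(), past_run.get_metrics())
###Output
_____no_output_____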
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-PyTorch-1.0-GPU
Name AzureML-Scikit-learn-0.20.3
Name AzureML-TensorFlow-1.12-CPU
Name AzureML-PyTorch-1.2-GPU
Name AzureML-TensorFlow-2.0-GPU
Name AzureML-TensorFlow-2.0-CPU
Name AzureML-Chainer-5.1.0-GPU
Name AzureML-TensorFlow-1.13-CPU
Name AzureML-Minimal
Name AzureML-Chainer-5.1.0-CPU
Name AzureML-PySpark-MmlSpark-0.15
Name AzureML-PyTorch-1.3-CPU
Name AzureML-PyTorch-1.1-GPU
Name AzureML-TensorFlow-1.10-GPU
Name AzureML-PyTorch-1.2-CPU
Name AzureML-TensorFlow-1.13-GPU
Name AzureML-TensorFlow-1.10-CPU
Name AzureML-PyTorch-1.3-GPU
Name AzureML-Tutorial
Name AzureML-PyTorch-1.0-CPU
Name AzureML-PyTorch-1.1-CPU
Name AzureML-TensorFlow-1.12-GPU
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-TensorFlow-2.1-GPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.4-GPU
Name AzureML-TensorFlow-2.1-CPU
Name AzureML-Hyperdrive-ForecastDNN
Name AzureML-PyTorch-1.4-CPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-AutoML-GPU
Name AzureML-Designer-Score
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-PyTorch-1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-Scikit-learn-0.20.3
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- scikit-learn==0.20.3
- scipy==1.2.1
- joblib==0.13.2
name: azureml_3d6fa1d835846f1a28a18b506bcad70f
Name AzureML-TensorFlow-1.12-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.12
- horovod==0.15.2
name: azureml_935139c0a8e56a190fafce06d6edc3cd
Name AzureML-PyTorch-1.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-TensorFlow-2.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.0.0
- horovod==0.18.1
name: azureml_65a7428a47e1ac7aed09e91b25d6e127
Name AzureML-TensorFlow-2.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.0
- horovod==0.18.1
name: azureml_1a75e67c0587456b4ca58af5ea7ce7f7
Name AzureML-Chainer-5.1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- cupy-cuda90==5.1.0
- mpi4py==3.0.0
name: azureml_ddd7019e826fef0c011fe2473301bad4
Name AzureML-TensorFlow-1.13-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.13.1
- horovod==0.16.1
name: azureml_71d30d49ae0ea16ff794742485e953e5
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
name: azureml_39d18bde647c9e3afa8a97c1b8e8468f
Name AzureML-Chainer-5.1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- mpi4py==3.0.0
name: azureml_5beb73f5839a4cc0a61198ee0bfa449d
Name AzureML-PySpark-MmlSpark-0.15
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
name: azureml_ba04eb03753f110d643f552f15c3bb42
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-PyTorch-1.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_89dbc5bca1a4bdc6fd62f99a3d6295e5
Name AzureML-TensorFlow-1.10-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.10.0
- horovod==0.15.2
name: azureml_1c4b6b5c3d2c6ddcf034838a695c12de
Name AzureML-PyTorch-1.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-TensorFlow-1.13-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.13.1
- horovod==0.16.1
name: azureml_08e699281a2ab6d3b68ab09f106952c4
Name AzureML-TensorFlow-1.10-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.10
- horovod==0.15.2
name: azureml_3810220929dbc5cb90f19492d15e7151
Name AzureML-PyTorch-1.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-widgets==1.21.0
- azureml-pipeline-core==1.21.0
- azureml-pipeline-steps==1.21.0
- azureml-opendatasets==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-train-automl==1.21.0
- azureml-train==1.21.0
- azureml-sdk==1.21.0
- azureml-interpret==1.21.0
- azureml-tensorboard==1.21.0
- azureml-mlflow==1.21.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_df6ad66e80d4bc0030b6d046a4e46427
Name AzureML-PyTorch-1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-PyTorch-1.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-telemetry==1.19.0
- azureml-train-restclients-hyperdrive==1.19.0
- azureml-train-core==1.19.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6e145d82f92c27509a9b9e457edff086
Name AzureML-TensorFlow-1.12-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.12.0
- horovod==0.15.2
name: azureml_f6491bb45aa53d4e966d894b801f618f
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_bffd025ba247b2f6ba16288746ca76d1
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_ff6c5e7cf1cbe3e8ae7acc2938177052
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_140c1aa5004c5a4a803b984404272b7b
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9d2a515d5c77954f2d0562cc5eb8a1fc
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults[async]
- azureml-contrib-services==1.21.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.4.0-py3-none-manylinux1_x86_64.whl
name: azureml_c12df398a0c995ce0030ed7e73c50b18
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_b18c1b901df407fc9b08209bb6771b6d
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_60ad88840fdbe40e31e03ddbbc134dec
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.21.0.post2
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-mlflow==1.21.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_906c8afffa36ce16d94f224cc03d7c62
Name AzureML-TensorFlow-2.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.1.0
- horovod==0.19.1
name: azureml_060b2dd5226b12c758ebdfc8056984b9
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-PyTorch-1.4-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-TensorFlow-2.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.1.0
- horovod==0.19.1
name: azureml_12fcb82f6ee32ce4eecb8a52dcd60745
Name AzureML-Hyperdrive-ForecastDNN
packages dependencies:
- python=3.7
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-contrib-automl-dnn-forecasting==1.21.0
name: azureml_551b0d285970bc512cb183aa28be2c7f
Name AzureML-PyTorch-1.4-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.24.0.post1
- azureml-pipeline-core==1.24.0
- azureml-telemetry==1.24.0
- azureml-defaults==1.24.0
- azureml-interpret==1.24.0
- azureml-automl-core==1.24.0
- azureml-automl-runtime==1.24.0
- azureml-train-automl-client==1.24.0
- azureml-train-automl-runtime==1.24.0
- azureml-dataset-runtime==1.24.0
- azureml-dataprep==2.11.2
- azureml-mlflow==1.24.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_cde433fc51995440f5f84a38d2f2e6fd
Name AzureML-Designer-Score
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-score-modules==0.0.16
name: azureml_18573b1d77e5ef62bcbe8903c11ceafe
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "LC-ML-Cluster-1"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 0
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
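###Markdown
Since both the logistic regression run and the decision tree run were submitted to the same experiment, you can compare them side by side. The following cell is a small sketch (not part of the original lab) that iterates over the experiment's runs and prints the AUC and Accuracy metrics each one logged.
###Code
# Compare the metrics logged by all runs in this experiment
for past_run in experiment.get_runs():
    past_metrics = past_run.get_metrics()
    print(past_run.id, past_run.get_status(),
          'AUC:', past_metrics.get('AUC'),
          'Accuracy:', past_metrics.get('Accuracy'))
###Output
_____no_output_____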
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
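###Markdown
If you want to inspect one of these definitions offline, you can dump its conda dependencies to a YAML file. This is a small illustrative sketch (not part of the original lab): it takes the first curated environment found and writes its serialized dependencies to a local file using plain file I/O; the output file name is just an example.
###Code
# Write the conda dependencies of the first curated environment to a local YAML file
curated_names = [name for name in envs if name.startswith("AzureML")]
if curated_names:
    env_name = curated_names[0]
    yaml_text = envs[env_name].python.conda_dependencies.serialize_to_string()
    file_name = env_name.replace("/", "_") + ".yml"
    with open(file_name, "w") as f:
        f.write(yaml_text)
    print("Saved", file_name)
###Output
_____no_output_____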
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
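###Markdown
As an optional follow-up sketch (not part of the original lab), you can pull a copy of the registered model back to your local file system. The `Model` constructor retrieves the latest registered version by name, and `download` copies its files into a target folder - the folder name used here is just an example.
###Code
from azureml.core import Model

# Download the latest registered version of the diabetes model to a local folder
diabetes_model = Model(ws, name='diabetes_model')
local_path = diabetes_model.download(target_dir='downloaded_model', exist_ok=True)
print('Model files downloaded to:', local_path)
###Output
_____no_output_____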
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the following cell to create a Conda specification file named *experiment_env.yml* in the same folder as this notebook.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment.
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it, but it's included here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments). Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. You can also ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.Run the following cell to create a Conda specification file named *experiment_env.yml* in the same folder as this notebook.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run, and setting its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit this; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
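###Markdown
 If the VM size recommended in the note below is not available under your subscription quota, you can list the VM sizes that AmlCompute supports in your workspace's region and pick an alternative. A minimal sketch, assuming the `ws` workspace object from earlier; `AmlCompute.supported_vmsizes` returns a list of size descriptions.
###Code
from azureml.core.compute import AmlCompute

# Minimal sketch: list the VM sizes available for AmlCompute in this workspace's region
vm_sizes = AmlCompute.supported_vmsizes(workspace=ws)
for vm_size in vm_sizes[:10]:  # print only the first few entries for brevity
    print(vm_size)
###Output
_____no_output_____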
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
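###Markdown
 If you need the registered model again later (for example, to deploy it as a service), you can retrieve the latest version by name. A minimal sketch, assuming the 'diabetes_model' name registered above; constructing a `Model` with only a workspace and a name returns the most recently registered version.
###Code
from azureml.core import Model

# Minimal sketch: retrieve the latest registered version of the model by name
latest_model = Model(ws, name='diabetes_model')
print(latest_model.name, 'version', latest_model.version)
print('Properties:', latest_model.properties)
###Output
_____no_output_____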
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the cell below to create a Conda specification file named *experiment_env.yml* in the experiment folder.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use the custom conda specification file to create an environment for the experiment.
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits the experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True**, so that the script's environment is hosted in a Docker container. This is the default behavior, so you can omit it, but it's included here explicitly.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
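###Markdown
 Instead of iterating over every environment, you can also fetch a single environment by name and inspect its packages. A minimal sketch, assuming a curated environment named 'AzureML-Minimal' exists in your workspace (curated environment names vary between workspaces, so substitute a name from the listing above if needed).
###Code
from azureml.core import Environment

# Minimal sketch: fetch one curated environment by name and inspect its conda dependencies
curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')
print(curated_env.name)
print(curated_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____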
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with dp100_ml
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
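###Markdown
 You can also save the environment definition you just created to a local folder, which makes it easy to review or share the generated conda specification. A minimal sketch, assuming `diabetes_env` from the cell above and that your azureml-core version provides `Environment.save_to_directory`; the target folder name is arbitrary.
###Code
import os

# Minimal sketch: save the environment definition (including its conda dependencies) to a local folder
diabetes_env.save_to_directory(path='./diabetes-experiment-env-spec', overwrite=True)
print(os.listdir('./diabetes-experiment-env-spec'))
###Output
_____no_output_____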
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1615922483_526b382d/ROC_1615922925.png
ROC_1615922925.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9024444444444445
AUC 0.8879060007910097
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1615922936_2254f343/ROC_1615922950.png
ROC_1615922950.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-Tutorial
Name AzureML-Minimal
Name AzureML-Chainer-5.1.0-GPU
Name AzureML-PyTorch-1.2-CPU
Name AzureML-TensorFlow-1.12-CPU
Name AzureML-TensorFlow-1.13-CPU
Name AzureML-PyTorch-1.1-CPU
Name AzureML-TensorFlow-1.10-CPU
Name AzureML-PyTorch-1.0-GPU
Name AzureML-TensorFlow-1.12-GPU
Name AzureML-TensorFlow-1.13-GPU
Name AzureML-Chainer-5.1.0-CPU
Name AzureML-PyTorch-1.0-CPU
Name AzureML-Scikit-learn-0.20.3
Name AzureML-PyTorch-1.2-GPU
Name AzureML-PyTorch-1.1-GPU
Name AzureML-TensorFlow-1.10-GPU
Name AzureML-PyTorch-1.3-GPU
Name AzureML-TensorFlow-2.0-CPU
Name AzureML-PyTorch-1.3-CPU
Name AzureML-TensorFlow-2.0-GPU
Name AzureML-PySpark-MmlSpark-0.15
Name AzureML-PyTorch-1.4-GPU
Name AzureML-PyTorch-1.4-CPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-Hyperdrive-ForecastDNN
Name AzureML-AutoML-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-Designer-Score
Name AzureML-TensorFlow-2.1-GPU
Name AzureML-TensorFlow-2.1-CPU
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-widgets==1.21.0
- azureml-pipeline-core==1.21.0
- azureml-pipeline-steps==1.21.0
- azureml-opendatasets==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-train-automl==1.21.0
- azureml-train==1.21.0
- azureml-sdk==1.21.0
- azureml-interpret==1.21.0
- azureml-tensorboard==1.21.0
- azureml-mlflow==1.21.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_df6ad66e80d4bc0030b6d046a4e46427
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
name: azureml_39d18bde647c9e3afa8a97c1b8e8468f
Name AzureML-Chainer-5.1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- cupy-cuda90==5.1.0
- mpi4py==3.0.0
name: azureml_ddd7019e826fef0c011fe2473301bad4
Name AzureML-PyTorch-1.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-TensorFlow-1.12-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.12
- horovod==0.15.2
name: azureml_935139c0a8e56a190fafce06d6edc3cd
Name AzureML-TensorFlow-1.13-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.13.1
- horovod==0.16.1
name: azureml_71d30d49ae0ea16ff794742485e953e5
Name AzureML-PyTorch-1.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-telemetry==1.19.0
- azureml-train-restclients-hyperdrive==1.19.0
- azureml-train-core==1.19.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6e145d82f92c27509a9b9e457edff086
Name AzureML-TensorFlow-1.10-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.10
- horovod==0.15.2
name: azureml_3810220929dbc5cb90f19492d15e7151
Name AzureML-PyTorch-1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-TensorFlow-1.12-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.12.0
- horovod==0.15.2
name: azureml_f6491bb45aa53d4e966d894b801f618f
Name AzureML-TensorFlow-1.13-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.13.1
- horovod==0.16.1
name: azureml_08e699281a2ab6d3b68ab09f106952c4
Name AzureML-Chainer-5.1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- mpi4py==3.0.0
name: azureml_5beb73f5839a4cc0a61198ee0bfa449d
Name AzureML-PyTorch-1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-Scikit-learn-0.20.3
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- scikit-learn==0.20.3
- scipy==1.2.1
- joblib==0.13.2
name: azureml_3d6fa1d835846f1a28a18b506bcad70f
Name AzureML-PyTorch-1.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-PyTorch-1.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_89dbc5bca1a4bdc6fd62f99a3d6295e5
Name AzureML-TensorFlow-1.10-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.10.0
- horovod==0.15.2
name: azureml_1c4b6b5c3d2c6ddcf034838a695c12de
Name AzureML-PyTorch-1.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-TensorFlow-2.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.0
- horovod==0.18.1
name: azureml_1a75e67c0587456b4ca58af5ea7ce7f7
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-TensorFlow-2.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.0.0
- horovod==0.18.1
name: azureml_65a7428a47e1ac7aed09e91b25d6e127
Name AzureML-PySpark-MmlSpark-0.15
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
name: azureml_ba04eb03753f110d643f552f15c3bb42
Name AzureML-PyTorch-1.4-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-PyTorch-1.4-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-Hyperdrive-ForecastDNN
packages dependencies:
- python=3.7
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-contrib-automl-dnn-forecasting==1.21.0
name: azureml_551b0d285970bc512cb183aa28be2c7f
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.24.0.post1
- azureml-pipeline-core==1.24.0
- azureml-telemetry==1.24.0
- azureml-defaults==1.24.0
- azureml-interpret==1.24.0
- azureml-automl-core==1.24.0
- azureml-automl-runtime==1.24.0
- azureml-train-automl-client==1.24.0
- azureml-train-automl-runtime==1.24.0
- azureml-dataset-runtime==1.24.0
- azureml-dataprep==2.11.2
- azureml-mlflow==1.24.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_cde433fc51995440f5f84a38d2f2e6fd
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-Designer-Score
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-score-modules==0.0.16
name: azureml_18573b1d77e5ef62bcbe8903c11ceafe
Name AzureML-TensorFlow-2.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.1.0
- horovod==0.19.1
name: azureml_060b2dd5226b12c758ebdfc8056984b9
Name AzureML-TensorFlow-2.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.1.0
- horovod==0.19.1
name: azureml_12fcb82f6ee32ce4eecb8a52dcd60745
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_bffd025ba247b2f6ba16288746ca76d1
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_ff6c5e7cf1cbe3e8ae7acc2938177052
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_140c1aa5004c5a4a803b984404272b7b
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9d2a515d5c77954f2d0562cc5eb8a1fc
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults[async]
- azureml-contrib-services==1.21.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.4.0-py3-none-manylinux1_x86_64.whl
name: azureml_c12df398a0c995ce0030ed7e73c50b18
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_b18c1b901df407fc9b08209bb6771b6d
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_60ad88840fdbe40e31e03ddbbc134dec
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.21.0.post2
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-mlflow==1.21.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_906c8afffa36ce16d94f224cc03d7c62
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "dp100cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 0
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below. Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9
AUC 0.8852500572906943
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1615923144_47a27316/ROC_1615924022.png
ROC_1615924022.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_aae7cadae4c4c883584682e25dc3c159c2ab5833cc15f919254ec80c5cede709_d.txt
azureml-logs/65_job_prep-tvmps_aae7cadae4c4c883584682e25dc3c159c2ab5833cc15f919254ec80c5cede709_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_aae7cadae4c4c883584682e25dc3c159c2ab5833cc15f919254ec80c5cede709_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/113_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 6
Training context : Compute cluster
AUC : 0.8852500572906943
Accuracy : 0.9
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8852500572906943
Accuracy : 0.9
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
amlstudio-designer-predict-dia version: 2
CreatedByAMLStudio : true
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoMLafb0d63c21 version: 1
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
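###Markdown
If you want to check exactly what will be installed before submitting an experiment, you can print the environment's conda specification - an optional sketch using the **diabetes_env** object defined above.
###Code
# Print the conda specification that Azure ML will use to build this environment
print(diabetes_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____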
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit this; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
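###Markdown
The **register** method returns the stored environment object, so you can capture it to confirm the name and version that were recorded - a small optional check (minimal sketch).
###Code
# Capture the return value to see the registered name and version
stored_env = diabetes_env.register(workspace=ws)
print(stored_env.name, 'version', stored_env.version)
###Output
_____no_output_____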
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
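###Markdown
If a curated environment already meets your needs, you can fetch it by name and pass it to a ScriptRunConfig instead of defining your own - a minimal sketch, assuming the *AzureML-Minimal* curated environment is available in your workspace.
###Code
from azureml.core import Environment

# Get a curated environment by name
curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')
print(curated_env.name)
###Output
_____no_output_____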
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
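###Markdown
Because an invalid name causes the provisioning request to fail, it can be worth checking the cluster name locally before calling the SDK. The cell below is a minimal sketch of the naming rules described above (2-16 characters; letters, digits and dashes; starting with a letter and ending with a letter or digit) - it is just a local check, not part of the Azure ML SDK.
###Code
import re

def is_valid_cluster_name(name):
    # Must start with a letter, end with a letter or digit, use only letters/digits/dashes, and be 2-16 characters
    return re.fullmatch(r'[A-Za-z][A-Za-z0-9-]{0,14}[A-Za-z0-9]', name) is not None

print(cluster_name, 'is a valid cluster name:', is_valid_cluster_name(cluster_name))
###Output
_____no_output_____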
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
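###Markdown
You can also list every compute target that is defined in the workspace, which is useful when you're not sure what has already been provisioned - a minimal sketch using the **ws** object from earlier.
###Code
# List the compute targets registered in the workspace and their types
for compute_name, compute_target in ws.compute_targets.items():
    print(compute_name, ':', compute_target.type)
###Output
_____no_output_____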
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
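###Markdown
When you've finished experimenting, the cluster scales back down to zero nodes when idle (the default minimum for an Azure ML compute cluster), but you can also delete it explicitly - an optional clean-up sketch; uncomment the call if you no longer need the cluster.
###Code
# Delete the compute cluster when it is no longer needed
# training_cluster.delete()
###Output
_____no_output_____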
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Install the Azure Machine Learning SDKThe Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release, along with the additional package to support notebook widgets.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
Requirement already satisfied: azureml-sdk in /usr/local/anaconda3/lib/python3.8/site-packages (1.19.0)
Requirement already satisfied: azureml-widgets in /usr/local/anaconda3/lib/python3.8/site-packages (1.19.0)
Requirement already satisfied: azureml-dataset-runtime[fuse]~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-sdk) (1.19.0.post1)
Requirement already satisfied: azureml-pipeline~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-train~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-train-automl-client~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-core~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-sdk) (1.19.0)
Requirement already satisfied: azure-mgmt-resource<15.0.0,>=1.2.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (12.0.0)
Requirement already satisfied: requests>=2.19.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (2.25.1)
Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (2.8.1)
Requirement already satisfied: backports.tempfile in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (1.0)
Requirement already satisfied: jsonpickle in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (1.4.2)
Requirement already satisfied: urllib3>=1.23 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (1.26.2)
Requirement already satisfied: azure-mgmt-storage<16.0.0,>=1.5.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (11.2.0)
Requirement already satisfied: pyopenssl<20.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (19.1.0)
Requirement already satisfied: cryptography!=1.9,!=2.0.*,!=2.1.*,!=2.2.* in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (3.3.1)
Requirement already satisfied: SecretStorage in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (3.3.0)
Requirement already satisfied: azure-mgmt-containerregistry>=2.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (2.8.0)
Requirement already satisfied: contextlib2 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.6.0.post1)
Requirement already satisfied: docker in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (4.4.1)
Requirement already satisfied: pathspec in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.8.1)
Requirement already satisfied: azure-common>=1.1.12 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (1.1.26)
Requirement already satisfied: msrestazure>=0.4.33 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.6.4)
Requirement already satisfied: azure-graphrbac<1.0.0,>=0.40.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.61.1)
Requirement already satisfied: ndg-httpsclient in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.5.1)
Requirement already satisfied: msrest>=0.5.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.6.19)
Requirement already satisfied: azure-mgmt-authorization<1.0.0,>=0.40.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.61.0)
Requirement already satisfied: adal>=1.2.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (1.2.5)
Requirement already satisfied: jmespath in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.10.0)
Requirement already satisfied: ruamel.yaml>=0.15.35 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (0.15.87)
Requirement already satisfied: azure-mgmt-keyvault<7.0.0,>=0.40.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (2.2.0)
Requirement already satisfied: pytz in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (2020.4)
Requirement already satisfied: PyJWT<2.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-core~=1.19.0->azureml-sdk) (1.7.1)
Requirement already satisfied: pyarrow<2.0.0,>=0.17.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.0.1)
Requirement already satisfied: azureml-dataprep<2.7.0a,>=2.6.0a in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (2.6.3)
Requirement already satisfied: fusepy<4.0.0,>=3.0.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (3.0.1)
Requirement already satisfied: cloudpickle<2.0.0,>=1.1.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.6.0)
Requirement already satisfied: azure-identity<1.5.0,>=1.2.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.4.1)
Requirement already satisfied: azureml-dataprep-native<27.0.0,>=26.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (26.0.0)
Requirement already satisfied: azureml-dataprep-rslex<1.5.0a,>=1.4.0dev0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.4.0)
Requirement already satisfied: dotnetcore2<3.0.0,>=2.1.14 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (2.1.20)
Requirement already satisfied: azure-core<2.0.0,>=1.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azure-identity<1.5.0,>=1.2.0->azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.9.0)
Requirement already satisfied: six>=1.6 in /usr/local/anaconda3/lib/python3.8/site-packages (from azure-identity<1.5.0,>=1.2.0->azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.15.0)
Requirement already satisfied: msal-extensions~=0.2.2 in /usr/local/anaconda3/lib/python3.8/site-packages (from azure-identity<1.5.0,>=1.2.0->azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (0.2.2)
Requirement already satisfied: msal<2.0.0,>=1.3.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azure-identity<1.5.0,>=1.2.0->azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.8.0)
Requirement already satisfied: azureml-pipeline-core~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-pipeline~=1.19.0->azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-pipeline-steps~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-pipeline~=1.19.0->azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-train-core~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-pipeline-steps~=1.19.0->azureml-pipeline~=1.19.0->azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-telemetry~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-train-automl-client~=1.19.0->azureml-sdk) (1.19.0)
Requirement already satisfied: azureml-automl-core~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-train-automl-client~=1.19.0->azureml-sdk) (1.19.0)
Requirement already satisfied: applicationinsights in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-telemetry~=1.19.0->azureml-train-automl-client~=1.19.0->azureml-sdk) (0.11.9)
Requirement already satisfied: azureml-train-restclients-hyperdrive~=1.19.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-train-core~=1.19.0->azureml-pipeline-steps~=1.19.0->azureml-pipeline~=1.19.0->azureml-sdk) (1.19.0)
Requirement already satisfied: cffi>=1.12 in /usr/local/anaconda3/lib/python3.8/site-packages (from cryptography!=1.9,!=2.0.*,!=2.1.*,!=2.2.*->azureml-core~=1.19.0->azureml-sdk) (1.14.4)
Requirement already satisfied: pycparser in /usr/local/anaconda3/lib/python3.8/site-packages (from cffi>=1.12->cryptography!=1.9,!=2.0.*,!=2.1.*,!=2.2.*->azureml-core~=1.19.0->azureml-sdk) (2.20)
Requirement already satisfied: distro>=1.2.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from dotnetcore2<3.0.0,>=2.1.14->azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.5.0)
Requirement already satisfied: portalocker~=1.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from msal-extensions~=0.2.2->azure-identity<1.5.0,>=1.2.0->azureml-dataprep<2.7.0a,>=2.6.0a->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.7.1)
Requirement already satisfied: requests-oauthlib>=0.5.0 in /Users/theo/.local/lib/python3.8/site-packages (from msrest>=0.5.1->azureml-core~=1.19.0->azureml-sdk) (1.3.0)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/anaconda3/lib/python3.8/site-packages (from msrest>=0.5.1->azureml-core~=1.19.0->azureml-sdk) (2020.12.5)
Requirement already satisfied: isodate>=0.6.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from msrest>=0.5.1->azureml-core~=1.19.0->azureml-sdk) (0.6.0)
Requirement already satisfied: numpy>=1.14 in /usr/local/anaconda3/lib/python3.8/site-packages (from pyarrow<2.0.0,>=0.17.0->azureml-dataset-runtime[fuse]~=1.19.0->azureml-sdk) (1.19.2)
Requirement already satisfied: chardet<5,>=3.0.2 in /usr/local/anaconda3/lib/python3.8/site-packages (from requests>=2.19.1->azureml-core~=1.19.0->azureml-sdk) (4.0.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/anaconda3/lib/python3.8/site-packages (from requests>=2.19.1->azureml-core~=1.19.0->azureml-sdk) (2.10)
Requirement already satisfied: oauthlib>=3.0.0 in /Users/theo/.local/lib/python3.8/site-packages (from requests-oauthlib>=0.5.0->msrest>=0.5.1->azureml-core~=1.19.0->azureml-sdk) (3.1.0)
Requirement already satisfied: ipywidgets>=7.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from azureml-widgets) (7.5.1)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipywidgets>=7.0.0->azureml-widgets) (3.5.1)
Requirement already satisfied: nbformat>=4.2.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipywidgets>=7.0.0->azureml-widgets) (5.0.7)
Requirement already satisfied: ipython>=4.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipywidgets>=7.0.0->azureml-widgets) (7.19.0)
Requirement already satisfied: ipykernel>=4.5.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipywidgets>=7.0.0->azureml-widgets) (5.3.4)
Requirement already satisfied: traitlets>=4.3.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipywidgets>=7.0.0->azureml-widgets) (4.3.3)
Requirement already satisfied: appnope in /usr/local/anaconda3/lib/python3.8/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.0.0->azureml-widgets) (0.1.2)
Requirement already satisfied: jupyter-client in /usr/local/anaconda3/lib/python3.8/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.0.0->azureml-widgets) (6.1.7)
Requirement already satisfied: tornado>=4.2 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.0.0->azureml-widgets) (6.1)
Requirement already satisfied: backcall in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (0.2.0)
Requirement already satisfied: decorator in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (4.4.2)
Requirement already satisfied: jedi>=0.10 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (0.17.1)
Requirement already satisfied: setuptools>=18.5 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (51.0.0.post20201207)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (3.0.8)
Requirement already satisfied: pickleshare in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (0.7.5)
Requirement already satisfied: pygments in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (2.7.3)
Requirement already satisfied: pexpect>4.3 in /usr/local/anaconda3/lib/python3.8/site-packages (from ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (4.8.0)
Requirement already satisfied: parso<0.8.0,>=0.7.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from jedi>=0.10->ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (0.7.0)
Requirement already satisfied: ipython-genutils in /usr/local/anaconda3/lib/python3.8/site-packages (from nbformat>=4.2.0->ipywidgets>=7.0.0->azureml-widgets) (0.2.0)
Requirement already satisfied: jupyter-core in /usr/local/anaconda3/lib/python3.8/site-packages (from nbformat>=4.2.0->ipywidgets>=7.0.0->azureml-widgets) (4.7.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/local/anaconda3/lib/python3.8/site-packages (from nbformat>=4.2.0->ipywidgets>=7.0.0->azureml-widgets) (3.2.0)
Requirement already satisfied: pyrsistent>=0.14.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets>=7.0.0->azureml-widgets) (0.17.3)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from jsonschema!=2.5.0,>=2.4->nbformat>=4.2.0->ipywidgets>=7.0.0->azureml-widgets) (20.3.0)
Requirement already satisfied: ptyprocess>=0.5 in /usr/local/anaconda3/lib/python3.8/site-packages (from pexpect>4.3->ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (0.6.0)
Requirement already satisfied: wcwidth in /usr/local/anaconda3/lib/python3.8/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=4.0.0->ipywidgets>=7.0.0->azureml-widgets) (0.2.5)
Requirement already satisfied: notebook>=4.4.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (6.0.3)
Requirement already satisfied: Send2Trash in /usr/local/anaconda3/lib/python3.8/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (1.5.0)
Requirement already satisfied: terminado>=0.8.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.9.1)
Requirement already satisfied: jinja2 in /usr/local/anaconda3/lib/python3.8/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (2.11.2)
Requirement already satisfied: nbconvert in /usr/local/anaconda3/lib/python3.8/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (5.6.1)
Requirement already satisfied: pyzmq>=17 in /usr/local/anaconda3/lib/python3.8/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (20.0.0)
Requirement already satisfied: prometheus-client in /usr/local/anaconda3/lib/python3.8/site-packages (from notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.9.0)
Requirement already satisfied: backports.weakref in /usr/local/anaconda3/lib/python3.8/site-packages (from backports.tempfile->azureml-core~=1.19.0->azureml-sdk) (1.0.post1)
Requirement already satisfied: websocket-client>=0.32.0 in /usr/local/anaconda3/lib/python3.8/site-packages (from docker->azureml-core~=1.19.0->azureml-sdk) (0.57.0)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/anaconda3/lib/python3.8/site-packages (from jinja2->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (1.1.1)
Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.8.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (1.4.3)
Requirement already satisfied: defusedxml in /usr/local/anaconda3/lib/python3.8/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.6.0)
Requirement already satisfied: testpath in /usr/local/anaconda3/lib/python3.8/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.4.4)
Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/anaconda3/lib/python3.8/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.3)
Requirement already satisfied: bleach in /usr/local/anaconda3/lib/python3.8/site-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (3.2.1)
Requirement already satisfied: webencodings in /usr/local/anaconda3/lib/python3.8/site-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (0.5.1)
Requirement already satisfied: packaging in /usr/local/anaconda3/lib/python3.8/site-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (20.8)
Requirement already satisfied: pyasn1>=0.1.1 in /usr/local/anaconda3/lib/python3.8/site-packages (from ndg-httpsclient->azureml-core~=1.19.0->azureml-sdk) (0.4.8)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/anaconda3/lib/python3.8/site-packages (from packaging->bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.5.0->ipywidgets>=7.0.0->azureml-widgets) (2.4.7)
Requirement already satisfied: jeepney>=0.6 in /usr/local/anaconda3/lib/python3.8/site-packages (from SecretStorage->azureml-core~=1.19.0->azureml-sdk) (0.6.0)
###Markdown
Connect to your workspaceWith the latest version of the SDK installed, now you're ready to connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.19.0 to work with Azure_Learning_ML
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name AzureML-PyTorch-1.3-GPU
Name AzureML-TensorFlow-2.0-CPU
Name AzureML-Tutorial
Name AzureML-PyTorch-1.3-CPU
Name AzureML-TensorFlow-2.0-GPU
Name AzureML-Chainer-5.1.0-GPU
Name AzureML-Minimal
Name AzureML-PyTorch-1.2-CPU
Name AzureML-TensorFlow-1.12-CPU
Name AzureML-TensorFlow-1.13-CPU
Name AzureML-PyTorch-1.1-CPU
Name AzureML-TensorFlow-1.10-CPU
Name AzureML-PyTorch-1.0-GPU
Name AzureML-TensorFlow-1.12-GPU
Name AzureML-TensorFlow-1.13-GPU
Name AzureML-Chainer-5.1.0-CPU
Name AzureML-PyTorch-1.0-CPU
Name AzureML-Scikit-learn-0.20.3
Name AzureML-PyTorch-1.2-GPU
Name AzureML-PyTorch-1.1-GPU
Name AzureML-TensorFlow-1.10-GPU
Name AzureML-PySpark-MmlSpark-0.15
Name AzureML-AutoML
Name AzureML-PyTorch-1.4-GPU
Name AzureML-PyTorch-1.4-CPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-Hyperdrive-ForecastDNN
Name AzureML-AutoML-GPU
Name AzureML-AutoML-DNN-GPU
Name AzureML-AutoML-DNN
Name AzureML-Designer-R
Name AzureML-Designer-Recommender
Name AzureML-Designer-Transform
Name AzureML-Designer
Name AzureML-Designer-IO
Name AzureML-Designer-NLP
Name AzureML-Dask-CPU
Name AzureML-Dask-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-Sidecar
Name AzureML-Designer-CV-Transform
Name AzureML-Designer-Score
Name AzureML-Designer-PyTorch
Name AzureML-Designer-CV
Name AzureML-TensorFlow-2.1-GPU
Name AzureML-TensorFlow-2.1-CPU
Name AzureML-Designer-PyTorch-Train
Name AzureML-AutoML-DNN-Vision-GPU
Name AzureML-Designer-VowpalWabbit
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
ComputeTargetException:
Message: Received bad response from Resource Provider:
Response Code: 400
Headers: {'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Length': '375', 'Content-Type': 'application/json; charset=utf-8', 'Expires': '-1', 'x-ms-correlation-request-id': 'f9e7c07c-a88e-41ce-ab90-c0e305067d60', 'x-ms-ratelimit-remaining-subscription-writes': '1198', 'Request-Context': 'appId=cid-v1:6a27ce65-5555-41a3-85f7-b7a1ce31fd6b', 'x-ms-response-type': 'standard', 'x-ms-request-id': '|00-c38e005898228742aa8019f1549572fe-bd7326bce0ce6741-00.8cb7231b_', 'X-Content-Type-Options': 'nosniff', 'x-request-time': '0.045', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'x-ms-routing-request-id': 'GERMANYWESTCENTRAL:20201230T132732Z:f9e7c07c-a88e-41ce-ab90-c0e305067d60', 'Date': 'Wed, 30 Dec 2020 13:27:32 GMT'}
Content: b'{"error":{"code":"BadArgument","message":"Compute name is invalid. It can include letters, digits and dashes. It must start with a letter, end with a letter or digit, and be between 2 and 16 characters in length.","innererror":{"clientRequestId":"4e9c9f62-6c40-43b1-b30c-ccaccfdf2fe8","serviceRequestId":"|00-c38e005898228742aa8019f1549572fe-bd7326bce0ce6741-00.8cb7231b_"}}}'
InnerException None
ErrorResponse
{
"error": {
"message": "Received bad response from Resource Provider:\nResponse Code: 400\nHeaders: {'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Length': '375', 'Content-Type': 'application/json; charset=utf-8', 'Expires': '-1', 'x-ms-correlation-request-id': 'f9e7c07c-a88e-41ce-ab90-c0e305067d60', 'x-ms-ratelimit-remaining-subscription-writes': '1198', 'Request-Context': 'appId=cid-v1:6a27ce65-5555-41a3-85f7-b7a1ce31fd6b', 'x-ms-response-type': 'standard', 'x-ms-request-id': '|00-c38e005898228742aa8019f1549572fe-bd7326bce0ce6741-00.8cb7231b_', 'X-Content-Type-Options': 'nosniff', 'x-request-time': '0.045', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'x-ms-routing-request-id': 'GERMANYWESTCENTRAL:20201230T132732Z:f9e7c07c-a88e-41ce-ab90-c0e305067d60', 'Date': 'Wed, 30 Dec 2020 13:27:32 GMT'}\nContent: b'{\"error\":{\"code\":\"BadArgument\",\"message\":\"Compute name is invalid. It can include letters, digits and dashes. It must start with a letter, end with a letter or digit, and be between 2 and 16 characters in length.\",\"innererror\":{\"clientRequestId\":\"4e9c9f62-6c40-43b1-b30c-ccaccfdf2fe8\",\"serviceRequestId\":\"|00-c38e005898228742aa8019f1549572fe-bd7326bce0ce6741-00.8cb7231b_\"}}}'"
}
}
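###Markdown
The error above is returned because the placeholder name *your-compute-cluster* doesn't satisfy the naming rules (it is longer than 16 characters). As an optional sketch, assuming the `ws` workspace object used in the cell above, you can list the compute targets that already exist in the workspace and reuse one of those names instead of creating a new cluster.
###Code
# List the compute targets already defined in the workspace so you can
# reuse an existing, validly-named cluster instead of creating a new one
for name, target in ws.compute_targets.items():
    print(name, ':', target.type)
###Output
_____no_output_____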
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
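###Markdown
As an optional sketch, you can also print a direct link to this run in Azure Machine Learning studio, which is often the easiest way to monitor progress and browse logs while you wait.
###Code
# Print a link to open this run in Azure Machine Learning studio
print(run.get_portal_url())
###Output
_____no_output_____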
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
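###Markdown
As an optional sketch (assuming the registration above succeeded), you can retrieve the latest registered version of the model and download its files to a local folder; *downloaded_model* is just an illustrative folder name.
###Code
from azureml.core import Model

# Get the latest registered version of the model and download its files locally
model = Model(ws, name='diabetes_model')
print(model.name, 'version', model.version)
model.download(target_dir='downloaded_model', exist_ok=True)
###Output
_____no_output_____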
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.34.0 to work with azml-ws
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
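###Markdown
As an optional sketch, you can retrieve the registered dataset and preview a few rows to confirm what it contains before using it in an experiment.
###Code
from azureml.core import Dataset

# Retrieve the registered tabular dataset and preview the first few rows
diabetes_check = Dataset.get_by_name(ws, name='diabetes dataset')
print(diabetes_check.to_pandas_dataframe().head())
###Output
_____no_output_____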
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the following cell to create a Conda specification file named *experiment_env.yml* in the experiment folder.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
Writing diabetes_training_logistic/experiment_env.yml
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
experiment_env defined.
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
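###Markdown
> **Optional**: As a sketch (not required for this exercise), you can pre-build the environment's Docker image in the workspace before submitting any runs, so the first submission doesn't have to wait for the image build. This assumes your workspace has an associated container registry to build into.
###Code
# Optionally build the environment's container image ahead of time
build = experiment_env.build(workspace=ws)
build.wait_for_completion(show_output=True)
###Output
_____no_output_____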
###Markdown
Now you can use the environment to run a script as an experiment. The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568595320655352
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1635846727_fa237cc9/ROC_1635846903.png
ROC_1635846903.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
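###Markdown
Any of the files listed above can be retrieved through the run object. As a minimal sketch, the following downloads the trained model file into a local folder (*downloaded_files* is just an illustrative folder name).
###Code
import os

# Download the trained model file produced by the run
os.makedirs('downloaded_files', exist_ok=True)
run.download_file(name='outputs/diabetes_model.pkl',
                  output_file_path='downloaded_files/diabetes_model.pkl')
print(os.listdir('downloaded_files'))
###Output
_____no_output_____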
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9002222222222223
AUC 0.8852547024821248
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1635847249_d644d82b/ROC_1635847261.png
ROC_1635847261.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name experiment_env
Name AzureML-Triton
Name AzureML-tritonserver-21.02-py38-inference
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-pytorch-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference
Name AzureML-onnxruntime-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-xgboost-0.9-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-1.15-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cpu-inference
Name AzureML-minimal-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11.0.3-gpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference
Name AzureML-PyTorch-1.3-CPU
Name AzureML-sklearn-0.24-ubuntu18.04-py37-cpu
Name AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
Name AzureML-Minimal
Name AzureML-pytorch-1.8-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
Name AzureML-mlflow-ubuntu18.04-py37-cpu-inference
Name AzureML-Tutorial
Name AzureML-VowpalWabbit-8.8.0
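###Markdown
As an optional sketch, you can retrieve one of the curated environments listed above (for example *AzureML-Minimal*, assuming it's available in your workspace) and inspect the packages it provides.
###Code
from azureml.core import Environment

# Retrieve a curated environment and print its package dependencies
curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')
print(curated_env.name)
print(curated_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____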
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique, between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "anhldt-compute1"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.34.0 to work with ict-915-02-jmdl
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the following cell to create a Conda specification file named *experiment_env.yml* in the experiment folder.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
Writing diabetes_training_logistic/experiment_env.yml
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
experiment_env defined.
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Markdown
Now you can use the environment to run a script as an experiment. The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1633656341_09bd4c92/ROC_1633656526.png
ROC_1633656526.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/8_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9011111111111111
AUC 0.8854360861475082
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1633716151_860db17f/ROC_1633716176.png
ROC_1633716176.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name experiment_env
Name AzureML-tritonserver-21.02-py38-inference
Name AzureML-minimal-ubuntu18.04-py37-cpu-inference
Name AzureML-onnxruntime-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-1.15-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-xgboost-0.9-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cpu-inference
Name AzureML-sklearn-0.24-ubuntu18.04-py37-cpu
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11.0.3-gpu-inference
Name AzureML-mlflow-ubuntu18.04-py37-cpu-inference
Name AzureML-Tutorial
Name AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference
Name AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.8-ubuntu18.04-py37-cuda11-gpu
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-PyTorch-1.3-CPU
Name AzureML-Minimal
Name AzureML-Triton
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-PyTorch-1.6-GPU
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique, between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "CT-915-02-JMDL"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
InProgress....
SucceededProvisioning operation finished, operation "Succeeded"
Succeeded
AmlCompute wait for completion finished
Minimum number of nodes requested have been provisioned
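###Markdown
The note below recommends the *Standard_DS11_v2* VM size. As an optional sketch, you can check which VM sizes are available for Azure Machine Learning compute in your workspace's region before choosing one.
###Code
from azureml.core.compute import AmlCompute

# List (a few of) the VM sizes supported for training compute in this region
for vm in AmlCompute.supported_vmsizes(workspace=ws)[:10]:
    print(vm['name'])
###Output
_____no_output_____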
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 1
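###Markdown
> **Optional**: As a sketch, you can adjust the cluster's autoscale settings so that it scales back down to zero nodes after a period of inactivity, which keeps costs down between experiments (the values below are just illustrative).
###Code
# Scale the cluster down to zero idle nodes after 20 minutes of inactivity
training_cluster.update(min_nodes=0, max_nodes=2, idle_seconds_before_scaledown=1200)
###Output
_____no_output_____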
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.898
AUC 0.8829290099725624
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1633717910_0e9948ed/ROC_1633718581.png
ROC_1633718581.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_13d000eaee835ea03f14f43a3e2affd08e68274e332e7c82abf6f9e173f32635_d.txt
azureml-logs/65_job_prep-tvmps_13d000eaee835ea03f14f43a3e2affd08e68274e332e7c82abf6f9e173f32635_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_13d000eaee835ea03f14f43a3e2affd08e68274e332e7c82abf6f9e173f32635_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/94_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8829290099725624
Accuracy : 0.898
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483103636996865
Accuracy : 0.7746666666666666
diabetes_model version: 1
Training context : Script
AUC : 0.8483377282451863
Accuracy : 0.774
AutoMLc4345bc5e0 version: 1
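###Markdown
> **Optional**: When you've completely finished experimenting, you can delete the compute cluster so it no longer incurs any cost; the workspace, datasets, and registered models are unaffected. This is a clean-up sketch, so only run it if you don't plan to reuse the cluster.
###Code
# Delete the compute cluster to avoid further charges (optional clean-up)
training_cluster.delete()
###Output
_____no_output_____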
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Install the Azure Machine Learning SDK
The Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release, along with the additional package to support notebook widgets.
###Code
!pip install --upgrade azureml-sdk azureml-widgets
###Output
_____no_output_____
###Markdown
Connect to your workspace
With the latest version of the SDK installed, now you're ready to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!).
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment. The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
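###Markdown
If you want a local copy of any of the files listed above, you can download them from the run - a minimal sketch, assuming the run produced **outputs/diabetes_model.pkl** (as shown in the file list) and using an arbitrary local folder name.
###Code
import os

# Download the serialized model from the run's outputs to a local folder (folder name is arbitrary)
os.makedirs('downloaded_outputs', exist_ok=True)
run.download_file(name='outputs/diabetes_model.pkl',
                  output_file_path='downloaded_outputs/diabetes_model.pkl')
print(os.listdir('downloaded_outputs'))
###Output
_____no_output_____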
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
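###Markdown
If one of the curated environments already matches your needs, you can retrieve it by name instead of defining your own - a minimal sketch using the *AzureML-Tutorial* environment from the list above.
###Code
from azureml.core import Environment

# Retrieve a curated environment by name (curated environment names begin with "AzureML-")
curated_env = Environment.get(workspace=ws, name="AzureML-Tutorial")
print(curated_env.name, 'retrieved.')
###Output
_____no_output_____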
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient for a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
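###Markdown
You can also list the compute targets that are already defined in the workspace - a short sketch using the workspace's **compute_targets** collection.
###Code
# List all compute targets defined in the workspace (a dictionary of name -> ComputeTarget)
for compute_name, compute in ws.compute_targets.items():
    print(compute_name, ':', compute.type)
###Output
_____no_output_____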
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
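###Markdown
To use a registered model elsewhere, you can retrieve the latest version by name and download its files - a minimal sketch; the local folder name is arbitrary.
###Code
from azureml.core import Model

# Retrieve the latest registered version of the model and download it locally
latest_model = Model(ws, 'diabetes_model')
print(latest_model.name, 'version', latest_model.version)
download_path = latest_model.download(target_dir='downloaded_model', exist_ok=True)
print('Model downloaded to', download_path)
###Output
_____no_output_____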
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with azure_ds_challenge
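###Markdown
As a preview of how the two parts of the execution context fit together, here is a minimal sketch of a ScriptRunConfig that combines an environment with a compute target - the folder, script and environment names below are placeholders only; the real ones are created later in this notebook.
###Code
from azureml.core import ScriptRunConfig, Environment

# Sketch only: an experiment's execution context = an Environment (packages) + a compute target
placeholder_env = Environment("placeholder-env")             # Python environment for the script
sketch_config = ScriptRunConfig(source_directory='.',        # folder containing the training script
                                script='train.py',           # placeholder script name
                                environment=placeholder_env,
                                compute_target=None)          # None = run on local compute
print(type(sketch_config).__name__, 'created (not submitted)')
###Output
_____no_output_____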
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
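###Markdown
Because the dataset is registered, you can retrieve it by name whenever you need it (and pin a specific version if you want reproducibility) - a short sketch using **Dataset.get_by_name**.
###Code
from azureml.core import Dataset

# Retrieve the latest registered version of the dataset by name (pass version=<n> to pin one)
diabetes_latest = Dataset.get_by_name(workspace=ws, name='diabetes dataset', version='latest')
print(diabetes_latest.name, 'version', diabetes_latest.version)
###Output
_____no_output_____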
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616668746_c9470a0d/ROC_1616668758.png
ROC_1616668758.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/16_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
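###Markdown
You can confirm the registration by fetching the environment back from the workspace and checking which version was stored - a short sketch; registering the same name again creates a new version.
###Code
from azureml.core import Environment

# Fetch the registered environment and check its version
registered = Environment.get(workspace=ws, name='diabetes-experiment-env')
print(registered.name, 'version', registered.version)
###Output
_____no_output_____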
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8988888888888888
AUC 0.8834359994372682
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616668767_3d19f230/ROC_1616668779.png
ROC_1616668779.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/14_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-Tutorial
Name AzureML-Minimal
Name AzureML-Chainer-5.1.0-GPU
Name AzureML-PyTorch-1.2-CPU
Name AzureML-TensorFlow-1.12-CPU
Name AzureML-TensorFlow-1.13-CPU
Name AzureML-PyTorch-1.1-CPU
Name AzureML-TensorFlow-1.10-CPU
Name AzureML-PyTorch-1.0-GPU
Name AzureML-TensorFlow-1.12-GPU
Name AzureML-TensorFlow-1.13-GPU
Name AzureML-Chainer-5.1.0-CPU
Name AzureML-PyTorch-1.0-CPU
Name AzureML-Scikit-learn-0.20.3
Name AzureML-PyTorch-1.2-GPU
Name AzureML-PyTorch-1.1-GPU
Name AzureML-TensorFlow-1.10-GPU
Name AzureML-PyTorch-1.3-GPU
Name AzureML-TensorFlow-2.0-CPU
Name AzureML-PyTorch-1.3-CPU
Name AzureML-TensorFlow-2.0-GPU
Name AzureML-PySpark-MmlSpark-0.15
Name AzureML-PyTorch-1.4-GPU
Name AzureML-PyTorch-1.4-CPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-Hyperdrive-ForecastDNN
Name AzureML-AutoML-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-Designer-Score
Name AzureML-TensorFlow-2.1-GPU
Name AzureML-TensorFlow-2.1-CPU
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-widgets==1.21.0
- azureml-pipeline-core==1.21.0
- azureml-pipeline-steps==1.21.0
- azureml-opendatasets==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-train-automl==1.21.0
- azureml-train==1.21.0
- azureml-sdk==1.21.0
- azureml-interpret==1.21.0
- azureml-tensorboard==1.21.0
- azureml-mlflow==1.21.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_df6ad66e80d4bc0030b6d046a4e46427
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
name: azureml_39d18bde647c9e3afa8a97c1b8e8468f
Name AzureML-Chainer-5.1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- cupy-cuda90==5.1.0
- mpi4py==3.0.0
name: azureml_ddd7019e826fef0c011fe2473301bad4
Name AzureML-PyTorch-1.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-TensorFlow-1.12-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.12
- horovod==0.15.2
name: azureml_935139c0a8e56a190fafce06d6edc3cd
Name AzureML-TensorFlow-1.13-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.13.1
- horovod==0.16.1
name: azureml_71d30d49ae0ea16ff794742485e953e5
Name AzureML-PyTorch-1.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-telemetry==1.19.0
- azureml-train-restclients-hyperdrive==1.19.0
- azureml-train-core==1.19.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6e145d82f92c27509a9b9e457edff086
Name AzureML-TensorFlow-1.10-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.10
- horovod==0.15.2
name: azureml_3810220929dbc5cb90f19492d15e7151
Name AzureML-PyTorch-1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-TensorFlow-1.12-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.12.0
- horovod==0.15.2
name: azureml_f6491bb45aa53d4e966d894b801f618f
Name AzureML-TensorFlow-1.13-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.13.1
- horovod==0.16.1
name: azureml_08e699281a2ab6d3b68ab09f106952c4
Name AzureML-Chainer-5.1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- mpi4py==3.0.0
name: azureml_5beb73f5839a4cc0a61198ee0bfa449d
Name AzureML-PyTorch-1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-Scikit-learn-0.20.3
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- scikit-learn==0.20.3
- scipy==1.2.1
- joblib==0.13.2
name: azureml_3d6fa1d835846f1a28a18b506bcad70f
Name AzureML-PyTorch-1.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-PyTorch-1.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_89dbc5bca1a4bdc6fd62f99a3d6295e5
Name AzureML-TensorFlow-1.10-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.10.0
- horovod==0.15.2
name: azureml_1c4b6b5c3d2c6ddcf034838a695c12de
Name AzureML-PyTorch-1.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-TensorFlow-2.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.0
- horovod==0.18.1
name: azureml_1a75e67c0587456b4ca58af5ea7ce7f7
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-TensorFlow-2.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.0.0
- horovod==0.18.1
name: azureml_65a7428a47e1ac7aed09e91b25d6e127
Name AzureML-PySpark-MmlSpark-0.15
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
name: azureml_ba04eb03753f110d643f552f15c3bb42
Name AzureML-PyTorch-1.4-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-PyTorch-1.4-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-Hyperdrive-ForecastDNN
packages dependencies:
- python=3.7
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-contrib-automl-dnn-forecasting==1.21.0
name: azureml_551b0d285970bc512cb183aa28be2c7f
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.24.0.post1
- azureml-pipeline-core==1.24.0
- azureml-telemetry==1.24.0
- azureml-defaults==1.24.0
- azureml-interpret==1.24.0
- azureml-automl-core==1.24.0
- azureml-automl-runtime==1.24.0
- azureml-train-automl-client==1.24.0
- azureml-train-automl-runtime==1.24.0
- azureml-dataset-runtime==1.24.0
- azureml-dataprep==2.11.2
- azureml-mlflow==1.24.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_cde433fc51995440f5f84a38d2f2e6fd
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-Designer-Score
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-score-modules==0.0.16
name: azureml_18573b1d77e5ef62bcbe8903c11ceafe
Name AzureML-TensorFlow-2.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.1.0
- horovod==0.19.1
name: azureml_060b2dd5226b12c758ebdfc8056984b9
Name AzureML-TensorFlow-2.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.1.0
- horovod==0.19.1
name: azureml_12fcb82f6ee32ce4eecb8a52dcd60745
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_bffd025ba247b2f6ba16288746ca76d1
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_ff6c5e7cf1cbe3e8ae7acc2938177052
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_140c1aa5004c5a4a803b984404272b7b
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9d2a515d5c77954f2d0562cc5eb8a1fc
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults[async]
- azureml-contrib-services==1.21.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.4.0-py3-none-manylinux1_x86_64.whl
name: azureml_c12df398a0c995ce0030ed7e73c50b18
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_b18c1b901df407fc9b08209bb6771b6d
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_60ad88840fdbe40e31e03ddbbc134dec
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.21.0.post2
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-mlflow==1.21.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_906c8afffa36ce16d94f224cc03d7c62
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient for a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "azure-ds-com"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
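###Markdown
A note for later: compute clusters are billed while nodes are running, so when you've finished with this lab you can scale the cluster down to zero nodes (or delete it entirely) - a minimal sketch; don't run it until you no longer need the cluster.
###Code
# Scale the cluster down to zero minimum nodes so it stops incurring charges when idle
training_cluster.update(min_nodes=0)
# training_cluster.delete()   # uncomment to remove the cluster from the workspace entirely
###Output
_____no_output_____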
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 2
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8977777777777778
AUC 0.8832499705804543
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616669335_7bf879bd/ROC_1616669393.png
ROC_1616669393.png
azureml-logs/55_azureml-execution-tvmps_db1bf0c856005713128f36e515b4ee1f65d1fb8874e3f9f4932fe20efa0c28f0_d.txt
azureml-logs/65_job_prep-tvmps_db1bf0c856005713128f36e515b4ee1f65d1fb8874e3f9f4932fe20efa0c28f0_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_db1bf0c856005713128f36e515b4ee1f65d1fb8874e3f9f4932fe20efa0c28f0_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/104_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
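###Markdown
If you need to troubleshoot the remote run offline, you can download its log files in bulk - a short sketch using an arbitrary local folder name.
###Code
import os

# Download all files under azureml-logs/ from the run to a local folder
run.download_files(prefix='azureml-logs/', output_directory='remote_run_logs')
print(os.listdir('remote_run_logs'))
###Output
_____no_output_____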
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8832499705804543
Accuracy : 0.8977777777777778
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568517900798176
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568595320655352
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
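###Markdown
With several versions of the model registered, you can use the stored properties to pick the best one programmatically - a short sketch that selects the version with the highest recorded AUC.
###Code
from azureml.core import Model

# Compare registered versions of the diabetes model and pick the one with the highest AUC property
diabetes_models = Model.list(ws, name='diabetes_model')
best = max(diabetes_models, key=lambda m: float(m.properties.get('AUC', 0)))
print('Best version:', best.version, 'AUC:', best.properties.get('AUC'))
###Output
_____no_output_____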
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.20.0 to work with training_dp100
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568595320655352
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1612271433_1d1606c0/ROC_1612271853.png
ROC_1612271853.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/dataprep/engine_spans_24227e32-17ea-4f09-b1e0-a72aa53522dd.jsonl
logs/azureml/dataprep/python_span_24227e32-17ea-4f09-b1e0-a72aa53522dd.jsonl
outputs/diabetes_model.pkl
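###Markdown
The trained model listed above is stored in the run's **outputs** folder. If you want a local copy (for example, to test the model outside Azure ML), you can download it from the run. A minimal sketch - the local file name *diabetes_model.pkl* is just an illustrative choice:
###Code
# Download the serialized model from the run's outputs folder (sketch)
run.download_file(name='outputs/diabetes_model.pkl', output_file_path='diabetes_model.pkl')
###Output
_____no_output_____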
###Markdown
Register the environment

Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).

With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.

Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9
AUC 0.8854128601903556
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1612271933_ccffc89d/ROC_1612271946.png
ROC_1612271946.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/dataprep/engine_spans_d31b5c6f-03b6-402c-a9bd-6c4843d7eb58.jsonl
logs/azureml/dataprep/python_span_d31b5c6f-03b6-402c-a9bd-6c4843d7eb58.jsonl
outputs/diabetes_model.pkl
###Markdown
View registered environments

In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-AutoML
Name AzureML-PyTorch-1.0-GPU
Name AzureML-Scikit-learn-0.20.3
Name AzureML-TensorFlow-1.12-CPU
Name AzureML-PyTorch-1.2-GPU
Name AzureML-TensorFlow-2.0-GPU
Name AzureML-TensorFlow-2.0-CPU
Name AzureML-Chainer-5.1.0-GPU
Name AzureML-TensorFlow-1.13-CPU
Name AzureML-Minimal
Name AzureML-Chainer-5.1.0-CPU
Name AzureML-PyTorch-1.4-GPU
Name AzureML-PySpark-MmlSpark-0.15
Name AzureML-PyTorch-1.3-CPU
Name AzureML-PyTorch-1.1-GPU
Name AzureML-TensorFlow-1.10-GPU
Name AzureML-PyTorch-1.2-CPU
Name AzureML-TensorFlow-1.13-GPU
Name AzureML-TensorFlow-1.10-CPU
Name AzureML-PyTorch-1.3-GPU
Name AzureML-PyTorch-1.4-CPU
Name AzureML-Tutorial
Name AzureML-PyTorch-1.0-CPU
Name AzureML-PyTorch-1.1-CPU
Name AzureML-TensorFlow-1.12-GPU
Name AzureML-Designer-VowpalWabbit
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-Sidecar
Name AzureML-Dask-CPU
Name AzureML-Dask-GPU
Name AzureML-TensorFlow-2.1-GPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-TensorFlow-2.1-CPU
Name AzureML-AutoML-DNN-Vision-GPU
Name AzureML-Hyperdrive-ForecastDNN
Name AzureML-AutoML-DNN
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-AutoML-DNN-GPU
Name AzureML-AutoML-GPU
Name AzureML-Designer-Score
Name AzureML-Designer-PyTorch-Train
Name AzureML-Designer-IO
Name AzureML-Designer-Transform
Name AzureML-Designer-Recommender
Name AzureML-Designer-CV
Name AzureML-Designer-NLP
Name AzureML-Designer-PyTorch
Name AzureML-Designer-CV-Transform
Name AzureML-Designer
Name AzureML-Designer-R
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).

Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-AutoML
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-interpret==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-dataset-runtime==1.21.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- py-xgboost<=0.90
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_7ade26eb614f97df8030bc480da59236
Name AzureML-PyTorch-1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-Scikit-learn-0.20.3
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- scikit-learn==0.20.3
- scipy==1.2.1
- joblib==0.13.2
name: azureml_3d6fa1d835846f1a28a18b506bcad70f
Name AzureML-TensorFlow-1.12-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.12
- horovod==0.15.2
name: azureml_935139c0a8e56a190fafce06d6edc3cd
Name AzureML-PyTorch-1.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-TensorFlow-2.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.0.0
- horovod==0.18.1
name: azureml_65a7428a47e1ac7aed09e91b25d6e127
Name AzureML-TensorFlow-2.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.0
- horovod==0.18.1
name: azureml_1a75e67c0587456b4ca58af5ea7ce7f7
Name AzureML-Chainer-5.1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- cupy-cuda90==5.1.0
- mpi4py==3.0.0
name: azureml_ddd7019e826fef0c011fe2473301bad4
Name AzureML-TensorFlow-1.13-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.13.1
- horovod==0.16.1
name: azureml_71d30d49ae0ea16ff794742485e953e5
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
name: azureml_39d18bde647c9e3afa8a97c1b8e8468f
Name AzureML-Chainer-5.1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- mpi4py==3.0.0
name: azureml_5beb73f5839a4cc0a61198ee0bfa449d
Name AzureML-PyTorch-1.4-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-PySpark-MmlSpark-0.15
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
name: azureml_ba04eb03753f110d643f552f15c3bb42
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-PyTorch-1.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_89dbc5bca1a4bdc6fd62f99a3d6295e5
Name AzureML-TensorFlow-1.10-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.10.0
- horovod==0.15.2
name: azureml_1c4b6b5c3d2c6ddcf034838a695c12de
Name AzureML-PyTorch-1.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-TensorFlow-1.13-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.13.1
- horovod==0.16.1
name: azureml_08e699281a2ab6d3b68ab09f106952c4
Name AzureML-TensorFlow-1.10-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.10
- horovod==0.15.2
name: azureml_3810220929dbc5cb90f19492d15e7151
Name AzureML-PyTorch-1.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-PyTorch-1.4-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-widgets==1.21.0
- azureml-pipeline-core==1.21.0
- azureml-pipeline-steps==1.21.0
- azureml-opendatasets==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-train-automl==1.21.0
- azureml-train==1.21.0
- azureml-sdk==1.21.0
- azureml-interpret==1.21.0
- azureml-tensorboard==1.21.0
- azureml-mlflow==1.21.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_df6ad66e80d4bc0030b6d046a4e46427
Name AzureML-PyTorch-1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-PyTorch-1.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-telemetry==1.19.0
- azureml-train-restclients-hyperdrive==1.19.0
- azureml-train-core==1.19.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6e145d82f92c27509a9b9e457edff086
Name AzureML-TensorFlow-1.12-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.12.0
- horovod==0.15.2
name: azureml_f6491bb45aa53d4e966d894b801f618f
Name AzureML-Designer-VowpalWabbit
packages channels:
- conda-forge
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- vowpalwabbit=8.8.1
- pip:
- azureml-designer-vowpal-wabbit-modules==0.0.20
name: azureml_d97433672d774e08e0c8d1bb565b1902
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_bffd025ba247b2f6ba16288746ca76d1
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_ff6c5e7cf1cbe3e8ae7acc2938177052
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_140c1aa5004c5a4a803b984404272b7b
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9d2a515d5c77954f2d0562cc5eb8a1fc
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults[async]
- azureml-contrib-services==1.21.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.4.0-py3-none-manylinux1_x86_64.whl
name: azureml_c12df398a0c995ce0030ed7e73c50b18
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_b18c1b901df407fc9b08209bb6771b6d
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_60ad88840fdbe40e31e03ddbbc134dec
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-mlflow==1.21.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.*
name: azureml_6d73e16af83bd4c0d450d36e7573a33c
Name AzureML-Sidecar
packages channels:
- conda-forge
dependencies:
- python=3.6.2
name: base
Name AzureML-Dask-CPU
packages channels:
- conda-forge
- pytorch
- defaults
dependencies:
- python=3.6.9
- pip:
- adlfs
- azureml-core==1.18.0.post1
- azureml-dataset-runtime==1.18.0
- dask[complete]
- dask-ml[complete]
- distributed
- fastparquet
- fsspec
- joblib
- jupyterlab
- lz4
- mpi4py
- notebook
- pyarrow
name: azureml_d407e2694bdeecd1113b9f2a6efdddf7
Name AzureML-Dask-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.9
- pip:
- azureml-defaults==1.18.0
- adlfs
- azureml-core==1.18.0.post1
- dask[complete]
- dask-ml[complete]
- distributed
- fastparquet
- fsspec
- joblib
- jupyterlab
- lz4
- mpi4py
- notebook
- pyarrow
- matplotlib
name: azureml_d093a03b8baffa8a67905fca27c6dbe0
Name AzureML-TensorFlow-2.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.1.0
- horovod==0.19.1
name: azureml_060b2dd5226b12c758ebdfc8056984b9
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_0763e70e313e54c3fca03e22ca1a1886
Name AzureML-TensorFlow-2.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==2.1.0
- horovod==0.19.1
name: azureml_12fcb82f6ee32ce4eecb8a52dcd60745
Name AzureML-AutoML-DNN-Vision-GPU
packages dependencies:
- python=3.7
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-dataset-runtime==1.21.0
- azureml-contrib-dataset==1.21.0
- azureml-telemetry==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-defaults==1.21.0
- azureml-interpret==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-train-automl==1.21.0
- azureml-contrib-automl-dnn-vision==1.21.0
name: azureml_29bda5378d55ad54a720ff8210ddf6e7
Name AzureML-Hyperdrive-ForecastDNN
packages dependencies:
- python=3.7
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-contrib-automl-dnn-forecasting==1.21.0
name: azureml_551b0d285970bc512cb183aa28be2c7f
Name AzureML-AutoML-DNN
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-interpret==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-dataset-runtime==1.21.0
- inference-schema
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- py-xgboost<=0.90
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- pytorch=1.4.0
- cudatoolkit=10.0.130
- psutil>5.0.0,<6.0.0
name: azureml_eee1cd453258d854671f588fc3481cb1
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-AutoML-DNN-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-interpret==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-dataset-runtime==1.21.0
- inference-schema
- horovod==0.19.4
- fbprophet==0.5
- pytorch-transformers==1.0.0
- spacy==2.1.8
- https://aka.ms/automl-resources/packages/en_core_web_sm-2.1.0.tar.gz
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- holidays==0.9.11
- setuptools-git
- pytorch=1.4.0
- cudatoolkit=10.0.130
- psutil>5.0.0,<6.0.0
name: azureml_5b3e210bfb9046ffbd7895960ffb0e7a
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-interpret==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-dataset-runtime==1.21.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_739aa0d429978b2527ff150901f172df
Name AzureML-Designer-Score
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-score-modules==0.0.16
name: azureml_18573b1d77e5ef62bcbe8903c11ceafe
Name AzureML-Designer-PyTorch-Train
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-designer-pytorch-modules==0.0.29
name: azureml_cf1d63b5c9fbab4d16057fd0f4014950
Name AzureML-Designer-IO
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-dataset-runtime>=1.6
- azureml-designer-dataio-modules==0.0.52
name: azureml_8c7ce67fef6eb2042c24657793bb0a21
Name AzureML-Designer-Transform
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-designer-datatransform-modules==0.0.69
name: azureml_0e93e9f6da005231086afd3d74fb7de3
Name AzureML-Designer-Recommender
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-designer-recommender-modules==0.0.25
name: azureml_b27dd62572cd5aec3e1845c4ac7902db
Name AzureML-Designer-CV
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-designer-cv-modules==0.0.26
name: azureml_c70979bdcb80dec6ae720cc98c5a3fa8
Name AzureML-Designer-NLP
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-classic-modules==0.0.121
- https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz#egg=en_core_web_sm
- spacy==2.1.7
name: azureml_3bf90115af8eef18f3792699faaed002
Name AzureML-Designer-PyTorch
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-designer-pytorch-modules==0.0.29
name: azureml_cf1d63b5c9fbab4d16057fd0f4014950
Name AzureML-Designer-CV-Transform
packages channels:
- defaults
dependencies:
- pip=20.2
- python=3.6.8
- pip:
- azureml-designer-cv-modules[pytorch]==0.0.26
name: azureml_1d41bb5987aeb35a4ec83fed786649b5
Name AzureML-Designer
packages channels:
- conda-forge
dependencies:
- pip=20.2
- python=3.6.8
- scikit-surprise=1.0.6
- pip:
- azureml-designer-classic-modules==0.0.147
- https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz#egg=en_core_web_sm
- spacy==2.1.7
name: azureml_9b50686470a92ca74f0d62e2629faaec
Name AzureML-Designer-R
packages channels:
- conda-forge
dependencies:
- pip=20.2
- python=3.6.8
- r-caret=6.0
- r-catools=1.17.1
- r-cluster=2.1.0
- r-dplyr=0.8.5
- r-e1071=1.7
- r-forcats=0.5.0
- r-forecast=8.12
- r-glmnet=2.0
- r-igraph=1.2.4
- r-matrix=1.2
- r-mclust=5.4.6
- r-mgcv=1.8
- r-nlme=3.1
- r-nnet=7.3
- r-plyr=1.8.6
- r-randomforest=4.6
- r-reticulate=1.12
- r-rocr=1.0
- r-rodbc=1.3
- r-rpart=4.1
- r-stringr=1.4.0
- r-tidyverse=1.2.1
- r-timedate=3043.102
- r-tseries=0.10
- r=3.5.1
- pip:
- azureml-designer-classic-modules==0.0.147
name: azureml_6e67a5e746efc73b118144f753a41eb3
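###Markdown
If one of the curated environments already matches your needs, you can retrieve it by name and pass it to a ScriptRunConfig just like a custom environment. A minimal sketch using the **AzureML-Minimal** environment from the list above:
###Code
# Retrieve a curated environment by name (sketch)
minimal_env = Environment.get(workspace=ws, name='AzureML-Minimal')
print(minimal_env.name)
###Output
_____no_output_____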
###Markdown
Create a compute cluster

In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.

You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.

> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "ClusterDP100"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
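###Markdown
The provisioning configuration above only sets the VM size and the maximum node count, so defaults apply for everything else. If you want the cluster to scale down to zero nodes when it's idle, you can pass extra settings - a hedged sketch in which the values shown are illustrative, not requirements of this lab:
###Code
# Example provisioning configuration with explicit autoscale settings (sketch)
autoscale_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
                                                         min_nodes=0, # allow scale-down to zero nodes
                                                         max_nodes=2,
                                                         idle_seconds_before_scaledown=1800)
###Output
_____no_output_____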
###Markdown
Run an experiment on remote compute

Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.

> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 1
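###Markdown
You can also enumerate all of the compute targets defined in the workspace, which is a quick way to see what's available before submitting a run. A minimal sketch:
###Code
# List the compute targets in the workspace and their types (sketch)
for compute_name, compute_target in ws.compute_targets.items():
    print(compute_name, ':', compute_target.type)
###Output
_____no_output_____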
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8973333333333333
AUC 0.8814498483013199
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1612272315_93aee680/ROC_1612273220.png
ROC_1612273220.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_069b24e0f87f46a00828c388324a9bc9db6fba9bd74e041ad13a2a78997176cd_d.txt
azureml-logs/65_job_prep-tvmps_069b24e0f87f46a00828c388324a9bc9db6fba9bd74e041ad13a2a78997176cd_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_069b24e0f87f46a00828c388324a9bc9db6fba9bd74e041ad13a2a78997176cd_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/103_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/dataprep/engine_spans_a1b1deb6-eb08-4d02-b759-f8eeb681f4af.jsonl
logs/azureml/dataprep/python_span_a1b1deb6-eb08-4d02-b759-f8eeb681f4af.jsonl
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
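###Markdown
Because this run executed on the compute cluster, its output files only exist in the experiment record until you download them. A minimal sketch that copies everything under **outputs/** to a local folder - the folder name *remote_outputs* is just an illustrative choice:
###Code
# Download the run's output files to a local folder (sketch)
run.download_files(prefix='outputs', output_directory='remote_outputs')
###Output
_____no_output_____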
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8814498483013199
Accuracy : 0.8973333333333333
diabetes_model version: 4
Training context : File dataset
AUC : 0.8468497021067503
Accuracy : 0.7788888888888889
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568595320655352
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483203144435048
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
AutoML4ce9669e80 version: 1
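###Markdown
Once a model is registered, you can retrieve its latest version by name whenever you need it - for example, when preparing a deployment. A minimal sketch:
###Code
# Retrieve the latest registered version of the model (sketch)
diabetes_model = ws.models['diabetes_model']
print(diabetes_model.name, 'version', diabetes_model.version)
###Output
_____no_output_____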
###Markdown
Work with Compute

When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:

* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.

In this notebook, you'll explore *environments* and *compute targets* for experiments.

Connect to your workspace

To get started, connect to your workspace.

> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Failure while loading azureml_run_type_providers. Failed to load entrypoint hyperdrive = azureml.train.hyperdrive:HyperDriveRun._from_run_dto with exception (azureml-telemetry 1.33.0 (c:\applications\anaconda\lib\site-packages), Requirement.parse('azureml-telemetry~=1.30.0')).
###Markdown
Prepare data for an experiment

In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['data/diabetes.csv', 'data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
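###Markdown
To confirm what the registered dataset contains before training, you can retrieve it and preview a few rows. A minimal sketch:
###Code
# Retrieve the registered dataset and preview the first few rows (sketch)
from azureml.core import Dataset
diabetes_preview = Dataset.get_by_name(ws, 'diabetes dataset')
print(diabetes_preview.take(5).to_pandas_dataframe())
###Output
_____no_output_____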
###Markdown
Create a training script

Run the following two cells to create:

1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness',
'SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment

When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.

You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.

> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.

Run the following cell to create a Conda specification file named *experiment_env.yml* in the experiment folder.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
Overwriting diabetes_training_logistic/experiment_env.yml
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment.
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
experiment_env defined.
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Markdown
Now you can use the environment to run a script as an experiment.

The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.

> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit this; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568595320655352
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1629012640_a9936eef/ROC_1629012663.png
ROC_1629012663.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/8_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environment

Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
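###Markdown
Each time you register an environment with the same name, a new version is created. If you ever need to pin an experiment to a particular version, you can request it explicitly - a hedged sketch in which the version number shown is illustrative:
###Code
# Retrieve a specific version of the registered environment (sketch)
pinned_env = Environment.get(workspace=ws, name='experiment_env', version='1')
print(pinned_env.name, pinned_env.version)
###Output
_____no_output_____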
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).

With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.

Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8973333333333333
AUC 0.8821010598999647
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1629012825_95d2d2a1/ROC_1629012842.png
ROC_1629012842.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/8_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
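###Markdown
Because both training runs were submitted to the same experiment, you can compare them by iterating over the experiment's run history. A minimal sketch:
###Code
# List the experiment's runs with their logged metrics (sketch)
for past_run in experiment.get_runs():
    print(past_run.id, past_run.get_metrics())
###Output
_____no_output_____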
###Markdown
View registered environments

In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name experiment_env
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-PyTorch-1.3-CPU
Name AzureML-Minimal
Name AzureML-Tutorial
Name AzureML-Triton
Name AzureML-PyTorch-1.6-CPU
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-tritonserver-21.02-py38-inference
Name AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.8-ubuntu18.04-py37-cuda11-gpu
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
Name AzureML-tensorflow-1.15-ubuntu18.04-py37-cpu-inference
Name AzureML-mlflow-ubuntu18.04-py37-cpu-inference
Name AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11.0.3-gpu-inference
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
Name AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference
Name AzureML-sklearn-0.24-ubuntu18.04-py37-cpu
Name AzureML-minimal-ubuntu18.04-py37-cpu-inference
Name AzureML-onnxruntime-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-xgboost-0.9-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.6-ubuntu18.04-py37-cpu-inference
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).

Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
name: azureml_5404d9857361dbdd79108e74fdaeda6b
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
- azureml-telemetry==1.33.0
- azureml-train-restclients-hyperdrive==1.33.0
- azureml-train-core==1.33.0
- azureml-widgets==1.33.0
- azureml-pipeline-core==1.33.0
- azureml-pipeline-steps==1.33.0
- azureml-opendatasets==1.33.0
- azureml-automl-core==1.33.0
- azureml-automl-runtime==1.33.0
- azureml-train-automl-client==1.33.0
- azureml-train-automl-runtime==1.33.0
- azureml-train-automl==1.33.0
- azureml-train==1.33.0
- azureml-sdk==1.33.0
- azureml-interpret==1.33.0
- azureml-tensorboard==1.33.0
- azureml-mlflow==1.33.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_eddf8d28a7fee0f7b15f1f82c34323da
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.33.0
- azureml-defaults[async]
- azureml-contrib-services==1.33.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.7.0-py3-none-manylinux1_x86_64.whl
name: azureml_76a73d33fa571d816c194f57be66e284
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
- azureml-telemetry==1.33.0
- azureml-train-restclients-hyperdrive==1.33.0
- azureml-train-core==1.33.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.21.3
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9a80c1e51ee3bc159c49887413775b4b
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
- azureml-telemetry==1.33.0
- azureml-train-restclients-hyperdrive==1.33.0
- azureml-train-core==1.33.0
- azureml-mlflow==1.33.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_781a9b4c6d23322ff79fd21e2c6ad931
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
- azureml-telemetry==1.33.0
- azureml-train-restclients-hyperdrive==1.33.0
- azureml-train-core==1.33.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.21.3
name: azureml_8d64e8ad55988af7db0fe00878f34096
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
- azureml-telemetry==1.33.0
- azureml-train-restclients-hyperdrive==1.33.0
- azureml-train-core==1.33.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.21.3
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9a80c1e51ee3bc159c49887413775b4b
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.33.0
- azureml-defaults==1.33.0
- azureml-telemetry==1.33.0
- azureml-train-restclients-hyperdrive==1.33.0
- azureml-train-core==1.33.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.21.3
name: azureml_2edba9065430a9451e403fcccebf00e6
Name AzureML-tritonserver-21.02-py38-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.33.0
name: project_environment
Name AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
packages name: project_environment
dependencies:
- python=3.6.2
- pip:
- azureml-defaults
channels:
- anaconda
- conda-forge
Name AzureML-pytorch-1.8-ubuntu18.04-py37-cuda11-gpu
packages name: project_environment
dependencies:
- python=3.6.2
- pip:
- azureml-defaults
channels:
- anaconda
- conda-forge
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "agcluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 1
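###Markdown
The single call above only gives a snapshot of the cluster state. As a rough sketch (not part of the original lab, and assuming the `training_cluster` object from the earlier cell), you could poll the allocation state for a few minutes to watch the cluster scale up:
###Code
import time

# Poll the cluster allocation state a few times (sketch only; adjust the
# interval and iteration count to taste). Stops as soon as the cluster
# starts resizing or a node has been allocated.
for _ in range(10):
    state = training_cluster.get_status()
    print(state.allocation_state, state.current_node_count)
    if state.allocation_state == 'Resizing' or state.current_node_count > 0:
        break
    time.sleep(30)
###Output
_____no_output_____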
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9008888888888889
AUC 0.8854314409560776
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1629013002_f9009e23/ROC_1629013213.png
ROC_1629013213.png
azureml-logs/55_azureml-execution-tvmps_06f4c1b042a540fb7a0af126e2880f0cb891db3d195165d5319bcd43b07c729a_d.txt
azureml-logs/65_job_prep-tvmps_06f4c1b042a540fb7a0af126e2880f0cb891db3d195165d5319bcd43b07c729a_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_06f4c1b042a540fb7a0af126e2880f0cb891db3d195165d5319bcd43b07c729a_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/92_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 2
Training context : Compute cluster
AUC : 0.8854314409560776
Accuracy : 0.9008888888888889
diabetes_model version: 1
Training context : Compute cluster
AUC : 0.8844267524095271
Accuracy : 0.8995555555555556
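###Markdown
As a quick follow-up sketch (not part of the original lab), you could fetch the latest registered version of the model and download its file for local inspection; `Model(ws, 'diabetes_model')` returns the most recent version by default:
###Code
from azureml.core import Model

# Retrieve the latest registered version of the diabetes model (sketch only)
latest_model = Model(ws, 'diabetes_model')
print(latest_model.name, 'version', latest_model.version)

# Download the model file to the current folder
latest_model.download(target_dir='.', exist_ok=True)
###Output
_____no_output_____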
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
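###Markdown
As a quick sanity check (a sketch, not part of the original lab), you can pull a few rows of the registered tabular dataset into a pandas dataframe before training on it:
###Code
# Preview the first few rows of the diabetes dataset (sketch only)
sample_df = tab_data_set.take(5).to_pandas_dataframe()
print(sample_df.shape)
print(sample_df.head())
###Output
_____no_output_____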
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the following cell to create a Conda specification file named *experiment_env.yml* in the same folder as this notebook.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
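###Markdown
A Conda specification file isn't the only option. As an alternative sketch (assuming the same package set as *experiment_env.yml*; the environment name here is just for illustration), you can build the environment programmatically with the **CondaDependencies** class instead of writing a .yml file:
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# Build an equivalent environment in code (sketch only)
programmatic_env = Environment('experiment_env_programmatic')
deps = CondaDependencies.create(python_version='3.6.2',
                                conda_packages=['scikit-learn', 'ipykernel', 'matplotlib', 'pandas', 'pip'],
                                pip_packages=['azureml-defaults', 'pyarrow'])
programmatic_env.python.conda_dependencies = deps
print(programmatic_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____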
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run, setting its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.34.0 to work with aizat
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages by using **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the following cell to create a Conda specification file named *experiment_env.yml* in the same folder as this notebook.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
Writing diabetes_training_logistic/experiment_env.yml
###Markdown
Now you can use your custom conda specification file to create an environment for your experiment
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
experiment_env defined.
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run, setting its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1637718144_72535d4c/ROC_1637718337.png
ROC_1637718337.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
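###Markdown
Any of the files listed above can be pulled down from the run for local inspection. A small sketch (not part of the original lab), using the output file names shown above:
###Code
# Download the trained model file and the driver log from the run (sketch only)
run.download_file(name='outputs/diabetes_model.pkl', output_file_path='downloaded_diabetes_model.pkl')
run.download_file(name='azureml-logs/70_driver_log.txt', output_file_path='70_driver_log.txt')
print('Files downloaded.')
###Output
_____no_output_____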
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8973333333333333
AUC 0.8821010598999647
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1637718364_0a6839b3/ROC_1637718372.png
ROC_1637718372.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/8_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name experiment_env
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-PyTorch-1.3-CPU
Name AzureML-Triton
Name AzureML-onnxruntime-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cpu-inference
Name AzureML-xgboost-0.9-ubuntu18.04-py37-cpu-inference
Name AzureML-minimal-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11.0.3-gpu-inference
Name AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-1.15-ubuntu18.04-py37-cpu-inference
Name AzureML-mlflow-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu
Name AzureML-sklearn-0.24-ubuntu18.04-py37-cpu
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.9-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.10-ubuntu18.04-py38-cuda11-gpu
Name AzureML-pytorch-1.8-ubuntu18.04-py37-cuda11-gpu
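###Markdown
A curated environment from the list above can also be fetched by name and passed straight to a ScriptRunConfig, instead of building your own. A hedged sketch, using one of the names printed above:
###Code
from azureml.core import Environment

# Fetch a curated environment by name and inspect its packages (sketch only)
curated_env = Environment.get(workspace=ws, name='AzureML-sklearn-0.24-ubuntu18.04-py37-cpu')
print(curated_env.name)
print(curated_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____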
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
ComputeTargetException:
Message: Received bad response from Resource Provider:
Response Code: 400
Headers: {'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Length': '746', 'Content-Type': 'application/json; charset=utf-8', 'Expires': '-1', 'x-ms-ratelimit-remaining-subscription-writes': '1199', 'Request-Context': 'appId=cid-v1:67969c6a-972f-47a9-8267-e09d830cc328', 'x-ms-response-type': 'error', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'X-Content-Type-Options': 'nosniff', 'x-request-time': '0.112', 'x-ms-request-id': '07f77af5-0ff9-4b29-bcad-1ef6b2093c6e', 'x-ms-correlation-request-id': '07f77af5-0ff9-4b29-bcad-1ef6b2093c6e', 'x-ms-routing-request-id': 'SOUTHEASTASIA:20211124T014624Z:07f77af5-0ff9-4b29-bcad-1ef6b2093c6e', 'Date': 'Wed, 24 Nov 2021 01:46:23 GMT'}
Content: b'{\n "error": {\n "code": "UserError",\n "severity": null,\n "message": "Compute name is invalid. It can include letters, digits and dashes. It must start with a letter, end with a letter or digit, and be between 2 and 16 characters in length.",\n "messageFormat": null,\n "messageParameters": null,\n "referenceCode": null,\n "detailsUri": null,\n "target": null,\n "details": [],\n "innerError": null,\n "debugInfo": null,\n "additionalInfo": null\n },\n "correlation": {\n "operation": "8581d83112a4bf4ca6ea1bf22b95b4a5",\n "request": "eaf4a6a7b1116245"\n },\n "environment": "southeastasia",\n "location": "southeastasia",\n "time": "2021-11-24T01:46:24.0180869+00:00",\n "componentName": "machinelearningcompute"\n}'
InnerException None
ErrorResponse
{
"error": {
"message": "Received bad response from Resource Provider:\nResponse Code: 400\nHeaders: {'Cache-Control': 'no-cache', 'Pragma': 'no-cache', 'Content-Length': '746', 'Content-Type': 'application/json; charset=utf-8', 'Expires': '-1', 'x-ms-ratelimit-remaining-subscription-writes': '1199', 'Request-Context': 'appId=cid-v1:67969c6a-972f-47a9-8267-e09d830cc328', 'x-ms-response-type': 'error', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains', 'X-Content-Type-Options': 'nosniff', 'x-request-time': '0.112', 'x-ms-request-id': '07f77af5-0ff9-4b29-bcad-1ef6b2093c6e', 'x-ms-correlation-request-id': '07f77af5-0ff9-4b29-bcad-1ef6b2093c6e', 'x-ms-routing-request-id': 'SOUTHEASTASIA:20211124T014624Z:07f77af5-0ff9-4b29-bcad-1ef6b2093c6e', 'Date': 'Wed, 24 Nov 2021 01:46:23 GMT'}\nContent: b'{\\n \"error\": {\\n \"code\": \"UserError\",\\n \"severity\": null,\\n \"message\": \"Compute name is invalid. It can include letters, digits and dashes. It must start with a letter, end with a letter or digit, and be between 2 and 16 characters in length.\",\\n \"messageFormat\": null,\\n \"messageParameters\": null,\\n \"referenceCode\": null,\\n \"detailsUri\": null,\\n \"target\": null,\\n \"details\": [],\\n \"innerError\": null,\\n \"debugInfo\": null,\\n \"additionalInfo\": null\\n },\\n \"correlation\": {\\n \"operation\": \"8581d83112a4bf4ca6ea1bf22b95b4a5\",\\n \"request\": \"eaf4a6a7b1116245\"\\n },\\n \"environment\": \"southeastasia\",\\n \"location\": \"southeastasia\",\\n \"time\": \"2021-11-24T01:46:24.0180869+00:00\",\\n \"componentName\": \"machinelearningcompute\"\\n}'"
}
}
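###Markdown
The failure above is the naming rule in action: the placeholder *your-compute-cluster* is 20 characters long, so the resource provider rejects it. As a small sketch (not part of the original lab), a candidate name can be checked against the stated rules - starts with a letter, ends with a letter or digit, only letters, digits and dashes, 2 to 16 characters - before attempting to create the cluster:
###Code
import re

def is_valid_cluster_name(name):
    # 2-16 chars, starts with a letter, ends with a letter or digit,
    # letters/digits/dashes only (sketch of the rule quoted in the error above)
    return bool(re.fullmatch(r'[A-Za-z][A-Za-z0-9-]{0,14}[A-Za-z0-9]', name))

for candidate in ['your-compute-cluster', 'agcluster']:
    print(candidate, '->', is_valid_cluster_name(candidate))
###Output
_____no_output_____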
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it turns from **&#9899;** to **&#9711;**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, such as the **azureml-defaults** package that contains the libraries needed to work with an experiment run, as well as widely used packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file, adding packages with **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the command in the following cell to create a Conda specification file named *experiment_env.yml* in the experiment folder.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can create an environment for the experiment from your custom conda specification file.
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits the experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run, setting its **use_docker** attribute to **True** so that the script's environment is hosted in a Docker container. This is the default behavior, so you can omit it, but it's included here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required. You can view the metrics and outputs of the experiment run by running the code below or in Azure Machine Learning Studio - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having defined an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. To see environment reuse in action, let's create a folder and script to train a diabetes model using a different algorithm.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time, because a decision tree classifier doesn't require one).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly, because a matching environment was cached from the previous run and doesn't need to be recreated on the local compute. However, even on a different compute target the same environment would be created and used, keeping your experiment script's execution context consistent.
Now let's look at the metrics and outputs of the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can take advantage of pre-built "curated" environments for common experiment types. The following code lists all registered environments.
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environment names begin with ***AzureML-*** (you can't use this prefix for your own environments).
Create a compute cluster
In many cases, your local compute resources may not be sufficient to handle a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only while you use them.
You can create a compute cluster in [Azure Machine Learning Studio](https://ml.azure.com) or by using the Azure Machine Learning SDK. The following code cell checks whether your workspace contains a compute cluster with the specified name, and creates one if it doesn't.
> **Important**: Before running it, change *your-compute-cluster* in the code below to a suitable name for your compute cluster - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription quota doesn't allow this image, choose an alternative one - but bear in mind that a larger image may incur higher cost, while a smaller image may not be sufficient to complete the tasks. You can also ask your Azure administrator to extend your quota.
Run an experiment on a remote compute target
Now you can re-run the experiment you ran previously, but this time on the compute cluster you just created.
> **Note**: The experiment will take quite a lot longer this time, because an image containing the conda environment must be built, and then the cluster nodes must be started and the image deployed before the script can run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a much more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall experiment time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While the experiment is running, you can check the status of the compute in the widget above or in [Azure Machine Learning Studio](https://ml.azure.com). You can also check the compute status with the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
It will take a while for the status to change from *steady* to *resizing*, so feel free to take a short break. To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Keep an eye on the kernel indicator at the top right of the page; when it changes from **&9899;** to **&9711;**, the code has finished running.
After the experiment has finished, you can get the files and metrics generated by the experiment run. This time, the files include the logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# Note: files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits the experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# Load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# Note: files saved in the outputs folder are automatically uploaded into the experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time, because a decision tree classifier doesn't require one).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # データセットへの参照
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly, because a matching environment has been cached from the previous run and doesn't need to be recreated on the local compute. However, even on a different compute target the same environment would be created and used - ensuring consistency for your experiment script's execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments.
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; you pay for the resources only while you use them.
You can create a compute cluster in [Azure Machine Learning Studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with the specified name, and creates it if it doesn't exist.
> **Important**: Before running it, change *your-compute-cluster* in the code below to a suitable name for your compute cluster - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning Studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time for a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.27.0 to work with aml-revision
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
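###Markdown
As an aside, the same kind of environment can also be created from a Conda specification file rather than built up in code. The cell below is only a sketch of that approach; the file name *experiment_env.yml* and its contents are illustrative assumptions, not files created earlier in this notebook.
###Code
# Sketch: define an environment from a Conda specification file
# (the YAML below is written here just so the sketch is self-contained)
from azureml.core import Environment

spec = """name: diabetes-experiment-env-from-file
dependencies:
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
  - azureml-defaults
  - pyarrow
"""
with open('experiment_env.yml', 'w') as f:
    f.write(spec)

# Environment.from_conda_specification reads the YAML file and builds the environment definition
file_env = Environment.from_conda_specification('diabetes-experiment-env-from-file', 'experiment_env.yml')
print(file_env.name, 'defined from file.')
###Output
_____no_output_____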
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log - you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run, setting its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1622313798_c8c4e644/ROC_1622314131.png
ROC_1622314131.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
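###Markdown
Once registered, the environment can be fetched back by name from any notebook or script that uses the workspace. The cell below is a minimal sketch of that, assuming the registration above succeeded; the specific version shown in the comment is optional and only for illustration.
###Code
# Sketch: retrieve the registered environment by name
from azureml.core import Environment

fetched_env = Environment.get(workspace=ws, name='diabetes-experiment-env')  # latest registered version
print(fetched_env.name, 'retrieved, version', fetched_env.version)

# A specific version can also be requested, for example:
# older_env = Environment.get(workspace=ws, name='diabetes-experiment-env', version='1')
###Output
_____no_output_____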
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.896
AUC 0.8806079626544304
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1622314194_7c51621e/ROC_1622314205.png
ROC_1622314205.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
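###Markdown
The environment caching described above happens automatically on first use, but you can also pre-build a registered environment's Docker image so that later runs (including runs on remote compute) don't pay the build cost. The cell below is an optional sketch of that, assuming the Environment.build API available in azureml-core:
###Code
# Sketch: pre-build the Docker image for the registered environment (optional)
build = registered_env.build(workspace=ws)
build.wait_for_completion(show_output=True)  # blocks until the image build finishes
###Output
_____no_output_____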
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-PyTorch-1.3-CPU
Name AzureML-Minimal
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-Tutorial
Name AzureML-Dask-CPU
Name AzureML-Dask-GPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-Pytorch1.7-Cuda11-OpenMpi4.1.0-py36
Name AzureML-Scikit-learn0.24-Cuda11-OpenMpi4.1.0-py36
Name AzureML-TensorFlow2.4-Cuda11-OpenMpi4.1.0-py36
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-Triton
Name AzureML-Minimal-Inference-CPU
Name AzureML-TensorFlow-1.15-Inference-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-XGBoost-0.9-Inference-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-PyTorch-1.6-Inference-CPU
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
Name AzureML-sklearn-0.24-ubuntu18.04-py37-cuda11-gpu
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
Name AzureML-tensorflow-1.15-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11.0.3-gpu-inference
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cpu-inference
Name AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference
Name AzureML-minimal-ubuntu18.04-py37-cpu-inference
Name AzureML-onnxruntime-1.6-ubuntu18.04-py37-cpu-inference
Name AzureML-xgboost-0.9-ubuntu18.04-py37-cpu-inference
Name AzureML-pytorch-1.6-ubuntu18.04-py37-cpu-inference
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
name: azureml_89df5be14924cc857aa5f46d9a70f519
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6d371ecf182c2188eea6bd5c6baef664
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6d371ecf182c2188eea6bd5c6baef664
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- azureml-widgets==1.29.0
- azureml-pipeline-core==1.29.0
- azureml-pipeline-steps==1.29.0
- azureml-opendatasets==1.29.0
- azureml-automl-core==1.29.0
- azureml-automl-runtime==1.29.0
- azureml-train-automl-client==1.29.0
- azureml-train-automl-runtime==1.29.0
- azureml-train-automl==1.29.0
- azureml-train==1.29.0
- azureml-sdk==1.29.0
- azureml-interpret==1.29.0
- azureml-tensorboard==1.29.0
- azureml-mlflow==1.29.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_0d4deb0f28bf775febfa2352d7a6a562
Name AzureML-Dask-CPU
packages channels:
- conda-forge
- pytorch
- defaults
dependencies:
- python=3.6.9
- pip:
- adlfs
- azureml-core==1.18.0.post1
- azureml-dataset-runtime==1.18.0
- dask[complete]
- dask-ml[complete]
- distributed
- fastparquet
- fsspec
- joblib
- jupyterlab
- lz4
- mpi4py
- notebook
- pyarrow
name: azureml_d407e2694bdeecd1113b9f2a6efdddf7
Name AzureML-Dask-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.9
- pip:
- azureml-defaults==1.18.0
- adlfs
- azureml-core==1.18.0.post1
- dask[complete]
- dask-ml[complete]
- distributed
- fastparquet
- fsspec
- joblib
- jupyterlab
- lz4
- mpi4py
- notebook
- pyarrow
- matplotlib
name: azureml_d093a03b8baffa8a67905fca27c6dbe0
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-Pytorch1.7-Cuda11-OpenMpi4.1.0-py36
packages channels:
- anaconda
- pytorch
- conda-forge
dependencies:
- python=3.6.9
- pip>=21.0,<22
- pytorch==1.7.1
- torchvision==0.8.2
- torchaudio==0.7.2
- cudatoolkit=11.0
- nvidia-apex==0.1.0
- pip:
- matplotlib>=3.3,<3.4
- psutil>=5.8,<5.9
- tqdm>=4.59,<4.60
- pandas>=1.1,<1.2
- theano>=1.0,<1.1
- scipy>=1.5,<1.6
- numpy>=1.10,<1.20
- azureml-core==1.26.0
- azureml-defaults==1.26.0
- azureml-mlflow==1.26.0
- azureml-telemetry==1.26.0
- azureml-train-restclients-hyperdrive==1.26.0
- azureml-train-core==1.26.0
- tensorboard==2.4.0
- horovod==0.20.0
- onnxruntime-gpu>=1.7,<1.8
- future==0.17.1
name: azureml_a3e3434bd3fec67ad455dfe16747f230
Name AzureML-Scikit-learn0.24-Cuda11-OpenMpi4.1.0-py36
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.9
- pip>=21.0,<22
- pip:
- matplotlib>=3.3,<3.4
- psutil>=5.8,<5.9
- tqdm>=4.59,<4.60
- pandas>=1.1,<1.2
- theano>=1.0,<1.1
- scipy>=1.5,<1.6
- numpy>=1.10,<1.20
- azureml-core==1.26.0
- azureml-defaults==1.26.0
- azureml-mlflow==1.26.0
- azureml-telemetry==1.26.0
- azureml-train-restclients-hyperdrive==1.26.0
- azureml-train-core==1.26.0
- scikit-learn==0.24.1
name: azureml_5fe324eb2cc5d6ab8afd92822e495375
Name AzureML-TensorFlow2.4-Cuda11-OpenMpi4.1.0-py36
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip>=21.0,<22
- pip:
- matplotlib>=3.3,<3.4
- psutil>=5.8,<5.9
- tqdm>=4.59,<4.60
- pandas>=1.1,<1.2
- theano>=1.0,<1.1
- scipy>=1.5,<1.6
- numpy>=1.10,<1.20
- azureml-core==1.26.0
- azureml-defaults==1.26.0
- azureml-mlflow==1.26.0
- azureml-telemetry==1.26.0
- azureml-train-restclients-hyperdrive==1.26.0
- azureml-train-core==1.26.0
- tensorflow-gpu==2.4.0
- tensorboard==2.4.0
- horovod==0.20.0
- onnxruntime-gpu>=1.7,<1.8
name: azureml_271cb08c51c462a516ec0a90ed30bd29
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.21.3
name: azureml_7e6ff5b1eacba077b859e5351382eaa7
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.29.0
- azureml-defaults[async]
- azureml-contrib-services==1.29.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.7.0-py3-none-manylinux1_x86_64.whl
name: azureml_0ef4b35fb5b4e60fde8248d77264364e
Name AzureML-Minimal-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-TensorFlow-1.15-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.21.3
- tensorboard==1.14.0
- future==0.17.1
name: azureml_8923046c715dcc2c3c3a0c06686202e8
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.21.3
- tensorboard==1.14.0
- future==0.17.1
name: azureml_8923046c715dcc2c3c3a0c06686202e8
Name AzureML-XGBoost-0.9-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.21.3
name: azureml_73f8463ce52c2ab913223a4859ca9849
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_a14be41cb4a9c73176b86ce9712c0f03
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_fc9d97769dd8859e4e735fe7da8f63d3
Name AzureML-PyTorch-1.6-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.29.0
- azureml-defaults==1.29.0
- azureml-telemetry==1.29.0
- azureml-train-restclients-hyperdrive==1.29.0
- azureml-train-core==1.29.0
- azureml-mlflow==1.29.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_34f5972128c84000a87304aace598d3e
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11-gpu
packages name: project_environment
dependencies:
- python=3.6.2
- pip:
- azureml-defaults
channels:
- anaconda
- conda-forge
Name AzureML-sklearn-0.24-ubuntu18.04-py37-cuda11-gpu
packages name: project_environment
dependencies:
- python=3.6.2
- pip:
- azureml-defaults
channels:
- anaconda
- conda-forge
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cuda11-gpu
packages name: project_environment
dependencies:
- python=3.6.2
- pip:
- azureml-defaults
channels:
- anaconda
- conda-forge
Name AzureML-tensorflow-1.15-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-tensorflow-2.4-ubuntu18.04-py37-cuda11.0.3-gpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-pytorch-1.7-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-sklearn-0.24.1-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-minimal-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-onnxruntime-1.6-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-xgboost-0.9-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
Name AzureML-pytorch-1.6-ubuntu18.04-py37-cpu-inference
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.29.0
name: project_environment
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "azCompCluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2, min_nodes=0)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
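###Markdown
The naming rule quoted above (2-16 characters; letters, digits, and the - character) is easy to get wrong, so a quick client-side check can save a failed provisioning call. The cell below is just a convenience sketch - the regular expression encodes the rule as stated above and does not check global uniqueness, which only the service can verify:
###Code
import re

def is_valid_cluster_name(name):
    # 2-16 characters, limited to letters, digits, and the - character
    return re.fullmatch(r'[A-Za-z0-9-]{2,16}', name) is not None

print(cluster_name, '->', is_valid_cluster_name(cluster_name))  # the name used above
print('this-name-is-way-too-long-for-a-cluster', '->', is_valid_cluster_name('this-name-is-way-too-long-for-a-cluster'))
###Output
_____no_output_____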
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 1
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8962222222222223
AUC 0.8806126078458612
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1622314407_212e1e59/ROC_1622315283.png
ROC_1622315283.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_8e8931313399e15aad043ba23f4d364df051dd5dff24838c9bb43abdfbf5b685_d.txt
azureml-logs/65_job_prep-tvmps_8e8931313399e15aad043ba23f4d364df051dd5dff24838c9bb43abdfbf5b685_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_8e8931313399e15aad043ba23f4d364df051dd5dff24838c9bb43abdfbf5b685_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/106_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
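###Markdown
If you want a local copy of any of these run artifacts - for example the trained model file - you can download them from the run history. The cell below is a minimal sketch using the SDK's `Run.download_file` method; the local folder name used here is only an illustrative choice, not part of the original lab.
###Code
import os
# Download the trained model from the run's outputs to a local folder
# ('downloaded_model' is only an example destination)
os.makedirs('downloaded_model', exist_ok=True)
run.download_file(name='outputs/diabetes_model.pkl',
                  output_file_path='downloaded_model/diabetes_model.pkl')
print(os.listdir('downloaded_model'))
###Output
_____no_output_____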
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 7
Training context : Compute cluster
AUC : 0.8806126078458612
Accuracy : 0.8962222222222223
diabetes_model version: 6
Training context : File dataset
AUC : 0.8468331741963582
Accuracy : 0.7793333333333333
diabetes_model version: 5
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 4
Training context : Parameterized script
AUC : 0.8483198169063138
Accuracy : 0.774
diabetes_model version: 3
Training context : Script
AUC : 0.8484929598487486
Accuracy : 0.774
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483198169063138
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8484929598487486
Accuracy : 0.774
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
PipelineTrainedClass version: 1
CreatedByAMLStudio : true
AutoMLcde5d93451 version: 1
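###Markdown
Once a model is registered you can also fetch it again later by name. The following cell is a small sketch of how that might look; omitting the version returns the latest registered version (version 7 in the listing above).
###Code
from azureml.core import Model

# Retrieve the latest version of the registered model by name
diabetes_model = Model(ws, name='diabetes_model')
print(diabetes_model.name, 'version', diabetes_model.version)
print(diabetes_model.properties)
###Output
_____no_output_____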
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.27.0 to work with oneweek
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
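###Markdown
As an alternative to building the dependency list in code, the same kind of dependencies can be described in a conda specification (YAML) file and loaded with `Environment.from_conda_specification`. The sketch below writes an example file first so it is self-contained; the file name, its contents and the environment name are assumptions for illustration, not part of the original lab.
###Code
import os
from azureml.core import Environment

# Write a conda specification describing similar dependencies
# (file name and location are just an example)
spec = """name: experiment_env
dependencies:
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
  - azureml-defaults
  - pyarrow
"""
spec_path = os.path.join(experiment_folder, 'experiment_env.yml')
with open(spec_path, 'w') as f:
    f.write(spec)

# Create an environment directly from the specification file
file_env = Environment.from_conda_specification('diabetes-env-from-yml', spec_path)
print(file_env.name, 'defined from', spec_path)
###Output
_____no_output_____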
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run, and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit this; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
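###Markdown
Whether the cluster was just created or already existed, you can enumerate everything that is registered in the workspace to confirm it is there - a quick sketch using the workspace's `compute_targets` collection.
###Code
# List the compute targets registered in this workspace
for name, target in ws.compute_targets.items():
    print(name, ':', target.type)
###Output
_____no_output_____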
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
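###Markdown
Before using the environment, you can double-check exactly what it will install. This is a small sketch that reuses the same serialization helper the curated-environment listing uses later in this notebook.
###Code
# Inspect the conda and pip dependencies this environment will install
print(diabetes_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____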
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run, and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit this; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
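###Markdown
Registering an environment again under the same name creates a new version rather than overwriting it. As a quick sketch, you can fetch the latest registered version by name and check its version number (an explicit version can also be requested via the `version` argument).
###Code
from azureml.core import Environment

# Retrieve the latest registered version of the environment by name
env = Environment.get(workspace=ws, name='diabetes-experiment-env')
print(env.name, 'version:', env.version)
###Output
_____no_output_____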
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
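###Markdown
The provisioning configuration above only sets the VM size and the maximum node count. If you want the cluster to scale all the way down to zero nodes when idle (as in the earlier run of this lab), you can also pass `min_nodes` and an idle timeout - a sketch with example values:
###Code
from azureml.core.compute import AmlCompute

# Example configuration that scales to zero when idle (values are illustrative)
autoscale_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
                                                         min_nodes=0,
                                                         max_nodes=2,
                                                         idle_seconds_before_scaledown=1800)
print('Autoscaling configuration created.')
###Output
_____no_output_____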
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a conda specification file, adding packages with **conda** or **pip** to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's recommended to include it in the conda dependencies.
Run the following cell to create a conda specification file named *experiment_env.yml* in the experiment folder.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use the custom conda specification file to create an environment for your experiment.
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a decision tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and creates it if it doesn't exist.
> **Important**: Before running it, change *your-compute-cluster* in the code below to a suitable name for your compute cluster - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
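###Markdown
The cluster above requests the *STANDARD_DS11_V2* VM size. If that size is not available in your subscription or region, you can list the VM sizes that AmlCompute supports in your workspace's region before choosing an alternative. This is a minimal sketch; the filter on the size name is just an example.
###Code
from azureml.core.compute import AmlCompute

# List the VM sizes available for Azure ML compute in this workspace's region
for size in AmlCompute.supported_vmsizes(workspace=ws):
    if 'DS11' in size['name'].upper():
        print(size)
###Output
_____no_output_____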
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost, while one that is too small may not be able to complete the tasks. You can also ask your Azure administrator to extend your quota.
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
Watch the kernel indicator at the top right of the page; when it changes from **⚫** to **◯**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
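###Markdown
As a final optional step, you can check the cluster one more time and remove it if you no longer need it. The sketch below reuses the status call from earlier; the `delete` call is left commented out so it is not run by accident.
###Code
# Check the cluster state one last time
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)

# Optional clean-up: uncomment to delete the cluster when you no longer need it
# training_cluster.delete()
###Output
_____no_output_____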
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
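###Markdown
Once registered, the dataset can be retrieved by name in later sessions instead of re-uploading the files. The cell below is a minimal sketch of that lookup, assuming the registration above succeeded with the name *diabetes dataset*:
###Code
from azureml.core import Dataset

# Retrieve the latest version of the registered dataset by name
diabetes_check = Dataset.get_by_name(workspace=ws, name='diabetes dataset')
print(diabetes_check.name, 'version', diabetes_check.version)

# A specific version can be pinned for reproducibility, for example:
# Dataset.get_by_name(ws, 'diabetes dataset', version=1)
###Output
_____no_output_____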
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
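###Markdown
As a quick check (a minimal sketch, assuming the registration above succeeded), you can fetch the environment back from the workspace and inspect the dependencies it was saved with:
###Code
from azureml.core import Environment

# Retrieve the registered environment by name; a version argument can pin a specific revision
env_check = Environment.get(workspace=ws, name='diabetes-experiment-env')
print(env_check.name, 'version', env_check.version)
print(env_check.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____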
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
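###Markdown
A curated environment can be assigned to a script run just like a custom one. The cell below is a minimal sketch, assuming a curated environment such as *AzureML-Minimal* exists in your workspace (note that the diabetes script itself needs **scikit-learn**, so for a real run you'd pick or extend an environment that includes it):
###Code
from azureml.core import Environment, ScriptRunConfig

# Fetch a curated environment by name and attach it to a script configuration
curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')
curated_config = ScriptRunConfig(source_directory=experiment_folder,
                                 script='diabetes_training.py',
                                 environment=curated_env)
print('Configured to run with environment:', curated_env.name)
###Output
_____no_output_____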
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
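###Markdown
The provisioning configuration accepts further settings that help control cost, such as scaling down to zero nodes when the cluster is idle. A minimal sketch (the values below are illustrative, not prescriptive):
###Code
from azureml.core.compute import AmlCompute

# Allow the cluster to scale between 0 and 2 nodes and release idle nodes after 30 minutes
scaled_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
                                                      min_nodes=0,
                                                      max_nodes=2,
                                                      idle_seconds_before_scaledown=1800)
# This configuration could be passed to ComputeTarget.create in place of the one above
print('Provisioning configuration created.')
###Output
_____no_output_____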
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
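###Markdown
If you want the run artifacts locally - for example the build and compute logs listed above, or the trained model - you can download them from the run. A minimal sketch, assuming the run above completed (the local folder name is arbitrary):
###Code
import os

download_folder = 'downloaded_run_files'
os.makedirs(download_folder, exist_ok=True)

# Download everything the run stored under its outputs/ path
run.download_files(prefix='outputs', output_directory=download_folder)
print(os.listdir(download_folder))
###Output
_____no_output_____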
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with mba_dp100
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['Users/mikkel.ahlgren/data/diabetes.csv', 'Users/mikkel.ahlgren/data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'Users/mikkel.ahlgren/07_diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting Users/mikkel.ahlgren/07_diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1618144195_88973897/ROC_1618144222.png
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC_1618144222.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
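###Markdown
Any of the files listed above can also be pulled down individually - a minimal sketch that downloads the serialized model from this run (the local file name is just an example):
###Code
# Download a single named file from the run's stored artifacts
run.download_file(name='outputs/diabetes_model.pkl', output_file_path='downloaded_diabetes_model.pkl')
print('Model file downloaded.')
###Output
_____no_output_____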
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'Users/mikkel.ahlgren/071_diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Overwriting Users/mikkel.ahlgren/071_diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8951111111111111
AUC 0.8791241557917574
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1618144231_d40bdc6e/ROC_1618144243.png
ROC_1618144243.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/9_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-Tutorial
Name AzureML-PyTorch-1.3-CPU
Name AzureML-Minimal
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-AutoML-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-Designer-Score
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-XGBoost-0.9-Inference-CPU
Name AzureML-PyTorch-1.6-Inference-CPU
Name AzureML-Minimal-Inference-CPU
Name AzureML-TensorFlow-1.15-Inference-CPU
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- azureml-widgets==1.25.0
- azureml-pipeline-core==1.25.0
- azureml-pipeline-steps==1.25.0
- azureml-opendatasets==1.25.0
- azureml-automl-core==1.25.0
- azureml-automl-runtime==1.25.0.post1
- azureml-train-automl-client==1.25.0
- azureml-train-automl-runtime==1.25.0
- azureml-train-automl==1.25.0
- azureml-train==1.25.0
- azureml-sdk==1.25.0
- azureml-interpret==1.25.0
- azureml-tensorboard==1.25.0
- azureml-mlflow==1.25.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_220813aa74c252741cc887dcbeb01c68
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
name: azureml_d8a588ff2e406566dfa4b87bad6fb795
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-pipeline-core==1.25.0
- azureml-telemetry==1.25.0
- azureml-defaults==1.25.0
- azureml-interpret==1.25.0
- azureml-automl-core==1.25.0
- azureml-automl-runtime==1.25.0.post1
- azureml-train-automl-client==1.25.0
- azureml-train-automl-runtime==1.25.0
- azureml-dataset-runtime==1.25.0
- azureml-mlflow==1.25.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_78bd14dfeefbfaa73eeef13fc3e3cc1c
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a0839646d50ace28a8758be3e7363044
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a0839646d50ace28a8758be3e7363044
Name AzureML-Designer-Score
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-score-modules==0.0.16
name: azureml_18573b1d77e5ef62bcbe8903c11ceafe
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_8767a37bc436bb9800ce6c34cc7772c5
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_2ef7ea6075be6eec3d785912da5909d8
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_fd0a713fe25275c9186878f4b7a6698c
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9d2a515d5c77954f2d0562cc5eb8a1fc
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.25.0
- azureml-defaults[async]
- azureml-contrib-services==1.25.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.4.0-py3-none-manylinux1_x86_64.whl
name: azureml_f14f58afccac32d7d9284aceb5afe95b
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_4c567a28a1ff3c83693c442e0588dbbb
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_2bae673ebe2ee2381dcc57c1481deff0
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- azureml-mlflow==1.25.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_0fe27d937dd50f935be7288c00937ea4
Name AzureML-XGBoost-0.9-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-PyTorch-1.6-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-Minimal-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-TensorFlow-1.15-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "mbaCompute1"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Creating
Succeeded
AmlCompute wait for completion finished
Minimum number of nodes requested have been provisioned
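###Markdown
When you've finished with this notebook you can remove the cluster so it doesn't accrue charges (with a *min_nodes* of 0 it scales back to zero nodes when idle anyway, but deleting is the surest clean-up). A sketch only - the call is commented out so it isn't run before the remaining cells:
###Code
# Delete the compute target once it's no longer needed
# training_cluster.delete()
print('Cluster', cluster_name, 'left in place for the rest of this notebook.')
###Output
_____no_output_____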
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 0
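###Markdown
For a closer look at what the cluster is doing while the run is queued, you can list its individual nodes - a minimal sketch (the exact keys in each node record may vary by SDK version, so they're read defensively here):
###Code
# Each entry describes one provisioned node, including its current state
for node in training_cluster.list_nodes():
    print(node.get('nodeState', 'unknown'), node.get('runId', ''))
###Output
_____no_output_____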
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8966666666666666
AUC 0.8811103069277058
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1618144781_08c833bb/ROC_1618145694.png
ROC_1618145694.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_70cb6584bb43d4ba663989cb381cd58ff5561008d3f596ee9dc4b38cad60f528_d.txt
azureml-logs/65_job_prep-tvmps_70cb6584bb43d4ba663989cb381cd58ff5561008d3f596ee9dc4b38cad60f528_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_70cb6584bb43d4ba663989cb381cd58ff5561008d3f596ee9dc4b38cad60f528_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/103_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.8811103069277058
Accuracy : 0.8966666666666666
diabetes_model version: 4
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
06_diabetes_model.pkl version: 1
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 3
Training context : Parameterized script
AUC : 0.8482685705756505
Accuracy : 0.7736666666666666
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8483014080302502
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
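###Markdown
Registered models can be retrieved by name in later sessions - by default you get the latest version, which is how a downstream deployment script would typically pick up the model trained above. A minimal sketch:
###Code
from azureml.core import Model

# Retrieve the latest version of the registered model and inspect its metadata
latest_model = Model(ws, name='diabetes_model')
print(latest_model.name, 'version', latest_model.version, latest_model.properties)

# Its files can be downloaded locally if needed (target folder name is arbitrary), for example:
# latest_model.download(target_dir='downloaded_model', exist_ok=True)
###Output
_____no_output_____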
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# Load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment in a Conda specification file and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies.
Run the cell below to create a Conda specification file named *experiment_env.yml* in the same folder as this notebook.
###Code
%%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
# The python interpreter version.
# Currently Azure ML only supports 3.5.2 and later.
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
###Output
_____no_output_____
###Markdown
Now you can use the custom conda specification file to create an environment for your experiment.
###Code
from azureml.core import Environment
# Create a Python environment for the experiment (from the .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Let Azure ML manage dependencies
experiment_env.python.user_managed_dependencies = False
# Print the environment details
print(experiment_env.name, 'defined.')
print(experiment_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget; in the **azureml_logs/60_control_log.txt** output log, you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=experiment_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
experiment_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *experiment_env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# Load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# Calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# Calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# Plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time, because a Decision Tree classifier doesn't require it).
###Code
# Get the registered environment
registered_env = Environment.get(ws, 'experiment_env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
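###Markdown
Because curated environments are just environments registered under reserved names, you can retrieve one directly and use it like any other environment. The sketch below assumes the *AzureML-Minimal* environment shown in the listing above is available in your workspace.
###Code
from azureml.core import Environment

# Retrieve a curated environment by its exact name
curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')
print(curated_env.name)

# Inspect the packages it defines
print(curated_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____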
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.
You can create a compute cluster in [Azure Machine Learning Studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with the specified name, and creates one if it doesn't exist.
> **Important**: Before running it, change *your-compute-cluster* in the code below to a suitable name for your compute cluster. You can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length; valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image - but bear in mind that a larger image may incur higher cost, and an image that is too small may not be able to complete the tasks. You can also ask your Azure administrator to extend your quota.
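As an optional check, the small sketch below (assuming the `ws` workspace connection from earlier) lists the VM sizes that Azure ML compute supports in your workspace's region, which can help you pick a size covered by your quota.
###Code
from azureml.core.compute import AmlCompute

# List the VM sizes available to Azure ML compute in this workspace's region
for vm_size in AmlCompute.supported_vmsizes(workspace=ws):
    print(vm_size['name'])
###Output
_____no_output_____
###Markdown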
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check the status of the compute in the widget above or in [Azure Machine Learning Studio](https://ml.azure.com). You can also check the status of the compute using the following command.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time for a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
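###Markdown
If you would rather keep the kernel responsive and check on the run periodically instead of blocking, you can poll its status - a minimal sketch using the same `run` object (the terminal states checked below are the standard ones reported by the SDK).
###Code
import time

# Poll the run status instead of blocking on wait_for_completion()
while run.get_status() not in ['Completed', 'Failed', 'Canceled']:
    print('Run status:', run.get_status())
    time.sleep(30)
print('Final status:', run.get_status())
###Output
_____no_output_____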
###Markdown
Watch the kernel indicator at the top right of the page; when it changes from **⚫** to **◯**, the code has finished running.
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
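###Markdown
Once registered, the model can be retrieved from the registry on any machine that can reach the workspace. A brief sketch follows, assuming the *diabetes_model* registered above; the download folder name is arbitrary.
###Code
from azureml.core import Model

# Retrieve the latest registered version of the model
diabetes_model = Model(ws, name='diabetes_model')
print(diabetes_model.name, 'version:', diabetes_model.version)

# Download the model files locally (the folder name is just an example)
local_path = diabetes_model.download(target_dir='downloaded_model', exist_ok=True)
print('Downloaded to', local_path)
###Output
_____no_output_____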
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
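###Markdown
Once the dataset is registered you can retrieve it by name from anywhere that can reach the workspace, and pull a small sample to sanity-check the columns. This is an optional sketch that assumes the *diabetes dataset* registered above.
###Code
from azureml.core import Dataset

# Retrieve the registered dataset by name (latest version)
diabetes_check = Dataset.get_by_name(ws, name='diabetes dataset')

# Take a small sample and convert it to a pandas dataframe for a quick look
sample_df = diabetes_check.take(5).to_pandas_dataframe()
print(sample_df.head())
###Output
_____no_output_____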
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# ライブラリをインポートする
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# スクリプト引数を取得する
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# 正規化ハイパーパラメーターを設定する
reg = args.reg_rate
# 実験実行コンテキストを取得する
run = Run.get_context()
# 糖尿病データを読み込む (入力データセットとして渡される)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# 特徴とラベルを分離する
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# データをトレーニング セットとテスト セットに分割する
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# ロジスティック回帰モデルをトレーニングする
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# 精度を計算する
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# AUC を計算する
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# ROC 曲線をプロットする
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# 対角 50% ラインをプロットする
plt.plot([0, 1], [0, 1], 'k--')
# モデルによって達成された FPR と TPR をプロットする
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# 出力フォルダーに保存されたファイルは、自動的に実験レコードにアップロードされます
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
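###Markdown
Before submitting anything, you can print the Conda specification that Azure ML will build from these dependencies - the same `serialize_to_string` call used later in this notebook to inspect curated environments. This quick check assumes the `diabetes_env` object defined in the cell above.
###Code
# Print the Conda specification that will be used to build the environment
print(diabetes_env.python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____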
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True**, in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# ライブラリをインポートする
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# スクリプト引数を取得する
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# 実験実行コンテキストを取得する
run = Run.get_context()
# 糖尿病データを読み込む (入力データセットとして渡される)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# 特徴とラベルを分離する
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# データをトレーニング セットとテスト セットに分割する
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# デシジョン ツリー モデルをトレーニングする
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# 精度を計算する
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# AUC を計算する
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# ROC 曲線をプロットする
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# 対角 50% ラインをプロットする
plt.plot([0, 1], [0, 1], 'k--')
# モデルによって達成された FPR と TPR をプロットする
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# 出力フォルダーに保存されたファイルは、自動的に実験レコードにアップロードされます
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time, because a Decision Tree classifier doesn't require it).
###Code
# Get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when you use them.
You can create a compute cluster in [Azure Machine Learning Studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with the specified name, and creates one if it doesn't exist.
> **Important**: Before running it, change *your-compute-cluster* in the code below to a suitable name for your compute cluster. You can specify the name of an existing cluster if you have one. Cluster names must be globally unique and between 2 and 16 characters in length; valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# Submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check the status of the compute in the widget above or in [Azure Machine Learning Studio](https://ml.azure.com). You can also check the status of the compute using the following command.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time for a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
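###Markdown
If a remote run misbehaves, the driver log is usually the first file worth reading. As an optional sketch, you can download an individual file by one of the names printed by `run.get_file_names()` above; the driver log name and local output path used below are typical values, not guaranteed ones.
###Code
# Download the driver log from the run for offline inspection
# (the file name is assumed to appear in the run's file list; the output path is just an example)
run.download_file(name='azureml-logs/70_driver_log.txt', output_file_path='70_driver_log.txt')
print('Log downloaded')
###Output
_____no_output_____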
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.22.0 to work with dp100
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
diabetes-experiment-env defined.
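###Markdown
If you already maintain a Conda specification file for local development, you can build the environment from that file instead of listing packages in code. This is a hedged sketch - the *environment.yml* file name and the *diabetes-file-env* name are hypothetical and not part of this lab.
###Code
from azureml.core import Environment

# Create an environment from an existing Conda specification file
# (the file path below is hypothetical - substitute your own .yml file)
file_env = Environment.from_conda_specification(name='diabetes-file-env',
                                                file_path='environment.yml')
print(file_env.name, 'defined from file.')
###Output
_____no_output_____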
###Markdown
Now you can use the environment to run a script as an experiment.
The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log; you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--regularization', 0.1, # Regularizaton rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616520193_bd1a4b8f/ROC_1616520612.png
ROC_1616520612.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
Register the environment
Having gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).
With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.
Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8966666666666666
AUC 0.8811103069277058
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616521235_3ab82b5b/ROC_1616521250.png
ROC_1616521250.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/10_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environments
In addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-AutoML-GPU
Name AzureML-PyTorch-1.1-CPU
Name AzureML-TensorFlow-1.12-GPU
Name AzureML-Chainer-5.1.0-CPU
Name AzureML-TensorFlow-1.13-CPU
Name AzureML-Minimal
Name AzureML-PyTorch-1.4-GPU
Name AzureML-PyTorch-1.0-CPU
Name AzureML-Tutorial
Name AzureML-Scikit-learn-0.20.3
Name AzureML-PyTorch-1.2-GPU
Name AzureML-PyTorch-1.1-GPU
Name AzureML-Hyperdrive-ForecastDNN
Name AzureML-TensorFlow-1.13-GPU
Name AzureML-TensorFlow-1.10-CPU
Name AzureML-TensorFlow-1.12-CPU
Name AzureML-PySpark-MmlSpark-0.15
Name AzureML-PyTorch-1.3-CPU
Name AzureML-TensorFlow-2.0-GPU
Name AzureML-PyTorch-1.0-GPU
Name AzureML-PyTorch-1.4-CPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-PyTorch-1.2-CPU
Name AzureML-TensorFlow-2.0-CPU
Name AzureML-PyTorch-1.3-GPU
Name AzureML-Chainer-5.1.0-GPU
Name AzureML-TensorFlow-1.10-GPU
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.1-CPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-TensorFlow-2.1-GPU
Name AzureML-Triton
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-PyTorch-1.5-CPU
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-Designer-Score
Name AzureML-PyTorch-1.6-GPU
Name AzureML-TensorFlow-2.3-GPU
###Markdown
All curated environments have names that begin ***AzureML-*** (you can't use this prefix for your own environments).
Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.24.0.post1
- azureml-pipeline-core==1.24.0
- azureml-telemetry==1.24.0
- azureml-defaults==1.24.0
- azureml-interpret==1.24.0
- azureml-automl-core==1.24.0
- azureml-automl-runtime==1.24.0
- azureml-train-automl-client==1.24.0
- azureml-train-automl-runtime==1.24.0
- azureml-dataset-runtime==1.24.0
- azureml-dataprep==2.11.2
- azureml-mlflow==1.24.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_cde433fc51995440f5f84a38d2f2e6fd
Name AzureML-PyTorch-1.1-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-telemetry==1.19.0
- azureml-train-restclients-hyperdrive==1.19.0
- azureml-train-core==1.19.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_6e145d82f92c27509a9b9e457edff086
Name AzureML-TensorFlow-1.12-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.12.0
- horovod==0.15.2
name: azureml_f6491bb45aa53d4e966d894b801f618f
Name AzureML-Chainer-5.1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- chainer==5.1.0
- mpi4py==3.0.0
name: azureml_5beb73f5839a4cc0a61198ee0bfa449d
Name AzureML-TensorFlow-1.13-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.13.1
- horovod==0.16.1
name: azureml_71d30d49ae0ea16ff794742485e953e5
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
name: azureml_39d18bde647c9e3afa8a97c1b8e8468f
Name AzureML-PyTorch-1.4-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-PyTorch-1.0-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- azureml-widgets==1.21.0
- azureml-pipeline-core==1.21.0
- azureml-pipeline-steps==1.21.0
- azureml-opendatasets==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-train-automl==1.21.0
- azureml-train==1.21.0
- azureml-sdk==1.21.0
- azureml-interpret==1.21.0
- azureml-tensorboard==1.21.0
- azureml-mlflow==1.21.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_df6ad66e80d4bc0030b6d046a4e46427
Name AzureML-Scikit-learn-0.20.3
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- scikit-learn==0.20.3
- scipy==1.2.1
- joblib==0.13.2
name: azureml_3d6fa1d835846f1a28a18b506bcad70f
Name AzureML-PyTorch-1.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.2
- torchvision==0.4.0
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_7b29eb0faf69300b2c4353d784107d34
Name AzureML-PyTorch-1.1-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.1
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_89dbc5bca1a4bdc6fd62f99a3d6295e5
Name AzureML-Hyperdrive-ForecastDNN
packages dependencies:
- python=3.7
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-pipeline-core==1.21.0
- azureml-telemetry==1.21.0
- azureml-defaults==1.21.0
- azureml-automl-core==1.21.0
- azureml-automl-runtime==1.21.0
- azureml-train-automl-client==1.21.0
- azureml-train-automl-runtime==1.21.0.post1
- azureml-contrib-automl-dnn-forecasting==1.21.0
name: azureml_551b0d285970bc512cb183aa28be2c7f
Name AzureML-TensorFlow-1.13-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==1.13.1
- horovod==0.16.1
name: azureml_08e699281a2ab6d3b68ab09f106952c4
Name AzureML-TensorFlow-1.10-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.10
- horovod==0.15.2
name: azureml_3810220929dbc5cb90f19492d15e7151
Name AzureML-TensorFlow-1.12-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow==1.12
- horovod==0.15.2
name: azureml_935139c0a8e56a190fafce06d6edc3cd
Name AzureML-PySpark-MmlSpark-0.15
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
name: azureml_ba04eb03753f110d643f552f15c3bb42
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-TensorFlow-2.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- tensorflow-gpu==2.0.0
- horovod==0.18.1
name: azureml_65a7428a47e1ac7aed09e91b25d6e127
Name AzureML-PyTorch-1.0-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.0
- torchvision==0.2.1
- mkl==2018.0.3
- horovod==0.16.1
name: azureml_2e157ca425d4987ff14c3f307bed97cf
Name AzureML-PyTorch-1.4-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.4.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9478de1acf723bf396f36694a0291988
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
###Markdown
Create a compute cluster
In many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data, and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments, paying for the resources only when using them.
You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.
> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "compute-07"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Creating
Succeeded
AmlCompute wait for completion finished
Minimum number of nodes requested have been provisioned
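###Markdown
Because you pay for cluster nodes while they are allocated, it's worth being explicit about scale-down behaviour when you provision a cluster. The sketch below only builds a provisioning configuration; the creation call is left commented out, and the parameter values and the *cost-aware-cluster* name are illustrative assumptions.
###Code
from azureml.core.compute import AmlCompute, ComputeTarget

# Example provisioning configuration that scales back to zero nodes when idle
cost_aware_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
                                                          min_nodes=0,
                                                          max_nodes=2,
                                                          idle_seconds_before_scaledown=1800)
print('Provisioning configuration defined.')

# To actually create a cluster with it, uncomment the lines below and choose a unique name
# cost_aware_cluster = ComputeTarget.create(ws, 'cost-aware-cluster', cost_aware_config)
# cost_aware_cluster.wait_for_completion(show_output=True)
###Output
_____no_output_____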
###Markdown
Run an experiment on remote compute
Now you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created.
> **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 0
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9008888888888889
AUC 0.8859198496550613
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1616521696_a112ef00/ROC_1616523114.png
ROC_1616523114.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_07ba58714004401f7550a5e155cfaeea29029d786b7d8ca33ac6003729dbcfd2_d.txt
azureml-logs/65_job_prep-tvmps_07ba58714004401f7550a5e155cfaeea29029d786b7d8ca33ac6003729dbcfd2_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_07ba58714004401f7550a5e155cfaeea29029d786b7d8ca33ac6003729dbcfd2_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/103_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 3
Training context : Compute cluster
AUC : 0.8859198496550613
Accuracy : 0.9008888888888889
diabetes_model version: 2
Training context : File dataset
AUC : 0.8568743524381947
Accuracy : 0.7891111111111111
diabetes_model version: 1
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoMLd7268af350 version: 1
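###Markdown
When you have finished experimenting, remember that the cluster itself - not just its nodes - can be deleted so nothing keeps accruing charges. This is a cautious sketch: the delete call is left commented out because later labs may still need the cluster.
###Code
# Delete the compute cluster when it is no longer needed
# (uncomment only when you are sure you won't reuse it)
# training_cluster.delete()
print('Remember to delete unused compute targets when you no longer need them.')
###Output
_____no_output_____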
###Markdown
Work with Compute
When you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:
* The Python environment for the script, which must include all Python packages used in the script.
* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.
In this notebook, you'll explore *environments* and *compute targets* for experiments.
Connect to your workspace
To get started, connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
_____no_output_____
###Markdown
Prepare data for an experiment
In this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version).
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
_____no_output_____
###Markdown
Create a training script
Run the following two cells to create:
1. A folder for a new experiment
2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Define an environment
When you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages, including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.
You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.
> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
_____no_output_____
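###Markdown
As an optional alternative (a minimal sketch, not part of the lab steps): if you keep your dependencies in a conda specification YAML file, you could build an equivalent environment directly from that file. The file path used here is hypothetical.
###Code
from azureml.core import Environment

# Build an environment from a conda specification file (hypothetical path ./environment.yml)
# The YAML file would list the same conda and pip packages defined in the cell above.
yaml_env = Environment.from_conda_specification(name='diabetes-env-from-yaml',
                                                file_path='./environment.yml')
print(yaml_env.name, 'defined from conda specification.')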
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.> **Note**: The code below creates a **DockerConfiguration** for the script run and sets its **use_docker** attribute to **True** in order to host the script's environment in a Docker container. This is the default behavior, so you can omit it; but we're including it here to be explicit.
###Code
from azureml.core import Experiment, ScriptRunConfig
from azureml.core.runconfig import DockerConfiguration
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
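###Markdown
If you want local copies of the run artifacts (for example the trained model or the ROC image), you could download them from the run. This is a small optional sketch; the local folder name is arbitrary.
###Code
import os

# Download everything the run stored under its 'outputs' folder to a local directory
download_folder = 'downloaded_run_outputs'
os.makedirs(download_folder, exist_ok=True)
run.download_files(prefix='outputs', output_directory=download_folder)
print(os.listdir(download_folder))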
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
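###Markdown
As a quick optional check, you could retrieve the environment you just registered and confirm its registered version number:
###Code
# Retrieve the registered environment and show its version
registered_check = Environment.get(workspace=ws, name='diabetes-experiment-env')
print(registered_check.name, 'version', registered_check.version)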
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
_____no_output_____
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env,
docker_runtime_config=DockerConfiguration(use_docker=True)) # Use docker to host environment
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
_____no_output_____
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
_____no_output_____
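###Markdown
If one of the curated environments suits your experiment, you can retrieve it by name just like your own registered environments. A minimal sketch, assuming the *AzureML-Minimal* curated environment is available in your workspace:
###Code
# Get a curated environment by name (assumes it appears in the workspace's environment list)
curated_env = Environment.get(workspace=ws, name='AzureML-Minimal')
print('Retrieved curated environment:', curated_env.name)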
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "your-compute-cluster"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
_____no_output_____
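###Markdown
The provisioning configuration above only sets the VM size and maximum node count. As a sketch (optional settings, values are illustrative), you could also control autoscaling behavior - for example keeping the minimum node count at zero and scaling down after a period of inactivity:
###Code
# Illustrative autoscale settings for a compute cluster (not required for this exercise)
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2',
                                                       min_nodes=0,  # scale to zero when idle
                                                       max_nodes=2,
                                                       idle_seconds_before_scaledown=1800)  # 30 minutes idle before scaling down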
###Markdown
> **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota. Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
_____no_output_____
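###Markdown
If you prefer to poll the cluster rather than watch the widget, a simple loop like the following (a sketch; the polling interval is arbitrary) prints the allocation state until at least one node is running:
###Code
import time

# Poll the cluster status a few times (illustrative only)
for _ in range(5):
    cluster_state = training_cluster.get_status()
    print(cluster_state.allocation_state, cluster_state.current_node_count)
    if cluster_state.current_node_count > 0:
        break
    time.sleep(30)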
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
_____no_output_____
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
_____no_output_____
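###Markdown
To work with the latest registered version of the model later (for example when deploying it), you could retrieve it by name. A minimal sketch:
###Code
from azureml.core import Model

# Retrieve the latest version of the registered model by name
latest_model = Model(ws, 'diabetes_model')
print(latest_model.name, 'version', latest_model.version)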
###Markdown
Work with ComputeWhen you run a script as an Azure Machine Learning experiment, you need to define the execution context for the experiment run. The execution context is made up of:* The Python environment for the script, which must include all Python packages used in the script.* The compute target on which the script will be run. This could be the local workstation from which the experiment run is initiated, or a remote compute target such as a training cluster that is provisioned on-demand.In this notebook, you'll explore *environments* and *compute targets* for experiments. Connect to your workspaceTo get started, connect to your workspace.> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
###Code
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
###Output
Ready to use Azure ML 1.26.0 to work with mls-dp100
###Markdown
Prepare data for an experimentIn this notebook, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if it already exists, the code will find the existing version)
###Code
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
###Output
Dataset already registered.
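###Markdown
As a quick optional sanity check, you could retrieve the registered dataset and preview a few rows before using it in an experiment:
###Code
# Preview the first few rows of the registered tabular dataset
diabetes_preview = ws.datasets.get("diabetes dataset")
diabetes_preview.take(5).to_pandas_dataframe()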
###Markdown
Create a training scriptRun the following two cells to create:1. A folder for a new experiment2. A training script file that uses **scikit-learn** to train a model and **matplotlib** to plot a ROC curve.
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_logistic'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument('--regularization', type=float, dest='reg_rate', default=0.01, help='regularization rate')
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Set regularization hyperparameter
reg = args.reg_rate
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_logistic/diabetes_training.py
###Markdown
Define an environmentWhen you run a Python script as an experiment in Azure Machine Learning, a Conda environment is created to define the execution context for the script. Azure Machine Learning provides a default environment that includes many common packages; including the **azureml-defaults** package that contains the libraries necessary for working with an experiment run, as well as popular packages like **pandas** and **numpy**.You can also define your own environment and add packages by using **conda** or **pip**, to ensure your experiment has access to all the libraries it requires.> **Note**: The conda dependencies are installed first, followed by the pip dependencies. Since the **pip** package is required to install the pip dependencies, it's good practice to include it in the conda dependencies (Azure ML will install it for you if you forget, but you'll see a warning in the log!)
###Code
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
diabetes_env = Environment("diabetes-experiment-env")
diabetes_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
diabetes_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies (conda or pip as required)
diabetes_packages = CondaDependencies.create(conda_packages=['scikit-learn','ipykernel','matplotlib','pandas','pip'],
pip_packages=['azureml-sdk','pyarrow'])
# Add the dependencies to the environment
diabetes_env.python.conda_dependencies = diabetes_packages
print(diabetes_env.name, 'defined.')
###Output
'enabled' is deprecated. Please use the azureml.core.runconfig.DockerConfiguration object with the 'use_docker' param instead.
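###Markdown
The warning above indicates that setting `docker.enabled` on the environment is deprecated. A minimal sketch of the newer pattern (the same approach used elsewhere in this notebook) is to pass a **DockerConfiguration** to the script run configuration instead:
###Code
from azureml.core.runconfig import DockerConfiguration

# Newer approach: request Docker at the ScriptRunConfig level rather than on the environment
docker_config = DockerConfiguration(use_docker=True)
# ...then pass docker_runtime_config=docker_config when creating the ScriptRunConfig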
###Markdown
Now you can use the environment to run a script as an experiment.The following code assigns the environment you created to a ScriptRunConfig, and submits an experiment. As the experiment runs, observe the run details in the widget and in the **azureml_logs/60_control_log.txt** output log, where you'll see the conda environment being built.
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
                                arguments = ['--regularization', 0.1, # Regularization rate parameter
'--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=diabetes_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
The experiment successfully used the environment, which included all of the packages it required - you can view the metrics and outputs from the experiment run in Azure Machine Learning Studio, or by running the code below - including the model trained using **scikit-learn** and the ROC chart image generated using **matplotlib**.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Regularization Rate 0.1
Accuracy 0.7891111111111111
AUC 0.8568509052814499
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1617853665_6725dba5/ROC_1617853815.png
ROC_1617853815.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/18942_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
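###Markdown
The file list above includes the serialized model. If you want a local copy of a specific artifact, you could download it by name (a sketch; the local file name is arbitrary):
###Code
# Download the trained model file from the run to the local folder
run.download_file(name='outputs/diabetes_model.pkl', output_file_path='diabetes_model_local.pkl')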
###Markdown
Register the environmentHaving gone to the trouble of defining an environment with the packages you need, you can register it in the workspace.
###Code
# Register the environment
diabetes_env.register(workspace=ws)
###Output
_____no_output_____
###Markdown
Note that the environment is registered with the name you assigned when you first created it (in this case, *diabetes-experiment-env*).With the environment registered, you can reuse it for any scripts that have the same requirements. For example, let's create a folder and script to train a diabetes model using a different algorithm:
###Code
import os
# Create a folder for the experiment files
experiment_folder = 'diabetes_training_tree'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder, 'folder created')
%%writefile $experiment_folder/diabetes_training.py
# Import libraries
import argparse
from azureml.core import Run
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get script arguments
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='training_dataset_id', help='training dataset')
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
# load the diabetes data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', np.float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', np.float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
os.makedirs('outputs', exist_ok=True)
# note file saved in the outputs folder is automatically uploaded into experiment record
joblib.dump(value=model, filename='outputs/diabetes_model.pkl')
run.complete()
###Output
Writing diabetes_training_tree/diabetes_training.py
###Markdown
Now you can retrieve the registered environment and use it in a new experiment that runs the alternative training script (there is no regularization parameter this time because a Decision Tree classifier doesn't require it).
###Code
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails
# get the registered environment
registered_env = Environment.get(ws, 'diabetes-experiment-env')
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
environment=registered_env)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
This time the experiment runs more quickly because a matching environment has been cached from the previous run, so it doesn't need to be recreated on the local compute. However, even on a different compute target, the same environment would be created and used - ensuring consistency for your experiment script execution context.Let's look at the metrics and outputs from the experiment.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.8993333333333333
AUC 0.8836080927197905
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1617853869_87101c92/ROC_1617853880.png
ROC_1617853880.png
azureml-logs/60_control_log.txt
azureml-logs/70_driver_log.txt
logs/azureml/19304_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
outputs/diabetes_model.pkl
###Markdown
View registered environmentsIn addition to registering your own environments, you can leverage pre-built "curated" environments for common experiment types. The following code lists all registered environments:
###Code
from azureml.core import Environment
envs = Environment.list(workspace=ws)
for env in envs:
print("Name",env)
###Output
Name diabetes-experiment-env
Name AzureML-Tutorial
Name AzureML-Minimal
Name AzureML-PyTorch-1.5-CPU
Name AzureML-PyTorch-1.5-GPU
Name AzureML-Designer-Score
Name AzureML-TensorFlow-2.2-GPU
Name AzureML-TensorFlow-2.2-CPU
Name AzureML-PyTorch-1.6-CPU
Name AzureML-PyTorch-1.6-GPU
Name AzureML-Triton
Name AzureML-TensorFlow-2.3-CPU
Name AzureML-TensorFlow-2.3-GPU
Name AzureML-DeepSpeed-0.3-GPU
Name AzureML-XGBoost-0.9-Inference-CPU
Name AzureML-PyTorch-1.6-Inference-CPU
Name AzureML-Minimal-Inference-CPU
Name AzureML-TensorFlow-1.15-Inference-CPU
Name AzureML-VowpalWabbit-8.8.0
Name AzureML-PyTorch-1.3-CPU
Name AzureML-AutoML-GPU
###Markdown
All curated environments have names that begin with ***AzureML-*** (you can't use this prefix for your own environments).Let's explore the curated environments in more depth and see what packages are included in each of them.
###Code
for env in envs:
if env.startswith("AzureML"):
print("Name",env)
print("packages", envs[env].python.conda_dependencies.serialize_to_string())
###Output
Name AzureML-Tutorial
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- azureml-widgets==1.25.0
- azureml-pipeline-core==1.25.0
- azureml-pipeline-steps==1.25.0
- azureml-opendatasets==1.25.0
- azureml-automl-core==1.25.0
- azureml-automl-runtime==1.25.0.post1
- azureml-train-automl-client==1.25.0
- azureml-train-automl-runtime==1.25.0
- azureml-train-automl==1.25.0
- azureml-train==1.25.0
- azureml-sdk==1.25.0
- azureml-interpret==1.25.0
- azureml-tensorboard==1.25.0
- azureml-mlflow==1.25.0
- mlflow
- sklearn-pandas
- pandas
- numpy
- tqdm
- scikit-learn
- matplotlib
name: azureml_220813aa74c252741cc887dcbeb01c68
Name AzureML-Minimal
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
name: azureml_d8a588ff2e406566dfa4b87bad6fb795
Name AzureML-PyTorch-1.5-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a0839646d50ace28a8758be3e7363044
Name AzureML-PyTorch-1.5-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- torch==1.5.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.19.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a0839646d50ace28a8758be3e7363044
Name AzureML-Designer-Score
packages channels:
- defaults
dependencies:
- python=3.6.8
- pip:
- azureml-designer-score-modules==0.0.16
name: azureml_18573b1d77e5ef62bcbe8903c11ceafe
Name AzureML-TensorFlow-2.2-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow-gpu==2.2.0
- horovod==0.19.5
name: azureml_8767a37bc436bb9800ce6c34cc7772c5
Name AzureML-TensorFlow-2.2-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow==2.2.0
- horovod==0.19.5
name: azureml_2ef7ea6075be6eec3d785912da5909d8
Name AzureML-PyTorch-1.6-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_fd0a713fe25275c9186878f4b7a6698c
Name AzureML-PyTorch-1.6-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.18.0.post1
- azureml-defaults==1.18.0
- azureml-telemetry==1.18.0
- azureml-train-restclients-hyperdrive==1.18.0
- azureml-train-core==1.18.0
- cmake==3.18.2
- torch==1.6.0
- torchvision==0.5.0
- mkl==2018.0.3
- horovod==0.20.0
- tensorboard==1.14.0
- future==0.17.1
name: azureml_9d2a515d5c77954f2d0562cc5eb8a1fc
Name AzureML-Triton
packages channels:
- conda-forge
dependencies:
- python=3.7.9
- pip:
- azureml-core==1.25.0
- azureml-defaults[async]
- azureml-contrib-services==1.25.0
- numpy
- inference-schema[numpy-support]
- grpcio-tools
- geventhttpclient
- https://developer.download.nvidia.com/compute/redist/tritonclient/tritonclient-2.4.0-py3-none-manylinux1_x86_64.whl
name: azureml_f14f58afccac32d7d9284aceb5afe95b
Name AzureML-TensorFlow-2.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_4c567a28a1ff3c83693c442e0588dbbb
Name AzureML-TensorFlow-2.3-GPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- tensorflow-gpu==2.3.0
- cmake==3.18.2
- horovod==0.20.0
name: azureml_2bae673ebe2ee2381dcc57c1481deff0
Name AzureML-DeepSpeed-0.3-GPU
packages channels:
- pytorch
- conda-forge
dependencies:
- python=3.6.2
- cudatoolkit-dev=10.1.243
- cudatoolkit=10.1
- pytorch==1.6.0
- torchvision==0.7.0
- gxx_linux-64
- pip<=20.2
- pip:
- azureml-core==1.25.0
- azureml-defaults==1.25.0
- azureml-telemetry==1.25.0
- azureml-train-restclients-hyperdrive==1.25.0
- azureml-train-core==1.25.0
- azureml-mlflow==1.25.0
- azureml-dataprep
- cmake==3.18.2
- mkl==2018.0.3
- tensorboard==1.14.0
- future==0.17.1
- matplotlib
- boto3
- h5py
- sklearn
- scipy
- pillow
- tqdm
- cupy-cuda101
- mpi4py
- deepspeed==0.3.11
name: azureml_0fe27d937dd50f935be7288c00937ea4
Name AzureML-XGBoost-0.9-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-PyTorch-1.6-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-Minimal-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-TensorFlow-1.15-Inference-CPU
packages channels:
- anaconda
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-defaults==1.25.0
name: project_environment
Name AzureML-VowpalWabbit-8.8.0
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip:
- azureml-core==1.19.0
- azureml-defaults==1.19.0
- azureml-dataset-runtime[fuse,pandas]
name: azureml_769be4b756b756954fa484d1287d5153
Name AzureML-PyTorch-1.3-CPU
packages channels:
- conda-forge
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.21.0.post1
- azureml-defaults==1.21.0
- azureml-telemetry==1.21.0
- azureml-train-restclients-hyperdrive==1.21.0
- azureml-train-core==1.21.0
- torch==1.3
- torchvision==0.4.1
- mkl==2018.0.3
- horovod==0.18.1
- tensorboard==1.14.0
- future==0.17.1
name: azureml_a02f4fa469cd8066bd6e2f219433318d
Name AzureML-AutoML-GPU
packages channels:
- anaconda
- conda-forge
- pytorch
dependencies:
- python=3.6.2
- pip=20.2.4
- pip:
- azureml-core==1.25.0
- azureml-pipeline-core==1.25.0
- azureml-telemetry==1.25.0
- azureml-defaults==1.25.0
- azureml-interpret==1.25.0
- azureml-automl-core==1.25.0
- azureml-automl-runtime==1.25.0.post1
- azureml-train-automl-client==1.25.0
- azureml-train-automl-runtime==1.25.0
- azureml-dataset-runtime==1.25.0
- azureml-mlflow==1.25.0
- inference-schema
- py-cpuinfo==5.0.0
- boto3==1.15.18
- botocore==1.18.18
- numpy~=1.18.0
- scikit-learn==0.22.1
- pandas~=0.25.0
- fbprophet==0.5
- holidays==0.9.11
- setuptools-git
- psutil>5.0.0,<6.0.0
name: azureml_78bd14dfeefbfaa73eeef13fc3e3cc1c
###Markdown
Create a compute clusterIn many cases, your local compute resources may not be sufficient to process a complex or long-running experiment that needs to process a large volume of data; and you may want to take advantage of the ability to dynamically create and use compute resources in the cloud. Azure Machine Learning supports a range of compute targets, which you can define in your workspace and use to run experiments; paying for the resources only when using them.You can create a compute cluster in [Azure Machine Learning studio](https://ml.azure.com), or by using the Azure Machine Learning SDK. The following code cell checks your workspace for the existence of a compute cluster with a specified name, and if it doesn't exist, creates it.> **Important**: Change *your-compute-cluster* to a suitable name for your compute cluster in the code below before running it - you can specify the name of an existing cluster if you have one. Cluster names must be globally unique names between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "alazureml-cc0408"
try:
# Check for existing compute target
training_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
training_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
training_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
###Output
Found existing cluster, use it.
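###Markdown
If you're not sure which compute targets already exist in the workspace, you can list them; each entry shows the target's name and type. A small optional check:
###Code
# List the compute targets currently defined in the workspace
for ct_name, ct in ws.compute_targets.items():
    print(ct_name, ':', ct.type)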
###Markdown
Run an experiment on remote computeNow you're ready to re-run the experiment you ran previously, but this time on the compute cluster you created. > **Note**: The experiment will take quite a lot longer because a container image must be built with the conda environment, and then the cluster nodes must be started and the image deployed before the script can be run. For a simple experiment like the diabetes training script, this may seem inefficient; but imagine you needed to run a more complex experiment that takes several hours - dynamically creating more scalable compute may reduce the overall time significantly.
###Code
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
script='diabetes_training.py',
arguments = ['--input-data', diabetes_ds.as_named_input('training_data')],
environment=registered_env,
compute_target=cluster_name)
# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
While you're waiting for the experiment to run, you can check on the status of the compute in the widget above or in [Azure Machine Learning studio](https://ml.azure.com). You can also check the status of the compute using the code below.
###Code
cluster_state = training_cluster.get_status()
print(cluster_state.allocation_state, cluster_state.current_node_count)
###Output
Steady 0
###Markdown
Note that it will take a while before the status changes from *steady* to *resizing* (now might be a good time to take a coffee break!). To block the kernel until the run completes, run the cell below.
###Code
run.wait_for_completion()
###Output
_____no_output_____
###Markdown
After the experiment has finished, you can get the metrics and files generated by the experiment run. This time, the files will include logs for building the image and managing the compute.
###Code
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
print(file)
###Output
Accuracy 0.9011111111111111
AUC 0.886087297746153
ROC aml://artifactId/ExperimentRun/dcid.mslearn-train-diabetes_1617854076_6b914c72/ROC_1617854899.png
ROC_1617854899.png
azureml-logs/20_image_build_log.txt
azureml-logs/55_azureml-execution-tvmps_fee0f0918aff0509db9b76e938ea0d74db1deaf1cd0eb2464fa2e0b640114c10_d.txt
azureml-logs/65_job_prep-tvmps_fee0f0918aff0509db9b76e938ea0d74db1deaf1cd0eb2464fa2e0b640114c10_d.txt
azureml-logs/70_driver_log.txt
azureml-logs/75_job_post-tvmps_fee0f0918aff0509db9b76e938ea0d74db1deaf1cd0eb2464fa2e0b640114c10_d.txt
azureml-logs/process_info.json
azureml-logs/process_status.json
logs/azureml/103_azureml.log
logs/azureml/dataprep/backgroundProcess.log
logs/azureml/dataprep/backgroundProcess_Telemetry.log
logs/azureml/job_prep_azureml.log
logs/azureml/job_release_azureml.log
outputs/diabetes_model.pkl
###Markdown
Now you can register the model that was trained by the experiment.
###Code
from azureml.core import Model
# Register the model
run.register_model(model_path='outputs/diabetes_model.pkl', model_name='diabetes_model',
tags={'Training context':'Compute cluster'}, properties={'AUC': run.get_metrics()['AUC'], 'Accuracy': run.get_metrics()['Accuracy']})
# List registered models
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
###Output
diabetes_model version: 5
Training context : Compute cluster
AUC : 0.886087297746153
Accuracy : 0.9011111111111111
diabetes_model version: 4
Training context : File dataset
AUC : 0.8468331741963582
Accuracy : 0.7793333333333333
diabetes_model version: 3
Training context : Tabular dataset
AUC : 0.8568509052814499
Accuracy : 0.7891111111111111
diabetes_model version: 2
Training context : Parameterized script
AUC : 0.8484357430717946
Accuracy : 0.774
diabetes_model version: 1
Training context : Script
AUC : 0.8483203144435048
Accuracy : 0.774
amlstudio-designer-predict-dia version: 1
CreatedByAMLStudio : true
AutoML29253f2ad0 version: 1
|
Aircondition_MRO/HNA_Survival_Analysis/10-HNA-Stephen-1851M36P.ipynb | ###Markdown
Part1
###Code
from __future__ import unicode_literals
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import font_manager
from matplotlib.font_manager import FontProperties
font = FontProperties(fname=r"/root/anaconda2/envs/python3/lib/python3.6/site-packages/matplotlib/mpl-data/fonts/ttf/msyh.ttf")
import numpy as np
from sksurv.nonparametric import kaplan_meier_estimator
from sksurv.preprocessing import OneHotEncoder
from sksurv.linear_model import CoxnetSurvivalAnalysis#CoxPHSurvivalAnalysis
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.metrics import concordance_index_ipcw
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
data1 = pd.read_csv("1851M36P.csv", encoding = "GB2312")
#data1 = data1[data1["部件装上使用小时数"]!="00:00"]
data1["部件本次装机使用小时"] = data1["部件本次装机使用小时"].str.split(':').str[0].astype(int)
data1 = data1[data1["部件本次装机使用小时"]>0]
data1["IsPlanned"] = data1["拆换原因"]!="FAIL"
print(data1["IsPlanned"].value_counts())
data_y = data1[["IsPlanned", "部件本次装机使用小时"]]
data_y["部件本次装机使用小时"].hist(bins=12, range=(0,60000))
#data1["IsPlaneNew"] = data1["部件装上飞行小时数"]=="00:00"
data1["IsPartNew"] = data1["部件装上使用小时数"]=="00:00"
def CheckNew(p1):
if p1:
return "PartNew"
elif not p1:
return "PartOld"
#print([CheckNew(row["IsPlaneNew"], row["IsPartNew"]) for idx, row in data1.iterrows()])
data1["PlanePartType"] = [CheckNew(row["IsPartNew"]) for idx, row in data1.iterrows()]
data1["安装日期"] = pd.to_datetime(data1["安装日期"])
data1["安装年度"] = data1["安装日期"].dt.year
di = {"霍尼韦尔": "HONEYWELL"}
data1.replace({"最近送修公司": di}, inplace=True)
data1["最近送修公司"].fillna("Unknown", inplace=True)
#data1["FH TSN"].fillna("00:00", inplace=True)
#data1["部件装上飞行小时数"] = data1["部件装上飞行小时数"].str.split(':').str[0].astype(int)
data1["部件装上使用小时数"] = data1["部件装上使用小时数"].str.split(':').str[0].astype(int)
#data1["部件装上飞行小时数-Range"] = pd.cut(data1['部件装上飞行小时数'], 8)
#data1["部件装上飞行循环数-Range"] = pd.cut(data1['部件装上飞行循环数'], 8)
data1["部件装上使用小时数-Range"] = pd.cut(data1['部件装上使用小时数'], 4)
#data1["部件装上使用循环数-Range"] = pd.cut(data1['部件装上使用循环数'], 8)
#data1["CY TSN-Range"] = pd.cut(data1['CY TSN'], 8)
#data1["FH TSN-Range"] = pd.cut(data1['FH TSN'], 8)
#data_x = data1[["机型","制造序列号","机号","参考类型","指令类型","序号","拆换原因","部件装上飞行循环数","部件装上使用循环数",
# "部件拆下飞行循环数","部件拆下使用循环数","装上序号","最近送修公司","CY TSN","FH TSN"]]
#data_x = data1[["机型","参考类型","指令类型","拆换原因","部件装上飞行循环数","部件装上使用循环数",
# "部件拆下飞行循环数","部件拆下使用循环数","CY TSN","FH TSN"]]
data_x = data1[["机型","安装年度","部件装上使用小时数-Range", "最近送修公司","PlanePartType"]]
time, survival_prob = kaplan_meier_estimator(data_y["IsPlanned"], data_y["部件本次装机使用小时"])
plt.step(time, survival_prob, where="post")
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
# "机型","拆换年度","部件装上飞行小时数-Range","部件装上飞行循环数-Range","部件装上使用小时数-Range","部件装上使用循环数-Range","CY TSN-Range","FH TSN-Range", "最近送修公司"
#col = "机型"
#col = "参考类型"
col = "PlanePartType"
#col = "安装年度"
#col = "机型"
#print((data_x["最近送修公司"]!="上海航新") & (data_x["最近送修公司"]!="PP"))
y = data_y
x = data_x
for value in x[col].unique():
mask = x[col] == value
time_cell, survival_prob_cell = kaplan_meier_estimator(y["IsPlanned"][mask],
y["部件本次装机使用小时"][mask])
plt.step(time_cell, survival_prob_cell, where="post", label="%s (n = %d)" % (value, mask.sum()))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="upper right", prop=font)
# "机型","拆换年度","部件装上飞行小时数-Range","部件装上飞行循环数-Range","部件装上使用小时数-Range","部件装上使用循环数-Range","CY TSN-Range","FH TSN-Range", "最近送修公司"
#col = "机型"
#col = "参考类型"
col = "最近送修公司"
#col = "安装年度"
#col = "机型"
#print((data_x["最近送修公司"]!="上海航新") & (data_x["最近送修公司"]!="PP"))
filter1 = (data_x["最近送修公司"]!="上海航新") & (data_x["最近送修公司"]!="PP") & (data_x["最近送修公司"]!="海航技术")
y = data_y[filter1]
x = data_x[filter1]
for value in x[col].unique():
mask = x[col] == value
time_cell, survival_prob_cell = kaplan_meier_estimator(y["IsPlanned"][mask],
y["部件本次装机使用小时"][mask])
plt.step(time_cell, survival_prob_cell, where="post", label="%s (n = %d)" % (value, mask.sum()))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="upper right", prop=font)
#data_x.select_dtypes(exclude=['int','int64' 'float']).columns
data_x.describe()
#"部件装上飞行小时数-Range","部件装上飞行循环数-Range","部件装上使用小时数-Range","部件装上使用循环数-Range","CY TSN-Range","FH TSN-Range",
#
x = data_x.copy()
cat_features = ["机型", "安装年度","部件装上飞行小时数-Range","部件装上使用小时数-Range","FH TSN-Range", "最近送修公司","PlanePartType"]
# Keep only the categorical columns that actually exist in data_x (some -Range columns are only created by the commented-out pd.cut lines above)
cat_features = [col for col in cat_features if col in x.columns]
for col in cat_features:
    x[col] = x[col].astype('category')
data_x_numeric = OneHotEncoder().fit_transform(x[cat_features])
data_x_numeric.head()
null_columns=data1.columns[data1.isnull().any()]
data1[null_columns].isnull().sum()
#data_y = data_y.as_matrix()
y = data_y.to_records(index=False)
estimator = CoxPHSurvivalAnalysis() #CoxnetSurvivalAnalysis()
estimator.fit(data_x_numeric, y)
#pd.Series(estimator.coef_, index=data_x_numeric.columns)
prediction = estimator.predict(data_x_numeric)
result = concordance_index_censored(y["IsPlanned"], y["部件本次装机使用小时"], prediction)
print(result[0])
result = concordance_index_ipcw(y, y, prediction)
print(result[0])
def fit_and_score_features(X, y):
n_features = X.shape[1]
scores = np.empty(n_features)
m = CoxnetSurvivalAnalysis()
for j in range(n_features):
Xj = X[:, j:j+1]
m.fit(Xj, y)
scores[j] = m.score(Xj, y)
return scores
scores = fit_and_score_features(data_x_numeric.values, y)
pd.Series(scores, index=data_x_numeric.columns).sort_values(ascending=False)
x_new = data_x_numeric.loc[[46,77,200,593]]
#print(x_new)
data_x.loc[[46,77,200,593]]
y[[46,77,200,593]]
pred_surv = estimator.predict_survival_function(x_new)
for i, c in enumerate(pred_surv):
plt.step(c.x, c.y, where="post", label="Sample %d" % (i + 1))
plt.ylabel("est. probability of survival $\hat{S}(t)$")
plt.xlabel("time $t$")
plt.legend(loc="best")
pipe = Pipeline([('encode', OneHotEncoder()),
('select', SelectKBest(fit_and_score_features, k=3)),
('model', CoxPHSurvivalAnalysis())])
param_grid = {'select__k': np.arange(1, data_x_numeric.shape[1] -3)}
gcv = GridSearchCV(pipe, param_grid=param_grid, return_train_score=True, cv=3, iid=True)
gcv.fit(x, y)
pd.DataFrame(gcv.cv_results_).sort_values(by='mean_test_score', ascending=False)
pipe.set_params(**gcv.best_params_)
pipe.fit(x, y)
encoder, transformer, final_estimator = [s[1] for s in pipe.steps]
pd.Series(final_estimator.coef_, index=encoder.encoded_columns_[transformer.get_support()])
###Output
_____no_output_____
###Markdown
Part2
###Code
from sklearn.model_selection import train_test_split
from sksurv.metrics import (concordance_index_censored,
concordance_index_ipcw,
cumulative_dynamic_auc)
data_x = data1[["安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN"]]
def df_to_sarray(df):
"""
Convert a pandas DataFrame object to a numpy structured array.
This is functionally equivalent to but more efficient than
np.array(df.to_array())
:param df: the data frame to convert
:return: a numpy structured array representation of df
"""
v = df.values
cols = df.columns
if False: # python 2 needs .encode() but 3 does not
types = [(cols[i].encode(), df[k].dtype.type) for (i, k) in enumerate(cols)]
else:
types = [(cols[i], df[k].dtype.type) for (i, k) in enumerate(cols)]
dtype = np.dtype(types)
z = np.zeros(v.shape, dtype)
for (i, k) in enumerate(z.dtype.names):
z[:,i] = v[:, i]
return z
y = data_y.to_records(index=False)
x_train, x_test, y_train, y_test = train_test_split(data_x, y, test_size=0.2)#, random_state=1)
x_train = x_train.values
x_test = x_test.values
y_events_train = y_train[y_train['IsPlanned']==False]
train_min, train_max = y_events_train["部件本次装机使用小时"].min(), y_events_train["部件本次装机使用小时"].max()
y_events_test = y_test[y_test['IsPlanned']==False]
test_min, test_max = y_events_test["部件本次装机使用小时"].min(), y_events_test["部件本次装机使用小时"].max()
assert train_min <= test_min < test_max < train_max, \
"time range or test data is not within time range of training data."
times = np.percentile(data_y["部件本次装机使用小时"], np.linspace(5, 95, 15))
print(times)
import matplotlib
matplotlib.matplotlib_fname()
num_columns = ["安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN"]
def plot_cumulative_dynamic_auc(risk_score, label, color=None):
auc, mean_auc = cumulative_dynamic_auc(y_train, y_test, risk_score, times)
plt.plot(times, auc, marker="o", color=color, label=label)
plt.legend(prop = font)
plt.xlabel("time时间",fontproperties=font)
plt.ylabel("time-dependent AUC")
plt.axhline(mean_auc, color=color, linestyle="--")
for i, col in enumerate(num_columns):
plot_cumulative_dynamic_auc(x_test[:, i], col, color="C{}".format(i))
ret = concordance_index_ipcw(y_train, y_test, x_test[:, i], tau=times[-1])
###Output
_____no_output_____
###Markdown
Part3
###Code
data_x = data1[["机型","安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN", "最近送修公司","PlanePartType"]]
cat_features = ["机型", "安装年度", "最近送修公司","PlanePartType"]
for col in cat_features:
data_x[col] =data_x[col].astype('category')
times = np.percentile(data_y["部件本次装机使用小时"], np.linspace(5, 95, 15))
print(times)
estimator = CoxPHSurvivalAnalysis() #CoxnetSurvivalAnalysis()
estimator.fit(data_x_numeric, y)
from sklearn.pipeline import make_pipeline
y = data_y.to_records(index=False)
x_train, x_test, y_train, y_test = train_test_split(data_x, y, test_size=0.2)#, random_state=1)
cph = make_pipeline(OneHotEncoder(), CoxPHSurvivalAnalysis())
cph.fit(x_train, y_train)
result = concordance_index_censored(y_test["IsPlanned"], y_test["部件本次装机使用小时"], cph.predict(x_test))
print(result[0])
# estimate time-dependent AUC on the test set (the training data is used to estimate the censoring distribution)
va_auc, va_mean_auc = cumulative_dynamic_auc(y_train, y_test, cph.predict(x_test), times)
plt.plot(times, va_auc, marker="o")
plt.axhline(va_mean_auc, linestyle="--")
plt.xlabel("time from enrollment")
plt.ylabel("time-dependent AUC")
plt.grid(True)
print(y_test["部件本次装机使用小时"])
print(cph.predict_survival_function(x_test))
print(y_test["部件本次装机使用小时"] - cph.predict(x_test))
###Output
0.756783634038926
[24341 10021 11228 6 162 925 8692 13401 6 1736 94 3197
7054 13193 24324 1243 2810 8395 12582 16474 452 15863 32718 32618
15790 22 10 14395 5263 26845 13 5123 7 7852 6 13400
15 6855 4764 5727 15920 13955 2382 15848 2098 40 8193 3853
18057 5836 6109 17069 2205 315 98 2489 3099 13996 2281 30424
609 65 15869 3877 1647 1935 3166 12358 4369 25760 537 23217
21621 19 681 16516 24324 11413 37029 19146 17661 15757 2080 66
170 7419 12465 18203 17153 12 93 8 2757 10922 2500 15018
6041 2393 11133 28173 807 26479 2229 8509 3175 10559 25 4369
5032 15454 38 3904 8059 7452 11100 2680 16 26662 18 4047
1971 10175 8266 328 7472 20799 22579 23477 6925 32025 473 7593
14913 38822 4118 15451 391 2984 5399 290 64 57 18 5916
1930 4227 3023 6833 18239 35806 253 3149 244 29731 15255 605
10 10321 427 15017 3450 6942 10 19081 6 226 5940 695
8 11548 5614 396 8241 4705 12 14125 6925 192 135 121
2614 869 10946 16951 14163 806 17077 3387 467 28105 147 189
10448 3045 6148 5677 5772 47 3629 11951 1427 8 27694 16668
366 2042 39129 457 12882 20070 9522 7810 11 34692 1750 529
24 1002 19601 8089 20000]
###Markdown
Part4
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas
import seaborn as sns
from sklearn.model_selection import ShuffleSplit, GridSearchCV
from sksurv.datasets import load_veterans_lung_cancer
from sksurv.column import encode_categorical
from sksurv.metrics import concordance_index_censored
from sksurv.svm import FastSurvivalSVM
sns.set_style("whitegrid")
data_x = data1[["机型","安装年度","部件装上飞行小时数","部件装上使用小时数","FH TSN", "最近送修公司","PlanePartType"]]
cat_features = ["机型", "安装年度", "最近送修公司","PlanePartType"]
for col in cat_features:
data_x[col] = data_x[col].astype('category')
x = OneHotEncoder().fit_transform(data_x)#encode_categorical(data_x)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)#, random_state=1)
estimator = FastSurvivalSVM(optimizer="rbtree",rank_ratio=0.0, max_iter=1000, tol=1e-6, random_state=0, alpha=2.**-6)
estimator.fit(x_train, y_train)
prediction = estimator.predict(x_test)
result = concordance_index_censored(y_test["IsPlanned"], y_test["部件本次装机使用小时"], prediction)
print(result[0])
estimator.predict(x_train)
estimator = FastSurvivalSVM(optimizer="rbtree", max_iter=1000, tol=1e-6, random_state=0)
def score_survival_model(model, X, y):
prediction = model.predict(X)
result = concordance_index_censored(y['IsPlanned'], y['部件本次装机使用小时'], prediction)
return result[0]
param_grid = {'alpha': 2. ** np.arange(-12, 13, 2)}
cv = ShuffleSplit(n_splits=20, test_size=0.4, random_state=0)
gcv = GridSearchCV(estimator, param_grid, scoring=score_survival_model,
n_jobs=12, iid=False, refit=False,
cv=cv)
param_grid
import warnings
y = data_y.to_records(index=False)
warnings.filterwarnings("ignore", category=UserWarning)
gcv = gcv.fit(x, y)
gcv.best_score_, gcv.best_params_
def plot_performance(gcv):
n_splits = gcv.cv.n_splits
cv_scores = {"alpha": [], "test_score": [], "split": []}
order = []
for i, params in enumerate(gcv.cv_results_["params"]):
name = "%.5f" % params["alpha"]
order.append(name)
for j in range(n_splits):
vs = gcv.cv_results_["split%d_test_score" % j][i]
cv_scores["alpha"].append(name)
cv_scores["test_score"].append(vs)
cv_scores["split"].append(j)
df = pandas.DataFrame.from_dict(cv_scores)
_, ax = plt.subplots(figsize=(11, 6))
sns.boxplot(x="alpha", y="test_score", data=df, order=order, ax=ax)
_, xtext = plt.xticks()
for t in xtext:
t.set_rotation("vertical")
plot_performance(gcv)
from sksurv.svm import FastKernelSurvivalSVM
from sksurv.kernels import clinical_kernel
x_train, x_test, y_train, y_test = train_test_split(data_x, y, test_size=0.5)#, random_state=1)
kernel_matrix = clinical_kernel(x_train)
kssvm = FastKernelSurvivalSVM(optimizer="rbtree", kernel="precomputed", random_state=0, alpha=2.**-6)
kssvm.fit(kernel_matrix, y_train)
x_test.shape
kernel_matrix = clinical_kernel(x_test[0:552])
prediction = kssvm.predict(kernel_matrix)
result = concordance_index_censored(y_test[0:552]["IsPlanned"], y_test[0:552]["部件本次装机使用小时"], prediction)
print(result[0])
kernel_matrix = clinical_kernel(data_x)
kssvm = FastKernelSurvivalSVM(optimizer="rbtree", kernel="precomputed", random_state=0, alpha=2.**-12)
kgcv = GridSearchCV(kssvm, param_grid, score_survival_model,
n_jobs=12, iid=False, refit=False,
cv=cv)
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
kgcv = kgcv.fit(kernel_matrix, y)
kgcv.best_score_, kgcv.best_params_
plot_performance(kgcv)
###Output
_____no_output_____ |
pymoli_final.ipynb | ###Markdown
Total Number of Players and Player Count
###Code
df.SN.nunique()
###Output
_____no_output_____
###Markdown
Purchasing Analysis Number of Unique Items
###Code
df['Item ID'].nunique()
###Output
_____no_output_____
###Markdown
Average Purchase Price
###Code
'${:.2f}'.format(df.Price.mean())
###Output
_____no_output_____
###Markdown
Total Number of Purchases
###Code
df.shape[0]
###Output
_____no_output_____
###Markdown
Total Revenue
###Code
df.Price.sum()
###Output
_____no_output_____
###Markdown
Gender Demographics Count of players by gender
###Code
df.groupby(['SN', 'Gender']).count().reset_index()['Gender'].value_counts()
###Output
_____no_output_____
###Markdown
Percentage of players by gender
###Code
ax2 = df.groupby(['SN', 'Gender']).count().reset_index()['Gender'].value_counts().plot(kind='bar', figsize=(10,7),
color="indigo", fontsize=13);
ax2.set_alpha(0.8)
ax2.set_ylabel("Gender Count", fontsize=18);
ax2.set_yticks([i for i in range(0,500,100)])
# create a list to collect the plt.patches data
totals = []
# find the values and append to list
for i in ax2.patches:
totals.append(i.get_height())
# set individual bar lables using above list
total = sum(totals)
# set individual bar lables using above list
for i in ax2.patches:
# get_x pulls left or right; get_height pushes up or down
ax2.text(i.get_x()+.12, i.get_height(), \
str(round((i.get_height()/total)*100, 2))+'%', fontsize=22,
color='black')
###Output
_____no_output_____
###Markdown
Gender Purchasing Analysis
###Code
df.groupby(['Gender', 'SN']).count().reset_index()['Gender'].value_counts()
normed = df.groupby(['Gender', 'SN']).count().reset_index()['Gender'].value_counts(normalize=True)
absolute = df.groupby(['Gender', 'SN']).count().reset_index()['Gender'].value_counts(normalize=False)
gdf = pd.concat([normed, absolute], axis=1)
df_gender = df.groupby('Gender').agg(['sum', 'mean', 'count'])
df_gender.index
level0 = df_gender.columns.get_level_values(0)
level1 = df_gender.columns.get_level_values(1)
df_gender.columns = level0 + ' ' + level1
# df_gender = df_gender[['sum', 'mean', 'count']]
df_gender
df_gender = df_gender[['Price sum', 'Price mean', 'Price count']]
df_gender
df_gender = pd.concat([df_gender,absolute], axis=1)
df_gender['Normalized'] = df_gender['Price sum'] / df_gender.Gender
df_gender
###Output
_____no_output_____
###Markdown
Age Demographics
###Code
import seaborn as sns
age_df = df[['Age', 'SN']].drop_duplicates()
age_df.shape
sns.distplot(age_df['Age'], bins=10, kde=False)
ages = [0, 9.9, 14.9, 19.9, 24.9, 29.90, 34.90, 39.90, 99999]
age_groups = ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
df['Age_Group'] = pd.cut(df['Age'], ages, labels = age_groups)
age_df['Age_Group'] = pd.cut(age_df['Age'], ages, labels=age_groups)
age_out = pd.concat([age_df.Age_Group.value_counts(normalize=True),\
age_df.Age_Group.value_counts()], axis=1)
age_out.to_dict()['Age_Group']
age_norm = df.groupby('Age_Group').agg(['sum', 'mean', 'count'])['Price']
age_norm.reset_index(inplace=True)
age_norm["unique_buyers"] = age_norm["Age_Group"].map(lambda x: age_out.to_dict()['Age_Group'].get(x))
age_norm['normed_mean'] = age_norm['sum'] / age_norm['unique_buyers'].astype('float')
age_norm.rename(columns={'count': 'total_purchase_count', 'mean': 'ave_purchase_price','sum': 'total_purchase_value'})
###Output
_____no_output_____
###Markdown
Top Spenders
###Code
df['SN'].value_counts().head(15).plot.bar();
###Output
_____no_output_____
###Markdown
**Since the value count is the same for the 2nd[1] item and the 6th[5] spenders, I included all of those spenders.**
###Code
top_spenders = list(df['SN'].value_counts()[:6].to_dict().keys())
mask_spend = df['SN'].isin(top_spenders)
top_spenders_df = df[mask_spend]
top_spender_purchase_analysis = top_spenders_df.groupby('SN').Price.agg(['count', 'mean', 'sum'])
top_spender_purchase_analysis = top_spender_purchase_analysis.rename(columns={\
'count': 'Purchase Count', 'mean': 'Ave Purchase Price','sum': 'Total Purchase Value'})
top_spender_purchase_analysis
###Output
_____no_output_____
###Markdown
Most Popular Items
###Code
df['Item Name'].value_counts().head(15).plot.bar();
###Output
_____no_output_____
###Markdown
**Since the value count is the same for the 5th item and the 8th items, I included those in top items.**
###Code
top_items = list(df['Item Name'].value_counts()[:8].to_dict().keys())
top_items
mask = df['Item Name'].isin(top_items)
top_items_df = df[mask]
top_items_df.sort_values(['Item Name']).head()
item_purchase_analysis = top_items_df.groupby('Item Name').Price.agg(['count', 'mean', 'sum']).sort_values\
(by='count', ascending=False)
item_purchase_analysis = item_purchase_analysis.rename(columns={\
'count': 'Purchase Count', 'mean': 'Ave Purchase Price','sum': 'Total Purchase Value'})
item_purchase_analysis
#sort by purchase count
###Output
_____no_output_____
###Markdown
Most Profitable Items
###Code
most_profitable = df.groupby('Item Name')['Price'].agg(['sum', 'count']).\
sort_values(by='sum', ascending=False).nlargest(5, 'sum')
most_profitable.head()
#why don't Final Critic and Stormcaller show up in this group??
most_profitable.loc['Stormcaller']
most_profitable.loc['Final Critic']
most_profitable = most_profitable.rename(columns={\
'count': 'Purchase Count', 'sum': 'Total Purchase Value'})
most_profitable
###Output
_____no_output_____ |
Python - Hands-on Introduction to Python And Machine Learning.ipynb | ###Markdown
Our programs are getting rich enough now, that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
###Code
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
_____no_output_____
###Markdown
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action. Using while loops to process items in a listIn the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
_____no_output_____
###Markdown
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
_____no_output_____
###Markdown
Hands-on Introduction to Python And Machine Learning Instructor: Tak-Kei Lam(Readers are assumed to have a little bit programming background.) Getting started with Python(adapted from [this github repository](https://github.com/ehmatthes/intro_programming)) VariablesA variable holds a value of various types such as string, integer, real number and boolean.
###Code
message = "Hello world!" # message is a variable of type string, it holds 'Hello world!'
print(message)
###Output
_____no_output_____
###Markdown
In Python, the value of a variable can be modified.
###Code
message = "Hello world!"
print(message)
message = 'Hello world! I love Python!' # message's value is changed here
print(message)
###Output
_____no_output_____
###Markdown
Naming rules
+ Variables can contain only letters, numbers, and underscores. Variable names can start with a letter or an underscore, but can not start with a number
+ Spaces are not allowed in variable names, so we use underscores instead of spaces. For example, use student_name instead of "student name"
+ You cannot use [Python keywords](http://docs.python.org/3/reference/lexical_analysis.html#keywords) as variable names
+ Variable names should be descriptive, without being too long
+ *Case sensitive*
+ The naming rules actually also apply to other Python constructs

If you don't follow the rules, the Python interpreter will shout at you...
###Code
1lovehk = 'I love HK'
i love hk = 'I love HK'
for='Hong kong forever! (so does Wakanda)'
###Output
_____no_output_____
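###Markdown
By contrast, names that follow these rules work without complaint. Here is a quick sketch of some valid names:
###Code
# Valid names: letters, digits and underscores, not starting with a digit,
# and not clashing with Python keywords.
love_hk = 'I love HK'
i_love_hk_2 = 'I love HK too'
forever = 'Hong Kong forever! (so does Wakanda)'
print(love_hk, i_love_hk_2, forever)
###Output
_____no_output_____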
###Markdown
If you attempt to use variables that have not been defined...
###Code
message = 'What are the differences between a python and an anaconda?'
print(mesage)
###Output
_____no_output_____
###Markdown
Beware of typing mistakes! ** Exercise **:- Try to create a variable of any kind, name it in whatever way and see whether there are errors- And then type:type(your variable) StringsStrings are sets of characters. Single and double quotesStrings are contained by either single or double quotes.
###Code
my_string = "This is a double-quoted string."
my_string = 'This is a single-quoted string.' # use single quote if you are lazy
###Output
_____no_output_____
###Markdown
This lets us make strings that contain quotations without the need of _escape characters_. By the way, the inventor of another programming language, Perl, stated the *three virtues of a great programmer*:> Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote so you don't have to answer so many questions about it.>> Impatience: The anger you feel when the computer is being lazy. This makes you write programs that don't just react to your needs, but actually anticipate them. Or at least pretend to.>> Hubris: The quality that makes you write (and maintain) programs that other people won't want to say bad things about.
###Code
quote = "Linus Torvalds once said, 'Any program is only as good as it is useful.'"
print(quote)
###Output
_____no_output_____
###Markdown
Changing caseYou can easily change the case of a string, to present it the way you want it to look.
###Code
name = 'ada wong '
print(name)
print(name.title())
first_name = 'ada'
print(first_name)
print(first_name.title())
print(first_name.upper())
first_name = 'Ada'
print(first_name.lower())
###Output
_____no_output_____
###Markdown
You will see this syntax quite often, where a variable name is followed by a dot and then the name of an action, followed by a set of parentheses. The parentheses may be empty, or they may contain some values.variable_name.action()In this example, the word "action" is the name of a method. A method is something that can be done to a variable. The methods 'lower', 'title', and 'upper' are all functions that have been written into the Python language, which do something to strings. Later on, you will learn to write your own methods. Combining strings (concatenation)It is often very useful to be able to combine strings into a message or page element that we want to display. Again, this is easier to understand through an example.
###Code
first_name = 'ada'
last_name = 'wong'
full_name = first_name + ' ' + last_name
print(full_name.title())
###Output
_____no_output_____
###Markdown
The plus sign combines two strings into one, which is called "concatenation". You can use as many plus signs as you want in composing messages.
###Code
first_name = 'ada'
last_name = 'lovelace'
full_name = first_name + ' ' + last_name
message = full_name.title() + ' ' + "was considered the world's first computer programmer."
print(message)
###Output
_____no_output_____
###Markdown
WhitespaceThe term "whitespace" refers to characters that the computer is aware of, but are invisible to readers. The most common whitespace characters are spaces, tabs, and newlines.A space is just " ". The two-character sequence "\t" makes a tab appear in a string. Tabs can be used anywhere you like in a string. Similarly, newlines are created by a two-character sequence "\n".
###Code
print('Hello everyone!')
print('\tHello everyone!')
print('Hello \teveryone!')
###Output
_____no_output_____
###Markdown
The combination "\n" makes a newline appear in a string. You can use newlines anywhere you like in a string.
###Code
print('Hello everyone!')
print('\nHello everyone!')
print('Hello \neveryone!')
print('\n\n\nHello everyone!')
###Output
_____no_output_____
###Markdown
Stripping whitespaceMany times you will allow users to enter text into a box, and then you will read that text and use it. It is really easy for people to include extra whitespace at the beginning or end of their text. Whitespace includes spaces, tabs, and newlines.It is often a good idea to strip this whitespace from strings before you start working with them. In Python, it is really easy to strip whitespace from the left side, the right side, or both sides of a string.
###Code
name = ' ada '
print(name.lstrip()) # strip the spaces on the left hand side
print(name.rstrip()) # strip the spaces on the right hand side
print(name.strip()) # strip the spaces on both sides
###Output
_____no_output_____
###Markdown
It's hard to see exactly what is happening, so maybe the following will make it a little more clear:
###Code
name = ' ada '
print('-' + name.lstrip() + '-')
print('-' + name.rstrip() + '-')
print('-' + name.strip() + '-')
###Output
_____no_output_____
###Markdown
** Exercise **:- Try to print the following lines using only one print() (excluding the leading `#`s):
###Code
#********************************************************
#* *
#* I'm loving Python *
#* Let's make programming GREAT again *
#* *
#********************************************************
###Output
_____no_output_____
###Markdown
NumbersDealing with simple numerical data is fairly straightforward in Python, but there are a few things you should know about. IntegersYou can do all of the basic arithmetic operations with integers, and everything should behave as you expect.
###Code
print(3+2)
print(3-2)
print(3*2)
print(3/2)
print(3**2)
###Output
_____no_output_____
###Markdown
Arithmetic Operators

| Symbol | Task Performed |
|----|---|
| + | Addition |
| - | Subtraction |
| * | multiplication |
| ** | to the power of |
| / | division |
| // | floor division (divide and then round down to the nearest integer) |
| % | mod |

You can use parentheses to modify the standard order of operations.
###Code
standard_order = 2+3*4
print(standard_order)
my_order = (2+3)*4
print(my_order)
###Output
_____no_output_____
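###Markdown
The table above also lists floor division (`//`) and mod (`%`), which the examples so far have not shown. A quick sketch:
###Code
print(7 // 2)   # floor division: divide, then round down -> 3
print(7 % 2)    # mod: the remainder after division -> 1
print(-7 // 2)  # floor division rounds down, not toward zero -> -4
###Output
_____no_output_____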
###Markdown
Floating-point numbersFloating-point numbers refer to any number with a decimal point. Most of the time, you can think of floating point numbers as decimals, and they will behave as you expect them to. All the arithematic operators also apply to them.
###Code
print(0.1+0.1)
###Output
_____no_output_____
###Markdown
However, sometimes you will get an answer with an unexpectly long decimal part:
###Code
print(0.1+0.2)
###Output
_____no_output_____
###Markdown
This happens because of the way computers represent numbers internally; this has nothing to do with Python itself. Basically, we are used to working in powers of ten, where one tenth plus two tenths is just three tenths. But computers work in powers of two. So your computer has to represent 0.1 in a power of two, and then 0.2 as a power of two, and express their sum as a power of two. There is no exact representation for 0.3 in powers of two, and we see that in the answer to 0.1+0.2.Python tries to hide this kind of stuff when possible. Don't worry about it much for now; just don't be surprised by it, and know that we will learn to clean up our results a little later on.You can also get the same kind of result with other operations.
###Code
print(3*0.1)
###Output
_____no_output_____
###Markdown
Floating-point division
###Code
print(4/2)
# Note: the behaviour of Python 3 and Python 2 regarding floating-point division is different.
# In Python 2, the result will be 2.
# If you are getting numerical results that you don't expect, or that don't make sense,
# check if the version of Python you are using is treating integers differently than you expect.
print(3/2)
# Note: the behaviour of Python 3 and Python 2 regarding floating-point division is different.
# In Python 2, the result will be 2.
# If you are getting numerical results that you don't expect, or that don't make sense,
# check if the version of Python you are using is treating integers differently than you expect.
###Output
_____no_output_____
###Markdown
** Exercise **:- Write some code that calculates the roots of a quadratic function given the variable coefficients:a, b, cThe formula is: $ \frac{-b \pm \sqrt{b^2-4ac}}{2a}$.If there are no roots, print "I'm groot!"; print the roots otherwise. CommentsAs you begin to write more complicated code, you will have to spend more time thinking about how to code solutions to the problems you want to solve. Once you come up with an idea, you will spend a fair amount of time troubleshooting your code, and revising your overall approach.Comments allow you to write more detailed and more human readable explanations about your program. In Python, any line that starts with a pound (#) symbol is ignored by the Python interpreter and is known as a comment.
###Code
# This line is a comment.
print('# This line is not a comment, it is code.')
###Output
_____no_output_____
###Markdown
What makes a good comment?- It is short and to the point, but a complete thought. Most comments should be written in complete sentences- It explains your thinking, so that when you return to the code later you will understand how you were approaching the problem- It explains your thinking, so that others who work with your code will understand your overall approach to a problem- It explains particularly difficult sections of code in detail When should you write comments?- When you have to think about code before writing it- When you are likely to forget later exactly how you were approaching a problem- When there is more than one way to solve a problem- When others are unlikely to anticipate your way of thinking about a problemWriting good comments is one of the clear signs of a good programmer. If you have any real interest in taking programming seriously, start using comments now. Lists A list is a collection of items, that is stored in a variable. The items should be related in some way, but there are no restrictions on what can be stored in a list. Here is a simple example of a list, and how we can quickly access each item in the list.
###Code
students = ['bernice', 'aaron', 'cody']
for student in students: # Hey this is a for-loop. We'll study it later.
print("Hello, " + student.title() + "!")
###Output
_____no_output_____
###Markdown
Naming and defining a listSince lists are collection of objects, it is good practice to give them a plural name. If each item in your list is a car, call the list 'cars'. If each item is a dog, call your list 'dogs'. This gives you a straightforward way to refer to the entire list ('dogs'), and to a single item in the list ('dog').In Python, square brackets designate a list. To define a list, you give the name of the list, the equals sign, and the values you want to include in your list within square brackets. ** Exercise **:- Declare a list of numbers
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
###Output
_____no_output_____
###Markdown
Accessing one item in a listItems in a list are identified by their position in the list, **starting with zero**. This will almost certainly trip you up at some point. Believe it or not, programmers even joke about how often we all make "off-by-one" errors, so don't feel bad when you make this kind of error.To access the first element in a list, you give the name of the list, followed by a zero in parentheses.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[0]
print(dog.title())
###Output
_____no_output_____
###Markdown
The number in parentheses is called the _index_ of the item. Because lists start at zero, the index of an item is always one less than its position in the list. So to get the second item in the list, we need to use an index of 1.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[1]
print(dog.title())
###Output
_____no_output_____
###Markdown
Accessing the last items in a listYou can probably see that to get the last item in this list, we would use an index of 2. This works, but it would only work because our list has exactly three items. To get the last item in a list, no matter how long the list is, you can use an index of -1. (Negative index are not quite common in programming languages. )
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-1]
print(dog.title())
###Output
_____no_output_____
###Markdown
This syntax also works for the second to last item, the third to last, and so forth.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-2]
print(dog.title())
###Output
_____no_output_____
###Markdown
You cannot use a number larger than the length of the list.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[3]
print(dog.title())
###Output
_____no_output_____
###Markdown
Similarly, you can't use a negative number larger than the length of the list.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-4]
print(dog.title())
###Output
_____no_output_____
###Markdown
Lists and Looping Accessing all elements in a listThis is one of the most important concepts related to lists. You can have a list with a million items in it, and in three lines of code you can write a sentence for each of those million items. If you want to understand lists, and become a competent programmer, make sure you take the time to understand this section.We use a loop to access all the elements in a list. A loop is a block of code that repeats itself until it runs out of items to work with, or until a certain condition is met. In this case, our loop will run once for every item in our list. With a list that is three items long, our loop will run three times.Let's take a look at how we access all the items in a list, and then try to understand how it works.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print(dog) # hey, why is this line indented?
###Output
_____no_output_____
###Markdown
We have already seen how to create a list, so we are really just trying to understand how the last two lines work. These last two lines make up a loop, and the language here can help us see what is happening:```python for dog in dogs:```- The keyword "for" tells Python to get ready to use a loop.- The variable "dog", with no "s" on it, is a temporary placeholder variable. This is the variable that Python will place each item in the list into, one at a time.- The first time through the loop, the value of "dog" will be 'border collie'.- The second time through the loop, the value of "dog" will be 'australian cattle dog'.- The third time through, "dog" will be 'labrador retriever'.- After this, there are no more items in the list, and the loop will end. Doing more with each itemWe can do whatever we want with the value of "dog" inside the loop. In this case, we just print the name of the dog.```python print(dog)```We are not limited to just printing the word dog. We can do whatever we want with this value, and this action will be carried out for every item in the list. Let's say something about each dog in our list.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dog + 's.')
###Output
_____no_output_____
###Markdown
Inside and outside the loopPython uses **indentation** to decide what is inside the loop and what is outside the loop. Code that is inside the loop will be run for every item in the list. Code that is not indented, which comes after the loop, will be run once just like regular code.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
# are we doing two or three things per iteration?
print('I like ' + dog + 's.')
print('No, I really really like ' + dog +'s!\n')
print("\nThat's just how I feel about dogs.")
# how about this version?
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
# how about writing the code in this way?
print('I like ' + dog + 's.')
print('No, I really really like ' + dog +'s!\n')
print("\nThat's just how I feel about dogs.")
###Output
_____no_output_____
###Markdown
By the way, indentation in Python really matters. Please pay attention to it when writing Python. We should be consistent: if we use two spaces for one level of indentation on one line, don't use three or four or other amount of spaces on other lines.You may be intersted in this article: [https://stackoverflow.blog/2017/06/15/developers-use-spaces-make-money-use-tabs/](https://stackoverflow.blog/2017/06/15/developers-use-spaces-make-money-use-tabs/) Enumerating a listWhen you are looping through a list, you may want to know the index of the current item. You could always use the *list.index(value)* syntax, but there is a simpler way. The *enumerate()* function tracks the index of each item for you, as it loops through the list:
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print("Results for the dog show are as follows:\n")
for index, dog in enumerate(dogs):
place = str(index)
print("Place: " + place + " Dog: " + dog.title())
###Output
_____no_output_____
###Markdown
To enumerate a list, you need to add an *index* variable to hold the current index. So instead of```python for dog in dogs:``` You have```python for index, dog in enumerate(dogs) ``` The value in the variable *index* is always an integer. If you want to print it in a string, you have to turn the integer into a string:```python str(index)``` The index always starts at 0, so in this example the value of *place* should actually be the current index, plus one:
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print("Results for the dog show are as follows:\n")
for index, dog in enumerate(dogs):
place = str(index + 1)
print("Place: " + place + " Dog: " + dog.title())
###Output
_____no_output_____
###Markdown
List enumeration is particularly useful when one piece of data is represented by elements at the same position in different lists (not a good practice, though). For instance:
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
bark = ['bark', 'bark bark', 'bark bark bark']
print('Barking dogs:\n')
for index, dog in enumerate(dogs):
print(dogs[index] + ': ' + bark[index])
###Output
_____no_output_____
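###Markdown
A more idiomatic way to walk through two related lists in parallel is the built-in *zip()* function, which pairs up the items position by position. (Just an aside; the enumerate version above works too.)
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
bark = ['bark', 'bark bark', 'bark bark bark']
# zip() pairs the items of the two lists, so we get one (dog, sound) pair per step.
for dog, sound in zip(dogs, bark):
    print(dog + ': ' + sound)
###Output
_____no_output_____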
###Markdown
Common list operations Modifying elements in a listYou can change the value of any element in a list if you know the position of that item.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs[0] = 'australian shepherd'
print(dogs)
###Output
_____no_output_____
###Markdown
Finding an element in a listIf you want to find out the position of an element in a list, you can use the index() function.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('australian cattle dog')) # the function index() here is not the variable 'index' we used in the previous examples
###Output
_____no_output_____
###Markdown
This method raises a ValueError if the requested item is not in the list.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('poodle'))
###Output
_____no_output_____
###Markdown
Testing whether an item is in a listYou can test whether an item is in a list using the "in" keyword. This will become more useful after learning how to use if-else statements.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print('australian cattle dog' in dogs)
print('poodle' in dogs)
###Output
_____no_output_____
###Markdown
Adding items to a list Appending items to the end of a listWe can add an item to a list using the append() method. This method adds the new item to the end of the list.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.append('poodle')
for dog in dogs:
print(dog.title() + "s are cool.")
###Output
_____no_output_____
###Markdown
Inserting items into a listWe can also insert items anywhere we want in a list, using the **insert()** function. We specify the position we want the item to have, and everything from that point on is shifted one position to the right. In other words, the index of every item after the new item is increased by one.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.insert(1, 'poodle')
print(dogs)
###Output
_____no_output_____
###Markdown
Note that you have to give the position of the new item first, and then the value of the new item. If you do it in the reverse order, you will get an error. Creating an empty listNow that we know how to add items to a list after it is created, we can use lists more dynamically. We are no longer stuck defining our entire list at once.A common approach with lists is to define an empty list, and then let your program add items to the list as necessary. This approach works, for example, when starting to build an interactive web site. Your list of users might start out empty, and then as people register for the site it will grow. This is a simplified approach to how web sites actually work, but the idea is realistic.Here is a brief example of how to start with an empty list, start to fill it up, and work with the items in the list. The only new thing here is the way we define an empty list, which is just an empty set of square brackets.
###Code
# Create an empty list to hold our users.
usernames = []
# Add some users.
usernames.append('bernice')
usernames.append('cody')
usernames.append('aaron')
# Greet all of our users.
for username in usernames:
print("Welcome, " + username.title() + '!')
###Output
_____no_output_____
###Markdown
If we don't change the order in our list, we can use the list to figure out who our oldest and newest users are.
###Code
# Create an empty list to hold our users.
usernames = []
# Add some users.
usernames.append('bernice')
usernames.append('cody')
usernames.append('aaron')
# Greet all of our users.
for username in usernames:
print("Welcome, " + username.title() + '!')
# Recognize our first user, and welcome our newest user.
print("\nThank you for being our very first user, " + usernames[0].title() + '!')
print("And a warm welcome to our newest user, " + usernames[-1].title() + '!')
###Output
_____no_output_____
###Markdown
Note that the code welcoming our newest user will always work, because we have used the index -1. If we had used the index 2 we would always get the third user, even as our list of users grows and grows. Sorting a ListWe can sort a list alphabetically, in either order.
###Code
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
# Display the list in its current order.
print("Our students are currently in alphabetical order.")
for student in students:
print(student.title())
#Put students in reverse alphabetical order.
students.sort(reverse=True)
# Display the list in its current order.
print("\nOur students are now in reverse alphabetical order.")
for student in students:
print(student.title())
###Output
_____no_output_____
###Markdown
*sorted()* vs. *sort()*Whenever you consider sorting a list using sort(), keep in mind that you can not recover the original order. If you want to display a list in sorted order, but preserve the original order, you can use the *sorted()* function. The *sorted()* function also accepts the optional *reverse=True* argument. Please note that sorted() is not a function of the list datastructure.
###Code
students = ['bernice', 'aaron', 'cody']
# Display students in alphabetical order, but keep the original order.
print("Here is the list in alphabetical order:")
for student in sorted(students):
print(student.title())
# Display students in reverse alphabetical order, but keep the original order.
print("\nHere is the list in reverse alphabetical order:")
for student in sorted(students, reverse=True):
print(student.title())
print("\nHere is the list in its original order:")
# Show that the list is still in its original order.
for student in students:
print(student.title())
###Output
_____no_output_____
###Markdown
Reversing a listWe have seen three possible orders for a list:- The original order in which the list was created- Alphabetical order- Reverse alphabetical orderThere is one more order we can use, and that is the reverse of the original order of the list. The *reverse()* function gives us this order.
###Code
students = ['bernice', 'aaron', 'cody']
students.reverse()
print(students)
###Output
_____no_output_____
###Markdown
Note that reverse is permanent, although you could follow up with another call to *reverse()* and get back the original order of the list. Sorting a numerical listAll of the sorting functions work for numerical lists as well.
###Code
numbers = [1, 3, 4, 2]
# sort() puts numbers in increasing order.
numbers.sort()
print(numbers)
# sort(reverse=True) puts numbers in decreasing order.
numbers.sort(reverse=True)
print(numbers)
numbers = [1, 3, 4, 2]
# sorted() preserves the original order of the list:
print(sorted(numbers))
print(numbers)
numbers = [1, 3, 4, 2]
# The reverse() function also works for numerical lists.
numbers.reverse()
print(numbers)
###Output
_____no_output_____
###Markdown
** Exercise **:- Write a program to find the 2nd largest of an integer array.- If the array is not large enough, print "Not enough data"; print the result otherwise.For example, suppose the integer array (intarray) is:intarray = [1, 2, 3, 4, 5, 6, 7, 9, 10]The result should be 9. Finding the length of a listYou can find the length of a list using the *len()* function.
###Code
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print(user_count)
###Output
_____no_output_____
###Markdown
There are many situations where you might want to know how many items in a list. If you have a list that stores your users, you can find the length of your list at any time, and know how many users you have.
###Code
# Create an empty list to hold our users.
usernames = []
# Add some users, and report on how many users we have.
usernames.append('bernice')
user_count = len(usernames)
print("We have " + str(user_count) + " user!")
usernames.append('cody')
usernames.append('aaron')
user_count = len(usernames)
print("We have " + str(user_count) + " users!")
###Output
_____no_output_____
###Markdown
On a technical note, the *len()* function returns an integer, which can't be printed directly with strings. We use the *str()* function to turn the integer into a string so that it prints nicely:
###Code
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will cause an error: " + user_count)
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will work: " + str(user_count))
###Output
_____no_output_____
###Markdown
Removing Items from a ListHopefully you can see by now that lists are a dynamic structure. We can define an empty list and then fill it up as information comes into our program. To become really dynamic, we need some ways to remove items from a list when we no longer need them. You can remove items from a list through their position, or through their value. Removing items by positionIf you know the position of an item in a list, you can remove that item using the *del* command. To use this approach, give the command *del* and the name of your list, with the index of the item you want to remove in square brackets:
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove the first dog from the list.
del dogs[0]
print(dogs)
###Output
_____no_output_____
###Markdown
Removing items by valueYou can also remove an item from a list if you know its value. To do this, we use the *remove()* function. Give the name of the list, followed by the word remove with the value of the item you want to remove in parentheses. Python looks through your list, finds the first item with this value, and removes it.
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove australian cattle dog from the list.
dogs.remove('australian cattle dog')
print(dogs)
###Output
_____no_output_____
###Markdown
Be careful to note, however, that *only* the first item with this value is removed. If you have multiple items with the same value, you will have some items with this value left in your list.
###Code
letters = ['a', 'b', 'c', 'a', 'b', 'c']
# Remove the letter a from the list.
letters.remove('a')
print(letters)
###Output
_____no_output_____
###Markdown
Popping items from a listThere is a cool concept in programming called "popping" items from a collection. Every programming language has some sort of data structure similar to Python's lists. All of these structures can be used as queues, and there are various ways of processing the items in a queue.One simple approach is to start with an empty list, and then add items to that list. When you want to work with the items in the list, you always take the last item from the list, do something with it, and then remove that item. The *pop()* function makes this easy. It removes the last item from the list, and gives it to us so we can work with it. This is easier to show with an example:
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
last_dog = dogs.pop()
print(last_dog)
print(dogs)
###Output
_____no_output_____
###Markdown
This is an example of a first-in, last-out approach. The first item in the list would be the last item processed if you kept using this approach. We will see a full implementation of this approach later on, when we learn about *while* loops.You can actually pop any item you want from a list, by giving the index of the item you want to pop. So we could do a first-in, first-out approach by popping the first item in the list:
###Code
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
first_dog = dogs.pop(0)
print(first_dog)
print(dogs)
###Output
_____no_output_____
###Markdown
** Exercise **:- Write code to delete consecutive duplicates of list elementsFor example, given:x = [1, 1, 2, 3, 4, 5, 6, 6, 6, 7]The result should be:[1, 2, 3, 4, 5, 6, 7] ** Exercise **:- Write code to duplicate the elements of a listFor example, given:x = [1, 1, 2, 3, 4, 5, 6, 6, 6, 7]The result should be:[1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7] Slicing a ListSince a list is a collection of items, we should be able to get any subset of those items. For example, if we want to get just the first three items from the list, we should be able to do so easily. The same should be true for any three items in the middle of the list, or the last three items, or any x items from anywhere in the list. These subsets of a list are called *slices*.To get a subset of a list, we give the position of the first item we want, and the position of the first item we do *not* want to include in the subset. So the slice *list[0:3]* will return a list containing items 0, 1, and 2, but not item 3. Here is how you get a batch containing the first three items.
###Code
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
for user in first_batch:
print(user.title())
###Output
_____no_output_____
###Markdown
If you want to grab everything up to a certain position in the list, you can also leave the first index blank:
###Code
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[:3]
for user in first_batch:
print(user.title())
###Output
_____no_output_____
###Markdown
When we grab a slice from a list, the original list is not affected:
###Code
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
# The original list is unaffected.
for user in usernames:
print(user.title())
###Output
_____no_output_____
###Markdown
We can get any segment of a list we want, using the slice method:
###Code
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab a batch from the middle of the list.
middle_batch = usernames[1:4]
for user in middle_batch:
print(user.title())
###Output
_____no_output_____
###Markdown
To get all items from one position in the list to the end of the list, we can leave off the second index:
###Code
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab all users from the third to the end.
end_batch = usernames[2:]
for user in end_batch:
print(user.title())
###Output
_____no_output_____
###Markdown
Copying a list (Please pay attention to this section)You can use the slice notation to make a copy of a list, by leaving out both the starting and the ending index. This causes the slice to consist of everything from the first item to the last, which is the entire list.
###Code
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Make a copy of the list.
copied_usernames = usernames[:]
print("The full copied list:\n\t", copied_usernames)
# Remove the first two users from the copied list.
del copied_usernames[0]
del copied_usernames[0]
print("\nTwo users removed from copied list:\n\t", copied_usernames)
# The original list is unaffected.
print("\nThe original list:\n\t", usernames)
###Output
_____no_output_____
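###Markdown
To see why the slice matters, compare it with a plain assignment. Assignment does not copy the list; it just gives the same list a second name, so changes show up under both names. A short sketch:
###Code
usernames = ['bernice', 'cody', 'aaron']
# This is NOT a copy; both names refer to the same list object.
alias = usernames
# This IS a copy; it is an independent list.
copied = usernames[:]
del alias[0]
print(usernames)  # ['cody', 'aaron'] -- the original changed too
print(copied)     # ['bernice', 'cody', 'aaron'] -- the copy is unaffected
###Output
_____no_output_____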
###Markdown
Numerical listsThere is nothing special about lists of numbers, but there are some functions you can use to make working with numerical lists more efficient. Let's make a list of the first ten numbers, and start working with it to see how we can use numbers in a list.
###Code
# Print out the first ten numbers.
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for number in numbers:
print(number)
###Output
_____no_output_____
###Markdown
** Exercise **:- Shift the elements of an integer list by one to the left, and then increment the value of each cell by its new index (indices start at 0). The first (leftmost) element of the original list wraps around to the end after shifting.For example,x = [1, 2, 3, 4]After shifting, the result should be:[2, 3, 4, 1]The final result should be:[2, 4, 6, 4] The *range()* functionThis works, but it is not very efficient if we want to work with a large set of numbers. The *range()* function helps us generate long lists of numbers. Here are two ways to do the same thing, using the *range* function.
###Code
# Print the first ten numbers.
for number in range(1,11):
print(number)
###Output
_____no_output_____
###Markdown
The range function takes in a starting number, and an end number. You get all integers, up to but not including the end number. You can also add a *step* value, which tells the *range* function how big of a step to take between numbers:
###Code
# Print the first ten odd numbers.
for number in range(1,21,2):
print(number)
###Output
_____no_output_____
###Markdown
If we want to store these numbers in a list, we can use the *list()* function. This function takes in a range, and turns it into a list:
###Code
# Create a list of the first ten numbers.
numbers = list(range(1,11))
print(numbers)
###Output
_____no_output_____
###Markdown
This is incredibly powerful; we can now create a list of the first million numbers, just as easily as we made a list of the first ten numbers. It doesn't really make sense to print the million numbers here, but we can show that the list really does have one million items in it, and we can print the last ten items to show that the list is correct.
###Code
# Store the first million numbers in a list.
numbers = list(range(1,1000001))
# Show the length of the list:
print("The list 'numbers' has " + str(len(numbers)) + " numbers in it.")
# Show the last ten numbers:
print("\nThe last ten numbers in the list are:")
for number in numbers[-10:]:
print(number)
###Output
_____no_output_____
###Markdown
There are two things here that might be a little unclear. The expression str(len(numbers))takes the length of the *numbers* list, and turns it into a string that can be printed.The expression numbers[-10:]gives us a *slice* of the list. The index `-1` is the last item in the list, and the index `-10` is the item ten places from the end of the list. So the slice `numbers[-10:]` gives us everything from that item to the end of the list. ** Exercise **:- Split a list into two. The first list should contain N randomly drawn elements from the original list of length L; whereas the second list should contain the remaining (L-N) elements in the original list.You can use the following code to generate a list of random integers (please modify it according to your need):
###Code
import numpy as np
np.random.randint(low=0,high=10,size=10)
###Output
_____no_output_____
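###Markdown
One possible sketch for this exercise (assuming the original list is called x and N is the number of elements to draw) is to shuffle the positions and then slice them:
###Code
import numpy as np

x = list(np.random.randint(low=0, high=10, size=10))
N = 4
# Shuffle the positions, then split them into the first N and the rest.
positions = np.random.permutation(len(x))
first_part = [x[i] for i in positions[:N]]
second_part = [x[i] for i in positions[N:]]
print(first_part)
print(second_part)
###Output
_____no_output_____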
###Markdown
The *min()*, *max()*, and *sum()* functionsThere are three functions you can easily use with numerical lists. As you might expect, the *min()* function returns the smallest number in the list, the *max()* function returns the largest number in the list, and the *sum()* function returns the total of all numbers in the list.
###Code
ages = [23, 16, 14, 28, 19, 11, 38]
youngest = min(ages)
oldest = max(ages)
total_years = sum(ages)
print("Our youngest reader is " + str(youngest) + " years old.")
print("Our oldest reader is " + str(oldest) + " years old.")
print("Together, we have " + str(total_years) + " years worth of life experience.")
###Output
_____no_output_____
###Markdown
List comprehensionsIf you are brand new to programming, list comprehensions may look confusing at first. They are a shorthand way of creating and working with lists. It is good to be aware of list comprehensions, because you will see them in other people's code, and they are really useful when you understand how to use them. That said, if they don't make sense to you yet, don't worry about using them right away. When you have worked with enough lists, you will want to use comprehensions. For now, it is good enough to know they exist, and to recognize them when you see them. If you like them, go ahead and start trying to use them now. (Using list comprehensions is a more idiomatic way of programming in Python.) Numerical comprehensionsLet's consider how we might make a list of the first ten square numbers. We could do it like this:
###Code
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
new_square = number**2
squares.append(new_square)
# Show that our list is correct.
for square in squares:
print(square)
###Output
_____no_output_____
###Markdown
This should make sense at this point. If it doesn't, go over the code with these thoughts in mind:- We make an empty list called *squares* that will hold the values we are interested in.- Using the *range()* function, we start a loop that will go through the numbers 1-10.- Each time we pass through the loop, we find the square of the current number by raising it to the second power.- We add this new value to our list *squares*.- We go through our newly-defined list and print out each square.Now let's make this code more efficient. We don't really need to store the new square in its own variable *new_square*; we can just add it directly to the list of squares. The line new_square = number**2is taken out, and the next line takes care of the squaring:
###Code
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
squares.append(number**2)
# Show that our list is correct.
for square in squares:
print(square)
###Output
_____no_output_____
###Markdown
List comprehensions allow us to collapse the first three lines of code into one line. Here's what it looks like:
###Code
# Store the first ten square numbers in a list.
squares = [number**2 for number in range(1,11)]
# Show that our list is correct.
for square in squares:
print(square)
###Output
_____no_output_____
###Markdown
It should be pretty clear that this code is more efficient than our previous approach, but it may not be clear what is happening. Let's take a look at everything that is happening in that first line:We define a list called *squares*.Look at the second part of what's in square brackets:```python for number in range(1,11)```This sets up a loop that goes through the numbers 1-10, storing each value in the variable *number*. Now we can see what happens to each *number* in the loop:```python number**2```Each number is raised to the second power, and this is the value that is stored in the list we defined. We might read this line in the following way:squares = [raise *number* to the second power, for each *number* in the range 1-10]Or more mathematically:\begin{align}\text{squares} &= \{x^2 \mid x \in \mathbb{Z} \land 1 \le x \le 10\}\end{align} It is probably helpful to see a few more examples of how comprehensions can be used. Let's try to make the first ten even numbers, the longer way:
###Code
# Make an empty list that will hold the even numbers.
evens = []
# Loop through the numbers 1-10, double each one, and add it to our list.
for number in range(1,11):
evens.append(number*2)
# Show that our list is correct:
for even in evens:
print(even)
###Output
_____no_output_____
###Markdown
Here's how we might think of doing the same thing, using a list comprehension:evens = [multiply each *number* by 2, for each *number* in the range 1-10]Here is the same line in code:
###Code
# Make a list of the first ten even numbers.
evens = [number*2 for number in range(1,11)]
for even in evens:
print(even)
###Output
_____no_output_____
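###Markdown
Comprehensions can also include a condition at the end, which keeps only the items you want. A small sketch, keeping just the squares that happen to be even:
###Code
# Keep only the even squares of the numbers 1-10.
even_squares = [number**2 for number in range(1,11) if number**2 % 2 == 0]
print(even_squares)
###Output
_____no_output_____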
###Markdown
Non-numerical comprehensionsWe can use comprehensions with non-numerical lists as well. In this case, we will create an initial list, and then use a comprehension to make a second list from the first one. Here is a simple example, without using comprehensions:
###Code
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = []
for student in students:
great_students.append(student.title() + " the great!")
# Let's greet each great student.
for great_student in great_students:
print("Hello, " + great_student)
###Output
_____no_output_____
###Markdown
To use a comprehension in this code, we want to write something like this:great_students = [add 'the great' to each *student*, for each *student* in the list of *students*]Here's what it looks like:
###Code
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = [student.title() + " the great!" for student in students]
# Let's greet each great student.
for great_student in great_students:
print("Hello, " + great_student)
###Output
_____no_output_____
###Markdown
Strings as ListsNow that you have some familiarity with lists, we can take a second look at strings. A string is really a list of characters, so many of the concepts from working with lists behave the same with strings. Strings as a list of charactersWe can loop through a string using a *for* loop, just like we loop through a list:
###Code
message = "Hello!"
for letter in message:
print(letter)
###Output
_____no_output_____
###Markdown
We can create a list from a string. The list will have one element for each character in the string:
###Code
message = "Hello world!"
message_list = list(message)
print(message_list)
###Output
_____no_output_____
###Markdown
Slicing stringsWe can access any character in a string by its position, just as we access individual items in a list:
###Code
message = "Hello World!"
first_char = message[0]
last_char = message[-1]
print(first_char, last_char)
###Output
_____no_output_____
###Markdown
We can extend this to take slices of a string:
###Code
message = "Hello World!"
first_three = message[:3]
last_three = message[-3:]
print(first_three, last_three)
###Output
_____no_output_____
###Markdown
Finding substringsNow that you have seen what indexes mean for strings, we can search for *substrings*. A substring is a series of characters that appears in a string.You can use the *in* keyword to find out whether a particular substring appears in a string:
###Code
message = "I like cats and dogs."
dog_present = 'dog' in message
print(dog_present)
###Output
_____no_output_____
###Markdown
If you want to know where a substring appears in a string, you can use the *find()* method. The *find()* method tells you the index at which the substring begins.
###Code
message = "I like cats and dogs."
dog_index = message.find('dog')
print(dog_index)
###Output
_____no_output_____
###Markdown
Note, however, that this function only returns the index of the first appearance of the substring you are looking for. If the substring appears more than once, you will miss the other substrings.
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
dog_index = message.find('dog')
print(dog_index)
###Output
_____no_output_____
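###Markdown
If you need every position where the substring appears, one simple sketch is to check each possible starting index yourself:
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
# Collect every index at which the substring 'dog' starts.
dog_indexes = [index for index in range(len(message)) if message.startswith('dog', index)]
print(dog_indexes)
###Output
_____no_output_____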
###Markdown
If you want to find the last appearance of a substring, you can use the *rfind()* function:
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
last_dog_index = message.rfind('dog')
print(last_dog_index)
###Output
_____no_output_____
###Markdown
Replacing substringsYou can use the *replace()* function to replace any substring with another substring. To use the *replace()* function, give the substring you want to replace, and then the substring you want to replace it with. You also need to store the new string, either in the same string variable or in a new variable.
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
message = message.replace('dog', 'snake')
print(message)
###Output
_____no_output_____
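###Markdown
 The *replace()* method also accepts an optional third argument that limits how many replacements are made. This is a brief aside added here; in the example only the first 'dog' is replaced.
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
# The third argument caps the number of replacements at 1.
message = message.replace('dog', 'snake', 1)
print(message)
###Output
_____no_output_____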
###Markdown
Counting substrings

If you want to know how many times a substring appears within a string, you can use the *count()* method.
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
number_dogs = message.count('dog')
print(number_dogs)
###Output
_____no_output_____
###Markdown
Splitting strings

Strings can be split into a set of substrings when they are separated by a repeated character. If a string consists of a simple sentence, the string can be split based on spaces. The *split()* function returns a list of substrings, and it takes one argument: the character that separates the parts of the string.
###Code
message = "I like cats and dogs, but I'd much rather own a dog."
words = message.split(' ')
print(words)
###Output
_____no_output_____
###Markdown
Notice that the punctuation is left in the substrings.

It is more common to split strings that are really lists, separated by something like a comma. The *split()* function gives you an easy way to turn a comma-separated string (which is hard to work with directly in Python) into a list. Once you have your data in a list, you can work with it in much more powerful ways.
###Code
animals = "dog, cat, tiger, mouse, liger, bear"
# Rewrite the string as a list, and store it in the same variable
animals = animals.split(',')
print(animals)
###Output
_____no_output_____
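###Markdown
 Look closely at that output: each piece after the first keeps the space that followed the comma. One possible cleanup (a small sketch added here) is to call *strip()* on each piece inside a comprehension.
###Code
animals = "dog, cat, tiger, mouse, liger, bear"
# Split on commas, then strip the surrounding whitespace from each piece.
animals = [animal.strip() for animal in animals.split(',')]
print(animals)
###Output
_____no_output_____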
###Markdown
Notice that in this case, the spaces are kept at the start of each substring, which is why stripping them, as shown above, is often useful. It is a good idea to test the output of the *split()* function and make sure it is doing what you want with the data you are interested in.

One use of this is to work with spreadsheet data in your Python programs. Most spreadsheet applications allow you to dump your data into a comma-separated text file. You can read this file into your Python program, or even copy and paste from the text file into your program file, and then turn the data into a list. You can then process your spreadsheet data using a *for* loop.

Other string methods

There are a number of [other string methods](https://docs.python.org/3.8/library/stdtypes.html#string-methods) that we won't go into right here, but you might want to take a look at them. Most of these methods should make sense to you at this point. You might not have use for any of them right now, but it is good to know what you can do with strings. This way you will have a sense of how to solve certain problems, even if it means referring back to the list of methods to remind yourself how to write the correct syntax when you need it.

Tuples

Tuples are basically lists that can never be changed. Lists are quite dynamic; they can grow as you append and insert items, and they can shrink as you remove items. You can modify any element you want to in a list. Sometimes we like this behavior, but other times we may want to ensure that no user or no part of a program can change a list. That's what tuples are for.

Technically, lists are *mutable* objects and tuples are *immutable* objects. Mutable objects can change (think of *mutations*), and immutable objects can not change.

Defining tuples, and accessing elements

You define a tuple just like you define a list, except you use parentheses instead of square brackets. Once you have a tuple, you can access individual elements just like you can with a list, and you can loop through the tuple with a *for* loop:
###Code
colors = ('red', 'green', 'blue')
print("The first color is: " + colors[0])
print("\nThe available colors are:")
for color in colors:
print("- " + color)
###Output
_____no_output_____
###Markdown
If you try to add something to a tuple, you will get an error:
###Code
colors = ('red', 'green', 'blue')
colors.append('purple')
###Output
_____no_output_____
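###Markdown
 If you really do need different values, one common workaround (a short aside added here) is to build a list from the tuple, modify the list, and create a new tuple from it. The original tuple itself is never changed.
###Code
colors = ('red', 'green', 'blue')
# Convert to a list, change it, then build a fresh tuple.
color_list = list(colors)
color_list.append('purple')
new_colors = tuple(color_list)
print(new_colors)
###Output
_____no_output_____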
###Markdown
The same kind of thing happens when you try to remove something from a tuple, or modify one of its elements; the conversion trick shown above is the usual workaround. Once you define a tuple, you can be confident that its values will not change.

Using tuples to make strings

We have seen that it is pretty useful to be able to mix raw English strings with values that are stored in variables, as in the following:
###Code
animal = 'dog'
print("I have a " + animal + ".")
###Output
_____no_output_____
###Markdown
This was especially useful when we had a series of similar statements to make:
###Code
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a " + animal + ".")
###Output
_____no_output_____
###Markdown
I like this approach of using the plus sign to build strings because it is fairly intuitive. We can see that we are adding several smaller strings together to make one longer string. This is intuitive, but it is a lot of typing. There is a shorter way to do this, using *placeholders*.

Python ignores most of the characters we put inside of strings. There are a few characters that Python pays attention to, as we saw with strings such as "\t" and "\n". Python also pays attention to "%s" and "%d". These are placeholders. When Python sees the "%s" placeholder, it looks ahead and pulls in the first argument after the % sign:
###Code
animal = 'dog'
print("I have a %s." % animal)
###Output
_____no_output_____
###Markdown
This is a much cleaner way of generating strings that include values. We compose our sentence all in one string, and then tell Python what values to pull into the string, in the appropriate places.This is called *string formatting*, and it looks the same when you use a list:
###Code
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a %s." % animal)
###Output
_____no_output_____
###Markdown
If you have more than one value to put into the string you are composing, you have to pack the values into a tuple:
###Code
animals = ['dog', 'cat', 'bear']
print("I have a %s, a %s, and a %s." % (animals[0], animals[1], animals[2]))
###Output
_____no_output_____
###Markdown
String formatting with numbers

If you recall, printing a number with a string can cause an error:
###Code
number = 23
print("My favorite number is " + number + ".")
###Output
_____no_output_____
###Markdown
Python knows that you could be talking about the value 23, or the characters '23'. So it throws an error, forcing us to clarify that we want Python to treat the number as a string. We do this by *casting* the number into a string using the *str()* function:
###Code
number = 23
print("My favorite number is " + str(number) + ".")
###Output
_____no_output_____
###Markdown
The format string "%d" takes care of this for us. Watch how clean this code is:
###Code
number = 23
print("My favorite number is %d." % number)
###Output
_____no_output_____
###Markdown
If you want to use a series of numbers, you pack them into a tuple just like we saw with strings:
###Code
numbers = [7, 23, 42]
print("My favorite numbers are %d, %d, and %d." % (numbers[0], numbers[1], numbers[2]))
###Output
_____no_output_____
###Markdown
Just for clarification, look at how much longer the code is if you use concatenation instead of string formatting:
###Code
numbers = [7, 23, 42]
print("My favorite numbers are " + str(numbers[0]) + ", " + str(numbers[1]) + ", and " + str(numbers[2]) + ".")
###Output
_____no_output_____
###Markdown
You can mix string and numerical placeholders in any order you want.
###Code
names = ['Ada', 'ever']
numbers = [23, 2]
print("%s's favorite number is %d, and %s's favorite number is %d." % (names[0].title(), numbers[0], names[1].title(), numbers[1]))
###Output
_____no_output_____
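###Markdown
 The next section mentions that Python 3 has more sophisticated formatting tools. As a brief preview added here for reference (not part of the original lesson), the same kind of sentence can be written with the *format()* method or an f-string.
###Code
names = ['Ada', 'ever']
numbers = [23, 2]
# The format() method fills the {} placeholders in order.
print("{}'s favorite number is {}.".format(names[0].title(), numbers[0]))
# An f-string evaluates the expressions inside the braces directly.
print(f"{names[1].title()}'s favorite number is {numbers[1]}.")
###Output
_____no_output_____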
###Markdown
There are more sophisticated ways to do string formatting in Python 3 (a brief preview appears just above), but we will save those for later because they are a bit less intuitive than this approach. For now, you can use whichever approach consistently gets you the output that you want to see.

If Statements

By allowing you to respond selectively to different situations and conditions, if statements open up whole new possibilities for your programs. In this section, you will learn how to test for certain conditions, and then respond in appropriate ways to those conditions.

What is an *if* statement?

An *if* statement tests for a condition, and then responds to that condition. If the condition is true, then whatever action is listed next gets carried out. You can test for multiple conditions at the same time, and respond appropriately to each condition.

Here is an example that shows a number of the desserts I like. It lists those desserts, but lets you know which one is my favorite.
###Code
# A list of desserts I like.
desserts = ['ice cream', 'chocolate', 'apple crisp', 'cookies']
favorite_dessert = 'apple crisp'
# Print the desserts out, but let everyone know my favorite dessert.
for dessert in desserts:
if dessert == favorite_dessert:
# This dessert is my favorite, let's let everyone know!
print("%s is my favorite dessert!" % dessert.title())
else:
# I like these desserts, but they are not my favorite.
print("I like %s." % dessert)
###Output
_____no_output_____
###Markdown
What happens in this program?
- The program starts out with a list of desserts, and one dessert is identified as a favorite.
- The for loop runs through all the desserts.
- Inside the for loop, each item in the list is tested.
    - If the current value of *dessert* is equal to the value of *favorite_dessert*, a message is printed that this is my favorite.
    - If the current value of *dessert* is not equal to the value of *favorite_dessert*, a message is printed that I just like the dessert.

You can test as many conditions as you want in an if statement, as you will see in a little bit.

Logical Tests

Every if statement evaluates to *True* or *False*. *True* and *False* are Python keywords, which have special meanings attached to them. You can test for the following conditions in your if statements:
- [equality](#equality) (==)
- [inequality](#inequality) (!=)
- [other inequalities](#other_inequalities)
    - greater than (>)
    - greater than or equal to (>=)
    - less than (<)
    - less than or equal to (<=)
- [You can test if an item is **in** a list.](#in_list)

Equality

Two items are *equal* if they have the same value. You can test for equality between numbers, strings, and a number of other objects which you will learn about later. Some of these results may be surprising, so take a careful look at the examples below.

In Python, as in many programming languages, two equals signs tests for equality.

**Watch out!** Be careful of accidentally using one equals sign, which can really throw things off because that one equals sign actually sets your item to the value you are testing for!
###Code
5 == 5
3 == 5
5 == 5.0
'ada' == 'ada'
'Ada' == 'ada'
'Ada'.lower() == 'ada'.lower()
'5' == 5
'5' == str(5)
###Output
_____no_output_____
###Markdown
Inequality

Two items are *unequal* if they do not have the same value. In Python, we test for inequality using the exclamation point and one equals sign.

Sometimes you want to test for equality and, if that fails, assume inequality. Sometimes it makes more sense to test for inequality directly.
###Code
3 != 5
5 != 5
'Ada' != 'ada'
###Output
_____no_output_____
###Markdown
Other Inequalities

greater than
###Code
5 > 3
###Output
_____no_output_____
###Markdown
greater than or equal to
###Code
5 >= 3
3 >= 3
###Output
_____no_output_____
###Markdown
less than
###Code
3 < 5
###Output
_____no_output_____
###Markdown
less than or equal to
###Code
3 <= 5
3 <= 3
###Output
_____no_output_____
###Markdown
Checking if an item is **in** a list

You can check if an item is in a list using the **in** keyword.
###Code
vowels = ['a', 'e', 'i', 'o', 'u']
'a' in vowels
vowels = ['a', 'e', 'i', 'o', 'u']
'b' in vowels
###Output
_____no_output_____
###Markdown
The if-elif...else chain

You can test whatever series of conditions you want to, and you can test your conditions in any combination you want.

Simple if statements

The simplest test has a single **if** statement, and a single statement to execute if the condition is **True**.
###Code
dogs = ['willie', 'hootz', 'peso', 'juno']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
###Output
_____no_output_____
###Markdown
In this situation, nothing happens if the test does not pass.
###Code
dogs = ['willie', 'hootz']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
###Output
_____no_output_____
###Markdown
Notice that there are no errors. The condition `len(dogs) > 3` evaluates to False, and the program moves on to any lines after the **if** block.

if-else statements

Many times you will want to respond in two possible ways to a test. If the test evaluates to **True**, you will want to do one thing. If the test evaluates to **False**, you will want to do something else. The **if-else** structure lets you do that easily. Here's what it looks like:
###Code
dogs = ['willie', 'hootz', 'peso', 'juno']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
Our results have not changed in this case, because if the test evaluates to **True** only the statements under the **if** statement are executed. The statements under **else** are only executed if the test fails:
###Code
dogs = ['willie', 'hootz']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
The test evaluated to **False**, so only the statement under `else` is run.

if-elif...else chains

Many times, you will want to test a series of conditions, rather than just an either-or situation. You can do this with a series of if-elif-else statements. There is no limit to how many conditions you can test. You always need one if statement to start the chain, and you can never have more than one else statement. But you can have as many elif statements as you want.
###Code
dogs = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
It is important to note that in situations like this, only the first test is evaluated. In an if-elif-else chain, once a test passes the rest of the conditions are ignored.
###Code
dogs = ['willie', 'hootz', 'peso', 'monty']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
The first test failed, so Python evaluated the second test. That test passed, so the statement corresponding to `len(dogs) >= 3` is executed.
###Code
dogs = ['willie', 'hootz']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
In this situation, the first two tests fail, so the statement in the else clause is executed. Note that this statement would be executed even if there are no dogs at all:
###Code
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
Note that you don't have to take any action at all when you start a series of if statements. You could simply do nothing in the situation that there are no dogs by replacing the `else` clause with another `elif` clause:
###Code
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
elif len(dogs) >= 1:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
In this case, we only print a message if there is at least one dog present. Of course, you could add a new `else` clause to respond to the situation in which there are no dogs at all:
###Code
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
elif len(dogs) >= 1:
print("Okay, this is a reasonable number of dogs.")
else:
print("I wish we had a dog here.")
###Output
_____no_output_____
###Markdown
As you can see, the if-elif-else chain lets you respond in very specific ways to any given situation.

More than one passing test

In all of the examples we have seen so far, only one test can pass. As soon as the first test passes, the rest of the tests are ignored. This is really good, because it allows our code to run more efficiently. Many times only one condition can be true, so testing every condition after one passes would be meaningless.

There are situations in which you want to run a series of tests, where every single test runs. These are situations where any or all of the tests could pass, and you want to respond to each passing test. Consider the following example, where we want to greet each dog that is present:
###Code
dogs = ['willie', 'hootz']
if 'willie' in dogs:
print("Hello, Willie!")
if 'hootz' in dogs:
print("Hello, Hootz!")
if 'peso' in dogs:
print("Hello, Peso!")
if 'monty' in dogs:
print("Hello, Monty!")
###Output
_____no_output_____
###Markdown
If we had done this using an if-elif-else chain, only the first dog that is present would be greeted:
###Code
dogs = ['willie', 'hootz']
if 'willie' in dogs:
print("Hello, Willie!")
elif 'hootz' in dogs:
print("Hello, Hootz!")
elif 'peso' in dogs:
print("Hello, Peso!")
elif 'monty' in dogs:
print("Hello, Monty!")
###Output
_____no_output_____
###Markdown
Of course, this could be written much more cleanly using lists and for loops. See if you can follow this code.
###Code
dogs_we_know = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']
dogs_present = ['willie', 'hootz']
# Go through all the dogs that are present, and greet the dogs we know.
for dog in dogs_present:
if dog in dogs_we_know:
print("Hello, %s!" % dog.title())
###Output
_____no_output_____
###Markdown
This is the kind of code you should be aiming to write. It is fine to come up with code that is less efficient at first. When you notice yourself writing the same kind of code repeatedly in one program, look to see if you can use a loop or a function to make your code more efficient.

True and False values

Every value can be evaluated as True or False. The general rule is that any non-zero or non-empty value will evaluate to True. If you are ever unsure, you can open a Python terminal and write two lines to find out if the value you are considering is True or False. Take a look at the following examples, keep them in mind, and test any value you are curious about. I am using a slightly longer test just to make sure something gets printed each time.
###Code
if 0:
print("This evaluates to True.")
else:
print("This evaluates to False.")
if 1:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Arbitrary non-zero numbers evaluate to True.
if 1253756:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Negative numbers are not zero, so they evaluate to True.
if -1:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# An empty string evaluates to False.
if '':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Any other string, including a space, evaluates to True.
if ' ':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Any other string, including a space, evaluates to True.
if 'hello':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# None is a special object in Python. It evaluates to False.
if None:
print("This evaluates to True.")
else:
print("This evaluates to False.")
###Output
_____no_output_____
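###Markdown
 A shorter way to inspect truthiness (a quick aside added here) is to pass the value straight to *bool()*, which returns True or False directly.
###Code
# bool() reports how each value would behave in an if test.
print(bool(0), bool(1), bool(''), bool(' '), bool(None), bool([]))
###Output
_____no_output_____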
###Markdown
** Exercise (the exercises are getting harder from now on) **:
- Given two strings *a* and *b*, find the longest substring in *a* that can be found in *b*. The index of the first character of the matched longest substring in *b* should also be reported.

For example,

    a = "a dream"
    b = "I have a dream that one day this nation will rise up"

The result is "a dream". The index of the 'a' in "a dream" is 7 in b.

While Loops

A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing.

General syntax
###Code
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
    # Run the game.
    # At some point, the game ends and game_active will be set to False.
    # When that happens, the loop will stop executing.
    game_active = False  # placeholder line so this template is actually runnable

# Do anything else you want done after the loop runs.
###Output
_____no_output_____
###Markdown
- Every while loop needs an initial condition that starts out true.
- The `while` statement includes a condition to test.
- All of the code in the loop will run as long as the condition remains true.
- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.
- Any code that is defined after the loop will run at this point.

Here is a simple example, showing how a game will stay active as long as the player has enough power.
###Code
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
###Output
_____no_output_____
###Markdown
** Exercise **:
- Write the functionally equivalent while-loop version for the following code:

```
sum = 0
for i in range(1, 100, 2):
    sum = sum + i
```

** Exercise **:
- Write the functionally equivalent while-loop version for the following code:

```
sum = 9999
for i in range(100, 0, -1):
    sum = sum - i
```

** Exercise **:
- Write the functionally equivalent while-loop version for the following code:

```
sum = 9999
for i in range(100, 0):
    sum = sum - i
```

** Exercise **:
- State whether the following code fragments are functionally equivalent:

```
i = 0
s = 0
for i in range(0, 9999):
    if i % 4 != 0:
        s += i
```

```
i = 0
s = 0
while i < 9999:
    if i % 4 != 0:
        s += i
    i +=1
```

```
i = 0
s = 0
while ++i < 9999:
    if i % 4 != 0:
        s += i
```

Accidental Infinite loops

Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.

Take a look at the following example. Can you pick out why this loop will never stop?

```python
# /////////////////////////////////////////
# /// don't execute this piece of code! ///
# /////////////////////////////////////////
current_number = 1

# Count up to 5, printing the number each time.
while current_number <= 5:
    print(current_number)
```
###Code
1
1
1
1
1
...
###Output
_____no_output_____
###Markdown
I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:
- On most systems, Ctrl-C will interrupt the currently running program.
- If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window.

The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
###Output
_____no_output_____
###Markdown
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made.

Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on.

Here is one more example of an accidental infinite loop:

```python
# /////////////////////////////////////////
# /// don't execute this piece of code! ///
# /////////////////////////////////////////
current_number = 1

# Count up to 5, printing the number each time.
while current_number <= 5:
    print(current_number)
    current_number = current_number - 1
```
###Code
1
0
-1
-2
-3
...
###Output
_____no_output_____
###Markdown
In this example, we accidentally started counting down. The value of `current_number` will always be less than 5, so the loop will run forever.

Introducing Functions

One of the core principles of any programming language is, "Don't Repeat Yourself". If you have an action that should occur many times, you can define that action once and then call that code whenever you need to carry out that action.

We are already repeating ourselves in our code, so this is a good time to introduce simple functions. Functions mean less work for us as programmers, and effective use of functions results in code that is less error-prone. Functions are a set of actions that we group together, and give a name to. You have already used a number of functions from the core Python language, such as *string.title()* and *list.sort()*. We can define our own functions, which allows us to "teach" Python new behavior.

General Syntax

A general function looks something like this:
###Code
# Let's define a function.
def function_name(argument_1, argument_2):
# Do whatever we want this function to do,
# using argument_1 and argument_2
# Use function_name to call the function.
function_name(value_1, value_2)
###Output
_____no_output_____
###Markdown
This code will not run, but it shows how functions are used in general.

- **Defining a function**
    - Give the keyword `def`, which tells Python that you are about to *define* a function.
    - Give your function a name. A variable name tells you what kind of value the variable contains; a function name should tell you what the function does.
    - Give names for each value the function needs in order to do its work.
        - These are basically variable names, but they are only used in the function.
        - They can be different names than what you use in the rest of your program.
        - These are called the function's *arguments*.
    - Make sure the function definition line ends with a colon.
    - Inside the function, write whatever code you need to make the function do its work.
- **Using your function**
    - To *call* your function, write its name followed by parentheses.
    - Inside the parentheses, give the values you want the function to work with.
        - These can be variables such as `current_name` and `current_age`, or they can be actual values such as 'ada' and 5.
###Code
print("You are doing good work, Adriana!")
print("Thank you very much for your efforts on this project.")
print("\nYou are doing good work, Billy!")
print("Thank you very much for your efforts on this project.")
print("\nYou are doing good work, Caroline!")
print("Thank you very much for your efforts on this project.")
###Output
_____no_output_____
###Markdown
Functions take repeated code, put it in one place, and then you call that code when you want to use it. Here's what the same program looks like with a function.
###Code
def thank_you(name):
# This function prints a two-line personalized thank you message.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
thank_you('Adriana')
thank_you('Billy')
thank_you('Caroline')
###Output
_____no_output_____
###Markdown
In our original code, each pair of print statements was run three times, and the only difference was the name of the person being thanked. When you see repetition like this, you can usually make your program more efficient by defining a function.

The keyword *def* tells Python that we are about to define a function. We give our function a name, *thank\_you()* in this case. A variable's name should tell us what kind of information it holds; a function's name should tell us what the function does. We then put parentheses. Inside these parentheses we create variable names for any variable the function will need to be given in order to do its job. In this case the function will need a name to include in the thank you message. The variable `name` will hold the value that is passed into the function *thank\_you()*.

To use a function we give the function's name, and then put any values the function needs in order to do its work. In this case we call the function three times, each time passing it a different name.

A common error

A function must be defined before you use it in your program. For example, putting the function at the end of the program would not work.
###Code
thank_you('Adriana')
thank_you('Billy')
thank_you('Caroline')
def thank_you(name):
# This function prints a two-line personalized thank you message.
print("\nYou are doing good work, %s!" % name)
print("Thank you very much for your efforts on this project.")
###Output
_____no_output_____
###Markdown
On the first line we ask Python to run the function *thank\_you()*, but Python does not yet know how to run this function. We define our functions at the beginning of our programs, and then we can use them when we need to.

A second example

When we introduced the different methods for [sorting a list](Python%20-%20Hands-on%20Introduction%20to%20Python%20and%20Machine%20Learning.ipynb#Sorting-a-List), our code got very repetitive. It takes two lines of code to print a list using a for loop, so these two lines are repeated whenever you want to print out the contents of a list. This is the perfect opportunity to use a function, so let's see how the code looks with a function.

First, let's see the code we had without a function:
###Code
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
# Display the list in its current order.
print("Our students are currently in alphabetical order.")
for student in students:
print(student.title())
# Put students in reverse alphabetical order.
students.sort(reverse=True)
# Display the list in its current order.
print("\nOur students are now in reverse alphabetical order.")
for student in students:
print(student.title())
###Output
_____no_output_____
###Markdown
Here's what the same code looks like, using a function to print out the list:
###Code
def show_students(students, message):
# Print out a message, and then the list of students
print(message)
for student in students:
print(student.title())
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
show_students(students, "Our students are currently in alphabetical order.")
#Put students in reverse alphabetical order.
students.sort(reverse=True)
show_students(students, "\nOur students are now in reverse alphabetical order.")
###Output
_____no_output_____
###Markdown
This is much cleaner code. We have an action we want to take, which is to show the students in our list along with a message. We give this action a name, *show\_students()*. This function needs two pieces of information to do its work, the list of students and a message to display. Inside the function, the code for printing the message and looping through the list is exactly as it was in the non-function code.

Now the rest of our program is cleaner, because it gets to focus on the things we are changing in the list, rather than having code for printing the list. We define the list, then we sort it and call our function to print the list. We sort it again, and then call the printing function a second time, with a different message. This is much more readable code.

Advantages of using functions

You might be able to see some advantages of using functions, through this example:
- We write a set of instructions once. We save some work in this simple example, and we save even more work in larger programs.
- When our function works, we don't have to worry about that code anymore. Every time you repeat code in your program, you introduce an opportunity to make a mistake. Writing a function means there is one place to fix mistakes, and when those bugs are fixed, we can be confident that this function will continue to work correctly.
- We can modify our function's behavior, and that change takes effect every time the function is called. This is much better than deciding we need some new behavior, and then having to change code in many different places in our program. For a quick example, let's say we decide our printed output would look better with some form of a bulleted list. Without functions, we'd have to change each print statement. With a function, we change just the print statement in the function:
###Code
def show_students(students, message):
# Print out a message, and then the list of students
print(message)
for student in students:
print("- " + student.title())
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
show_students(students, "Our students are currently in alphabetical order.")
#Put students in reverse alphabetical order.
students.sort(reverse=True)
show_students(students, "\nOur students are now in reverse alphabetical order.")
###Output
_____no_output_____
###Markdown
You can think of functions as a way to "teach" Python some new behavior. In this case, we taught Python how to create a list of students using hyphens; now we can tell Python to do this with our students whenever we want to.

Returning a Value

Each function you create can return a value. This can be in addition to the primary work the function does, or it can be the function's main job. The following function takes in a number, and returns the corresponding word for that number:
###Code
def get_number_word(number):
# Takes in a numerical value, and returns
# the word corresponding to that number.
if number == 1:
return 'one'
elif number == 2:
return 'two'
elif number == 3:
return 'three'
# ...
# Let's try out our function.
for current_number in range(0,4):
number_word = get_number_word(current_number)
print(current_number, number_word)
###Output
_____no_output_____
###Markdown
It's helpful sometimes to see programs that don't quite work as they are supposed to, and then see how those programs can be improved. In this case, there are no Python errors; all of the code has proper Python syntax. But there is a logical error, in the first line of the output.

We want to either not include 0 in the range we send to the function, or have the function return something other than `None` when it receives a value that it doesn't know. Let's teach our function the word 'zero', but let's also add an `else` clause that returns a more informative message for numbers that are not in the if-chain.
###Code
def get_number_word(number):
# Takes in a numerical value, and returns
# the word corresponding to that number.
if number == 0:
return 'zero'
elif number == 1:
return 'one'
elif number == 2:
return 'two'
elif number == 3:
return 'three'
else:
return "I'm sorry, I don't know that number."
# Let's try out our function.
for current_number in range(0,6):
number_word = get_number_word(current_number)
print(current_number, number_word)
###Output
_____no_output_____
###Markdown
If you use a return statement in one of your functions, keep in mind that the function stops executing as soon as it hits a return statement. For example, we can add a line to the *get\_number\_word()* function that will never execute, because it comes after the function has returned a value:
###Code
def get_number_word(number):
# Takes in a numerical value, and returns
# the word corresponding to that number.
if number == 0:
return 'zero'
elif number == 1:
return 'one'
elif number == 2:
return 'two'
elif number == 3:
return 'three'
else:
return "I'm sorry, I don't know that number."
# This line will never execute, because the function has already
# returned a value and stopped executing.
print("This message will never be printed.")
# Let's try out our function.
for current_number in range(0,6):
number_word = get_number_word(current_number)
print(current_number, number_word)
###Output
_____no_output_____
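###Markdown
 Two of the topics mentioned in the next section, default argument values and accepting a variable number of arguments, look roughly like this. This is only a preview sketch added here; the details come later.
###Code
def thank_you(name='everyone'):
    # The default value is used when no argument is given.
    print("Thank you very much for your efforts on this project, %s!" % name)

def thank_all(*names):
    # *names collects any number of arguments into a tuple.
    for name in names:
        thank_you(name)

thank_you()
thank_all('Adriana', 'Billy', 'Caroline')
###Output
_____no_output_____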
###Markdown
More Later

There is much more to learn about functions, but we will get to those details later. For now, feel free to use functions whenever you find yourself writing the same code several times in a program. Some of the things you will learn when we focus on functions (a quick preview appears just above):
- How to give the arguments in your function default values.
- How to let your functions accept different numbers of arguments.

User input

Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable.

General syntax

The general case for accepting input looks something like this:
###Code
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
###Output
_____no_output_____
###Markdown
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user. In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
###Code
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
###Markdown
Using while loops to keep your programs running

Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
###Markdown
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
###Markdown
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working.

Using while loops to make menus

You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
###Code
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
_____no_output_____ |
nbs/10_Project_Questions.ipynb | ###Markdown
*Practical Data Science*

Capstone Project Questions

Matthias Griebel, Chair of Information Systems and Management

Winter Semester 21/22

__Credits for this lecture__
- https://github.com/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb
###Code
# Install on colab
!pip install transformers datasets
###Output
_____no_output_____
###Markdown
Fine Tuning Transformer for MultiClass Text Classification

Introduction

Today, we will be fine tuning a transformer model for the **Multiclass text classification** problem.
- Data:
    - Capstone Project Data.
- Language Model Used:
    - DistilBERT: a smaller transformer model compared to BERT or RoBERTa, created by applying distillation to BERT.
    - [Blog-Post](https://medium.com/huggingface/distilbert-8cf3380435b5)
    - [Research Paper](https://arxiv.org/abs/1910.01108)
    - [Documentation for python](https://huggingface.co/transformers/model_doc/distilbert.html)
- Hardware/Software Requirements:
    - Python 3.6 and above
    - PyTorch, Transformers and all the stock Python ML libraries
    - GPU-enabled setup

Importing Python Libraries and preparing the environment
###Code
# Importing the libraries needed
import torch
import transformers
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertModel, DistilBertTokenizer
import linecache
from pathlib import Path
from bs4 import BeautifulSoup
import json
import pandas as pd
import gc
gc.enable()
# Setting up the device for GPU usage
from torch import cuda
device = 'cuda' if cuda.is_available() else 'cpu'
###Output
_____no_output_____
###Markdown
Importing and Pre-Processing Data

1. Connect to Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
2. Copy and unzip data
###Code
!cp /content/drive/MyDrive/industry_data/test_small.ndjson.gz . && gzip -d test_small.ndjson.gz
!cp /content/drive/MyDrive/industry_data/train_small.ndjson.gz . && gzip -d train_small.ndjson.gz
###Output
_____no_output_____
###Markdown
3. Get categories
###Code
data_path = Path('.')
test_path = data_path/'train_small.ndjson'
with test_path.open("r", encoding="utf-8") as file:
test_data = [json.loads(line) for line in file]
categories = pd.DataFrame(test_data)
categories = categories[['industry_label', 'industry']].sort_values('industry_label').drop_duplicates().reset_index(drop=True)
categories = categories.reset_index().set_index('industry')
del test_data
categories
###Output
_____no_output_____
###Markdown
Preparing the Dataset and Dataloader

We will start by defining a few key variables that will be used later during the training/fine-tuning stage, followed by the creation of the Dataset class, which defines how the text is pre-processed before being sent to the neural network. We will also define the Dataloader that feeds the data in batches to the neural network for training and processing. Dataset and Dataloader are constructs of the PyTorch library for defining and controlling the data pre-processing and its passage to the neural network. For further reading on Dataset and Dataloader, see the [docs at PyTorch](https://pytorch.org/docs/stable/data.html).

*NdjsonDataset* Dataset Class
- This class is defined to generate the tokenized output that is used by the DistilBERT model for training.

Dataloader
- The Dataloader is used for creating the training and validation dataloaders that load data to the neural network in a defined manner. This is needed because all the data from the dataset cannot be loaded into memory at once; the amount of data loaded into memory and then passed to the neural network needs to be controlled.
- This control is achieved using parameters such as `batch_size` and `max_len`.
- Training and validation dataloaders are used in the training and validation parts of the flow, respectively.
###Code
# Defining some key variables that will be used later on in the training
MAX_LEN = 512
TRAIN_BATCH_SIZE = 4
VALID_BATCH_SIZE = 2
EPOCHS = 1
LEARNING_RATE = 1e-05
# The tokenizer checkpoint should match the model checkpoint loaded below ('distilbert-base-uncased').
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
class NdjsonDataset(torch.utils.data.Dataset):
def __init__(self, filepath, categories, tokenizer, max_len, n_lines=None):
super(NdjsonDataset).__init__()
self.filename = filepath.as_posix()
self.categories = categories
self.tokenizer = tokenizer
self.max_len = max_len
with filepath.open("r", encoding="utf-8") as file:
self.n_lines = n_lines or sum(1 for line in file)
def __len__(self):
return self.n_lines
def __getitem__(self, idx):
line = json.loads(linecache.getline(self.filename, idx+1))
industry = line['industry']
plainline = BeautifulSoup(line['html'], 'html.parser').get_text()
inputs = self.tokenizer.encode_plus(
plainline,
None,
add_special_tokens=True,
max_length=self.max_len,
pad_to_max_length=True,
return_token_type_ids=True,
truncation=True
)
ids = inputs['input_ids']
mask = inputs['attention_mask']
return {
'ids': torch.tensor(ids, dtype=torch.long),
'mask': torch.tensor(mask, dtype=torch.long),
'targets': torch.tensor(self.categories.at[industry, 'index'], dtype=torch.long)
}
# Creating the dataset and dataloader for the neural network
# How would you creat a validation dataset?
data_path = Path('.')
training_set = NdjsonDataset(filepath=data_path/'train_small.ndjson',
categories=categories,
tokenizer=tokenizer,
max_len=MAX_LEN,
n_lines=25185)
testing_set = NdjsonDataset(filepath=data_path/'test_small.ndjson',
categories=categories,
tokenizer=tokenizer,
max_len=MAX_LEN,
n_lines=8396)
train_params = {'batch_size': TRAIN_BATCH_SIZE,
'shuffle': True,
'num_workers': 0
}
test_params = {'batch_size': VALID_BATCH_SIZE,
'shuffle': True,
'num_workers': 0
}
training_loader = DataLoader(training_set, **train_params)
testing_loader = DataLoader(testing_set, **test_params)
###Output
_____no_output_____
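###Markdown
 A quick optional sanity check, added here as a convenience: pull one batch from the training dataloader and confirm the tensor shapes match the batch size and maximum sequence length.
###Code
# This reads and parses a few ndjson lines, so it may take a moment.
sample_batch = next(iter(training_loader))
print(sample_batch['ids'].shape, sample_batch['mask'].shape, sample_batch['targets'].shape)
###Output
_____no_output_____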
###Markdown
Creating the Neural Network for Fine Tuning

Neural Network
- We will be creating a neural network with the `DistillBERTClass`.
- This network will have the DistilBERT language model followed by a `dropout` and finally a `Linear` layer to obtain the final outputs.
- The data will be fed to the DistilBERT language model as defined in the dataset.
- The final layer outputs are compared to the encoded category to determine the accuracy of the model's predictions.
- We will initiate an instance of the network called `model`. This instance will be used for training and then to save the final trained model for future inference.

Loss Function and Optimizer
- The `Loss Function` and `Optimizer` are defined in the next cell.
- The `Loss Function` is used to calculate the difference between the output created by the model and the actual output.
- The `Optimizer` is used to update the weights of the neural network to improve its performance.

Further Reading
- You can refer to my [Pytorch Tutorials](https://github.com/abhimishra91/pytorch-tutorials) to get an intuition of the Loss Function and Optimizer.
- [Pytorch Documentation for Loss Function](https://pytorch.org/docs/stable/nn.html#loss-functions)
- [Pytorch Documentation for Optimizer](https://pytorch.org/docs/stable/optim.html)
- Refer to the links provided at the top of the notebook to read more about DistilBERT.
###Code
# Creating the customized model, by adding a drop out and a dense layer on top of distil bert to get the final output for the model.
class DistillBERTClass(torch.nn.Module):
def __init__(self):
super(DistillBERTClass, self).__init__()
self.l1 = DistilBertModel.from_pretrained("distilbert-base-uncased")
self.pre_classifier = torch.nn.Linear(768, 768)
self.dropout = torch.nn.Dropout(0.3)
self.classifier = torch.nn.Linear(768, len(categories))
def forward(self, input_ids, attention_mask):
output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
hidden_state = output_1[0]
pooler = hidden_state[:, 0]
pooler = self.pre_classifier(pooler)
pooler = torch.nn.ReLU()(pooler)
pooler = self.dropout(pooler)
output = self.classifier(pooler)
return output
model = DistillBERTClass()
model.to(device)
# Creating the loss function and optimizer
loss_function = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params = model.parameters(), lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
Fine Tuning the Model

After all the effort of loading and preparing the data and datasets, creating the model, and defining its loss and optimizer, this is probably the easiest step in the process. Here we define a training function that trains the model on the training dataset created above for a specified number of epochs (EPOCHS); an epoch defines how many times the complete data will be passed through the network.

The following events happen in this function to fine tune the neural network:
- The dataloader passes data to the model based on the batch size.
- The resulting output from the model and the actual category are compared to calculate the loss.
- The loss value is used to optimize the weights of the neurons in the network.
- After every 100 steps the loss value is printed in the console.
###Code
# Function to calcuate the accuracy of the model
def calcuate_accu(big_idx, targets):
n_correct = (big_idx==targets).sum().item()
return n_correct
# Defining the training function on the 80% of the dataset for tuning the distilbert model
def train(epoch):
tr_loss = 0
n_correct = 0
nb_tr_steps = 0
nb_tr_examples = 0
model.train()
for _,data in enumerate(training_loader, 0):
ids = data['ids'].to(device, dtype = torch.long)
mask = data['mask'].to(device, dtype = torch.long)
targets = data['targets'].to(device, dtype = torch.long)
outputs = model(ids, mask)
loss = loss_function(outputs, targets)
tr_loss += loss.item()
big_val, big_idx = torch.max(outputs.data, dim=1)
n_correct += calcuate_accu(big_idx, targets)
nb_tr_steps += 1
nb_tr_examples+=targets.size(0)
if _%100==0:
loss_step = tr_loss/nb_tr_steps
accu_step = (n_correct*100)/nb_tr_examples
print(f"Training Loss per 100 steps: {loss_step}")
print(f"Training Accuracy per 100 steps: {accu_step}")
optimizer.zero_grad()
loss.backward()
# # When using GPU
optimizer.step()
print(f'The Total Accuracy for Epoch {epoch}: {(n_correct*100)/nb_tr_examples}')
epoch_loss = tr_loss/nb_tr_steps
epoch_accu = (n_correct*100)/nb_tr_examples
print(f"Training Loss Epoch: {epoch_loss}")
print(f"Training Accuracy Epoch: {epoch_accu}")
return
for epoch in range(EPOCHS):
train(epoch)
###Output
_____no_output_____
###Markdown
Validating the Model (not tested)

During the validation stage we pass the unseen data (the testing dataset) to the model. This step determines how well the model performs on the unseen data.
###Code
def valid(model, testing_loader):
model.eval()
    n_correct = 0; n_wrong = 0; total = 0
    tr_loss = 0; nb_tr_steps = 0; nb_tr_examples = 0  # initialize the accumulators used below
with torch.no_grad():
for _, data in enumerate(testing_loader, 0):
ids = data['ids'].to(device, dtype = torch.long)
mask = data['mask'].to(device, dtype = torch.long)
targets = data['targets'].to(device, dtype = torch.long)
outputs = model(ids, mask).squeeze()
loss = loss_function(outputs, targets)
tr_loss += loss.item()
big_val, big_idx = torch.max(outputs.data, dim=1)
n_correct += calcuate_accu(big_idx, targets)
nb_tr_steps += 1
nb_tr_examples+=targets.size(0)
            if _%5000==0:
                loss_step = tr_loss/nb_tr_steps
                accu_step = (n_correct*100)/nb_tr_examples
                print(f"Validation Loss per 5000 steps: {loss_step}")
                print(f"Validation Accuracy per 5000 steps: {accu_step}")
epoch_loss = tr_loss/nb_tr_steps
epoch_accu = (n_correct*100)/nb_tr_examples
print(f"Validation Loss Epoch: {epoch_loss}")
print(f"Validation Accuracy Epoch: {epoch_accu}")
return epoch_accu
print('This is the validation section to print the accuracy and see how it performs')
print('Here we are leveraging on the dataloader crearted for the validation dataset, the approcah is using more of pytorch')
acc = valid(model, testing_loader)
print("Accuracy on test data = %0.2f%%" % acc)
###Output
_____no_output_____
###Markdown
Saving the Trained Model Artifacts for Inference

This is the final step in the process of fine tuning the model. The model and its vocabulary are saved locally. These files can then be used in the future to make inferences on new inputs.

Please remember that a trained neural network is only useful when used for actual inference after its training. In the lifecycle of an ML project this is only half the job done. We will leave a full treatment of inference for some other day (a minimal sketch follows below).
###Code
# Saving the files for re-use
import os
os.makedirs('./models', exist_ok=True)  # make sure the output directory exists before saving

output_model_file = './models/pytorch_distilbert_news.bin'
output_vocab_file = './models/vocab_distilbert_news.bin'
model_to_save = model
torch.save(model_to_save, output_model_file)
tokenizer.save_vocabulary(output_vocab_file)
print('All files saved')
###Output
_____no_output_____ |
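###Markdown
 The notebook defers inference, but for reference here is a minimal sketch, not part of the original notebook, of how the saved model could be used to classify a new page text. It reuses the `tokenizer`, `device`, `MAX_LEN` and `categories` objects defined earlier, and the sample text below is made up.
###Code
# Minimal inference sketch; the sample text is a hypothetical input.
loaded_model = torch.load(output_model_file)
loaded_model.to(device)
loaded_model.eval()

sample_text = "We sell fresh groceries and deliver them to your door."
encoded = tokenizer.encode_plus(sample_text, None, add_special_tokens=True, max_length=MAX_LEN,
                                pad_to_max_length=True, return_token_type_ids=True, truncation=True)
ids = torch.tensor([encoded['input_ids']], dtype=torch.long).to(device)
mask = torch.tensor([encoded['attention_mask']], dtype=torch.long).to(device)

with torch.no_grad():
    logits = loaded_model(ids, mask)
predicted_index = int(torch.argmax(logits, dim=1))
# Look up the human-readable industry label for the predicted class index.
print(categories[categories['index'] == predicted_index])
###Output
_____no_output_____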
notebooks/10_Color-physics-of-translucent-inks.ipynb | ###Markdown
Color physics of translucent inks

Rendering an RGB image

*Explain Kubelka-Munk theory*
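###Markdown
 (Added background sketch, since the note above is still a placeholder.) In the two-flux Kubelka-Munk model a paint layer is described, per wavelength or RGB channel, by an absorption coefficient $K$ and a scattering coefficient $S$ per unit thickness. An infinitely thick layer has reflectance $R_\infty = 1 + K/S - \sqrt{(K/S)^2 + 2K/S}$, which is algebraically equivalent to the $S/K$ form computed in the code further below. A layer of finite thickness $D$ on a background with reflectance $R_g$ reflects a value between $R_g$ and $R_\infty$ that depends on $K$, $S$ and $D$. The `reflectance` function used here (and re-implemented in the Functions section) evaluates this layer model per channel, which is what turns a thickness map and a background image into the rendered RGB image.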
###Code
from inktime import data, rgbkm
import numpy as np
import matplotlib.pyplot as plt
Rg = data.fetch_blackwhite()[:,:,0:3]
# todo: quick fix multiplier
D = 5 * data.fetch_star()[:,:,0]
# Hansa yellow RGB KM parameters according to Curtis (1997)
K_hansa = np.array([0.06, 0.21, 1.78])
S_hansa = np.array([0.50, 0.88, 0.009])
refl = rgbkm.reflectance(K_hansa, S_hansa, D, Rg)
plt.imshow(refl);
###Output
_____no_output_____
###Markdown
Functions
###Code
#export
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import cv2
import scipy.optimize as optimize
def reflectance(K, S, D, Rg):
'''Calculates reflectance for single colorant Kubelka-Munk model.
Based on Nobbs (1997) formulation with modified Saunderson expression for infinite reflectance.
    Function works for single channel, 3 RGB channels, and spectral data/images with multiple wavelength channels.
Parameters:
-----------
K: tuple-like (n channels)
Colorant absorption coefficients for wavelength or RGB channels
S: tuple-like (n channels)
Colorant scattering coefficients for wavelength or RGB channels
D: array ( height x width)
Colorant thickness image
Rg: array (height x width x n) or rgb tuple with shape (3,)
Background reflectance image or background color
Returns:
--------
refl: array (height x width x n)
n-channel reflectance image
'''
# create uniform background image if Rg is rgb tuple
Rg = np.array(Rg)
shape = Rg.shape
if len(shape) == 1: # understood as rgb tuple
h, w = D.shape
Rg_img = np.ones([h, w, 3])
Rg_img[:,:] = Rg
Rg = Rg_img
shape = Rg.shape
n_channels = shape[-1]
K = np.array(K).reshape(1, n_channels)
S = np.array(S).reshape(1, n_channels)
D = np.array(D).reshape(-1, 1)
Rg = Rg.reshape(-1, n_channels)
# need to return infinity for K =< 0 or S < 0 in optimization code
#pos_S = S >= 0
#pos_K = K > 0 # also non-zero
#ok = pos_S & pos_K
#Rinf = np.zeros([1, n_channels])
Rinf = (S/K) / ((S/K) + 1 + np.sqrt(1 + 2 * (S/K)))
#Rinf[ok] = (S[ok]/K[ok]) / ((S[ok]/K[ok]) + 1 + np.sqrt(1 + 2 * (S[ok]/K[ok])))
#Rinf[~ok] = np.infty
Z = D * np.sqrt(K * (K + 2 * S))
Z = np.clip(Z, a_min=0, a_max=50)
beta = np.exp(2 * Z) - 1
alpha = (1 - Rinf**2) / (1 - Rg * Rinf)
refl = (alpha * Rg + beta * Rinf) / (alpha + beta)
refl = refl.reshape(shape)
return refl
# hide
def get_optical_density(img, bg_color, blf=True):
    '''Generates ideal ink optical density model for *img* with background color *bg_color*.

    Note: this hidden cell depends on an external utility module `mu` (for `mu.normalize_image`)
    that is not imported in this notebook, so it will not run as-is.
    '''
# generate uniform background
paper_color_img = np.ones_like(img)
paper_color_img[:,:] = bg_color
# not sure if this is needed
if blf:
img = cv2.bilateralFilter(img, 10, 0.1, 120) # got these params from 2018-11-16 notebook
img_blf = img
rgb = img.transpose(2, 0, 1)
r, g, b = rgb
img_od = mu.normalize_image(-np.log(np.clip(img/paper_color_img, a_min=0, a_max=1)))
return img_od
class PaintDistribution:
'''Single colorant layer model'''
def __init__(self, D, Rg, R_meas):
'''Initializes statigraphic model with thickness array *D*, background array *Rg* and measured array *R_meas*. '''
self.D = D
self.Rg = Rg
self.R_meas = R_meas
D_max = self.D.max()
if D_max > 10:
            print('Warning: found maximum thickness {} larger than 10. Might cause numerical problems.'.format(D_max))
# better .residuals ??
def residuals(self, KS):
'''Returns residuals vector between measured and calculated for *KS* '''
n_channels = int(len(KS) / 2)
K, S = KS[0:n_channels], KS[n_channels: 2*n_channels] # split vector
img_calc = reflectance(K, S, self.D, self.Rg)
img_diff = self.R_meas - img_calc
is_non_zero_thickness = self.D > 0
res = img_diff[is_non_zero_thickness].flatten()
res = res**2 # check quadratic
return res
def fit_KS(self):
'''Non-linear fit of K and S for stratigraphic model'''
n_channels = self.Rg.shape[-1]
KS_start = np.ones(2 * n_channels)
KS_min = np.ones(2 * n_channels) * 10e-8 # not sure if this avoids numerical problems
KS_max = np.ones(2 * n_channels) * 100 # same
bounds = [KS_min, KS_max]
fit = optimize.least_squares(self.residuals, KS_start, verbose=1, bounds=bounds, xtol=1e-10, ftol=1e-10, gtol=1e-10) # self is callable (function object)
self.K_fit, self.S_fit = fit.x[0:n_channels], fit.x[n_channels:2*n_channels]
self.R_fit = reflectance(self.K_fit, self.S_fit, self.D, self.Rg) # for convenience
return self.K_fit, self.S_fit
class Ramp_model:
def __init__(self, material, rgb_bg, rgb_1, rgb_2, thickness_1, thickness_2):
'''Fits K and S to a simple two patch ramp model '''
# should extend to n-patches list but not now
self.material = material
self.Rg = np.ones([3, 4, 3], dtype=float)
self.Rg[:,:] = rgb_bg
self.R_meas = self.Rg.copy()
self.R_meas[1, 1:3] = np.array([rgb_1, rgb_2])
self.D = np.zeros([3, 4])
self.D[1, 1:3] = [thickness_1, thickness_2]
pdist = PaintDistribution(self.D, self.Rg, self.R_meas)
self.K_fit, self.S_fit = pdist.fit_KS()
self.rendering = reflectance(self.K_fit, self.S_fit, self.D, self.Rg)
print('Created 3x4 pixel ramp model object for: "{}"'.format(self.material))
###Output
_____no_output_____ |
ngwl_features.ipynb | ###Markdown
loading data from kaggle to colab
###Code
!pip install -q kaggle
from google.colab import files
files.upload()
!mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! kaggle competitions download -c ngwl-predict-customer-churn --force
###Output
_____no_output_____
###Markdown
imports
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import itertools
import gc
from datetime import datetime, timedelta, date
from google.colab import drive
drive.mount('/content/gdrive')
addresses = pd.read_csv('addresses.csv.zip')
addresses.shape
ship1 = pd.read_csv('shipments2020-01-01.csv.zip')
ship2 = pd.read_csv('shipments2020-03-01.csv.zip')
ship3 = pd.read_csv('shipments2020-04-30.csv.zip')
ship4 = pd.read_csv('shipments2020-06-29.csv.zip')
ship1.shape, ship2.shape, ship3.shape, ship4.shape
#concatenate all shipments
all_shipments = pd.concat([ship1, ship2, ship3, ship4])
del ship1, ship2, ship3, ship4
gc.collect()
all_shipments.shape
#get phone_id from addresses
all_shipments = all_shipments.merge(addresses, left_on='ship_address_id', right_on='id', how='left').drop(['id'], axis=1).drop_duplicates()
all_shipments.shape
#get calendar month from order completion timestamp
all_shipments['month'] = pd.to_datetime(all_shipments.order_completed_at).dt.month
###Output
_____no_output_____
###Markdown
features: nr of cancelled/completed orders
###Code
features = []
#aggregate features for each month (April-Sep)
for month in range(4, 10):
#take 3 months history
temp = all_shipments[(all_shipments.month<month)&(all_shipments.month>=month-3)]
#leave only cancelled/complete states
temp = temp[temp['s.order_state'].isin(['complete', 'canceled'])]
#get nr of cancelled/completed orders for each customer at each month
f = temp.pivot_table(index=['phone_id'], columns=['month', 's.order_state'], aggfunc='size', fill_value=0)
#rename columns
f.columns = ['canc_1', 'comp_1', 'canc_2', 'comp_2', 'canc_3', 'comp_3']
#change indices
f.index = f.index.astype(str)+'_2020-0'+str(month)
features.append(f)
features_all = pd.concat(features)
#save features
features_all.to_pickle('/content/gdrive/My Drive/cancelled_completed_features.pkl')
###Output
_____no_output_____
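###Markdown
To make the wide layout above a bit more concrete, here is a toy illustration of the same `pivot_table` call on hypothetical data (not from the competition files). Note that the fixed renaming to `['canc_1', 'comp_1', ...]` relies on every month/state combination being observed at least once in the window, since pandas drops column combinations that never occur (`dropna=True` by default).
###Code
# Toy example of the pivot used above (hypothetical values)
import pandas as pd

toy = pd.DataFrame({
    'phone_id': [1, 1, 1, 2, 2],
    'month': [1, 1, 2, 1, 2],
    's.order_state': ['complete', 'canceled', 'complete', 'complete', 'canceled'],
})
wide = toy.pivot_table(index=['phone_id'], columns=['month', 's.order_state'],
                       aggfunc='size', fill_value=0)
# Columns form a (month, state) MultiIndex; combinations a given customer never
# had are zero-filled thanks to fill_value=0.
print(wide)
###Output
_____no_output_____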
###Markdown
features: statistics from shipments
###Code
features = []
for month in range(4, 10):
temp = all_shipments[(all_shipments.month<month)&(all_shipments.month>=month-3)]
f = temp.groupby(['phone_id', 'month']).agg({'retailer':['nunique'], 'total_cost':[np.mean, 'max', 'min'],
'total_weight':[np.mean, 'max', 'min'], 'rate': [np.count_nonzero, 'sum', 'max']}).unstack()
stat1, stat2 = ['mean', 'max', 'min'], ['count_not_zero', 'sum', 'max']
cols = ['ret_nr']+['cost_'+stat for stat in stat1]+['weight_'+stat for stat in stat1]+['rate_'+stat for stat in stat2]
months = [1, 2, 3]
f.columns = [pair[0]+'_'+str(pair[1]) for pair in itertools.product(cols, months)]
f.index = f.index.astype(str)+'_2020-0'+str(month)
features.append(f)
features_all = pd.concat(features).fillna(-1)
#save data
features_all.to_pickle('/content/gdrive/My Drive/retailer_other_stats_features.pkl')
###Output
_____no_output_____
###Markdown
features: statistics on delivery time
###Code
#get shipment duration in hours
all_shipments['duration'] = (pd.to_datetime(all_shipments.shipped_at, format='%Y-%m-%d %H:%M:%S') - pd.to_datetime(all_shipments.shipment_starts_at, format='%Y-%m-%d %H:%M:%S')).astype('timedelta64[h]')
lb = all_shipments.duration.quantile(0.01)
ub = all_shipments.duration.quantile(0.99)
lb, ub
#change outliers to nan
all_shipments['duration'] = np.where((all_shipments['duration']<lb)|(all_shipments['duration']>ub), np.nan, all_shipments['duration'])
features = []
for month in range(4, 10):
temp = all_shipments[(all_shipments.month<month)&(all_shipments.month>=month-3)]
f = temp.groupby(['phone_id', 'month']).agg({'duration':[np.mean, 'max', 'min']}).unstack()
stats = ['mean', 'max', 'min']
cols = ['duration_'+stat for stat in stats]
months = [1, 2, 3]
f.columns = [pair[0]+'_'+str(pair[1]) for pair in itertools.product(cols, months)]
f.index = f.index.astype(str)+'_2020-0'+str(month)
features.append(f)
features_all = pd.concat(features).fillna(-1)
#save data
features_all.to_pickle('/content/gdrive/My Drive/duration_features.pkl')
###Output
_____no_output_____
###Markdown
features: nr of messages received
###Code
messages = pd.read_csv('messages.csv.zip')
messages.shape
#change timestamp to date
messages.sent = pd.to_datetime(messages.sent,unit='s')
#get month
messages['month'] = messages.sent.dt.month
#get nr of messages per month
agg_messages = messages.groupby(['user_id', 'month']).sent.count().reset_index()
#get phone_id from shipments
agg_messages = agg_messages.merge(all_shipments[['user_id', 'phone_id']], left_on='user_id', right_on='user_id').drop_duplicates()
features = []
for month in range(4, 10):
temp = agg_messages[(agg_messages.month<month)&(agg_messages.month>=month-3)]
f = temp.groupby(['phone_id', 'month']).sent.sum().unstack()
months = [1, 2, 3]
f.columns = ['messages_sent'+'_'+str(m) for m in months]
f.index = f.index.astype(str)+'_2020-0'+str(month)
features.append(f)
features_all = pd.concat(features).fillna(0)
#save data
features_all.to_pickle('/content/gdrive/My Drive/messages_sent.pkl')
###Output
_____no_output_____
###Markdown
features: nr of messages received per type
###Code
actions = pd.read_csv('actions.csv')
actions.shape
#get action type
messages = messages.merge(actions[['id', 'type']], left_on='action_id', right_on='id', how='left').drop(['id'], axis=1)
messages.shape
agg_messages_type = messages.groupby(['user_id', 'month', 'type']).sent.count().reset_index()
agg_messages_type = agg_messages_type.merge(all_shipments[['user_id', 'phone_id']], left_on='user_id', right_on='user_id').drop_duplicates()
agg_messages_type.type.value_counts()
#sms were started only in August
agg_messages_type[agg_messages_type.type=='sms'].month.value_counts()
#changed message types to push/other
agg_messages_type.loc[agg_messages_type.type!='push', 'type'] = 'other'
agg_messages_type.type.value_counts()
features = []
for month in range(4, 10):
temp = agg_messages_type[(agg_messages_type.month<month)&(agg_messages_type.month>=month-3)]
f = temp.groupby(['phone_id', 'month', 'type']).sent.sum().unstack().unstack()
months = [1, 2, 3]
types = ['other', 'push']
f.columns = [pair[0]+'_'+str(pair[1]) for pair in itertools.product(types, months)]
f.index = f.index.astype(str)+'_2020-0'+str(month)
print(month, f.shape)
features.append(f)
features_all = pd.concat(features).fillna(0)
features_all.shape
#save data
features_all.to_csv('/content/gdrive/My Drive/messages_sent_by_type.csv')
#save data
features_all.to_pickle('/content/gdrive/My Drive/messages_sent_by_type.pkl')
###Output
_____no_output_____
###Markdown
user profile features
###Code
users = pd.read_csv('user_profiles.csv.zip')
users.shape
#add phone_id
users = users.merge(all_shipments[['user_id', 'phone_id']], left_on='user_id', right_on='user_id').drop_duplicates()
users.shape
#extract city from shipments
city = all_shipments.groupby('phone_id')['s.city_name'].apply(lambda x:x.value_counts().index[0])
users = users.merge(city, left_on='phone_id', right_index=True)
users.drop(['user_id'], axis=1, inplace=True)
#change birthdate to age
def get_age(bdate):
today = date.today()
return today.year - bdate.year - ((today.month, today.day) < (bdate.month, bdate.day))
users['bdate'] = pd.to_datetime(users['bdate'], errors='coerce')
users['age'] = users.bdate.apply(get_age)
users.drop(['bdate'], axis=1, inplace=True)
users.rename(columns={'s.city_name':'city'}, inplace=True)
#save data
users.to_pickle('/content/gdrive/My Drive/user_features.pkl')
users.to_csv('/content/gdrive/My Drive/user_features.csv')
###Output
_____no_output_____ |
ISL-main.ipynb | ###Markdown
Part 1 : Data preprocessing
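The cells below use several names whose defining cells are not included here (csv, cv2, numpy, pandas, seaborn, matplotlib, the Keras layers/callbacks, the sklearn metrics and a `get_canny_edge` helper). Below is a hedged reconstruction of the assumed imports plus a stand-in for `get_canny_edge`; the real helper is not shown, so the placeholder only mirrors the observed usage (it must return a tuple whose first element is a 128x128 single-channel edge image).
###Code
# Assumed imports -- reconstructed from how the names are used below, not taken
# from the original notebook. Depending on the setup these may come from
# tensorflow.keras instead of keras.
import csv
import cv2
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, confusion_matrix
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten, BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau

def get_canny_edge(image):
    """Placeholder for the notebook's missing helper (an assumption).

    The calling code only requires a tuple whose first element is a
    128x128 single-channel edge image, so this stand-in simply resizes,
    blurs and applies Canny edge detection.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (128, 128))
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    return edges, gray
###Output
_____no_output_____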
###Code
#Loading dataset images and labels from csv files
def load_dataset(filename, n, h, w):
data = []
with open(filename, 'r') as csvfile:
# creating a csv reader object
csvreader = csv.reader(csvfile)
# extracting each data row one by one
for row in csvreader:
data.append(row)
x_data = np.zeros((n, h * w), dtype=float)
y_data = []
path = "/home/jayant/PycharmProjects/Indian sign language character recognition/"
i = 0
for row in data:
current_image_path = path + row[0]
y_data.append(int(row[1]))
current_image = cv2.imread(current_image_path)#, cv2.IMREAD_GRAYSCALE)
canny_image = get_canny_edge(current_image)[0]
# normalize and store the image
x_data[i] = (np.asarray(canny_image).reshape(1, 128 * 128)) / 255
i += 1
return x_data, y_data
x_train, y_train = load_dataset("/home/jayant/PycharmProjects/Indian sign language character recognition/Dataset/train.csv",28520,128,128)
x_test, y_test = load_dataset("/home/jayant/PycharmProjects/Indian sign language character recognition/Dataset/test.csv",7130,128,128)
y = y_test
from sklearn.preprocessing import LabelBinarizer
label_binarizer = LabelBinarizer()
y_train = label_binarizer.fit_transform(y_train)
y_test = label_binarizer.fit_transform(y_test)
#Reshaping the data from 1-D to 3-D as required through input by CNN's
x_train = x_train.reshape(-1,128,128,1)
x_test = x_test.reshape(-1,128,128,1)
f, ax = plt.subplots(2,5)
f.set_size_inches(10, 10)
k = 0
for i in range(2):
for j in range(5):
ax[i,j].imshow(x_train[k].reshape(128, 128) , cmap = "gray")
k += 1
plt.tight_layout()
# With data augmentation to prevent overfitting
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
horizontal_flip=False, # randomly flip images
vertical_flip=False) # randomly flip images
datagen.fit(x_train)
###Output
_____no_output_____
###Markdown
Part 2 : Model training
###Code
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience = 2, verbose=1,factor=0.5, min_lr=0.00001)
model = Sequential()
model.add(Conv2D(75 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu' , input_shape = (128,128,1)))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Conv2D(50 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Conv2D(25 , (3,3) , strides = 1 , padding = 'same' , activation = 'relu'))
model.add(BatchNormalization())
model.add(MaxPool2D((2,2) , strides = 2 , padding = 'same'))
model.add(Flatten())
model.add(Dense(units = 512 , activation = 'relu'))
model.add(Dropout(0.3))
model.add(Dense(units = 31 , activation = 'softmax'))
model.compile(optimizer = 'adam' , loss = 'categorical_crossentropy' , metrics = ['accuracy'])
model.summary()
history = model.fit(datagen.flow(x_train,y_train, batch_size = 8) ,epochs = 20 , validation_data = (x_test, y_test) , callbacks = [learning_rate_reduction])
print("Accuracy of the model is - " , model.evaluate(x_test,y_test)[1]*100 , "%")
# reference link : https://machinelearningmastery.com/save-load-keras-deep-learning-models/
# save model and architecture to single file
model.save("model_final.h5")
#save model into json format and weights in different file
# serialize model to JSON
model_json = model.to_json()
with open("model_json_format.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights("model_json_final.h5")
print("Saved model to disk")
epochs = [i for i in range(20)]
fig , ax = plt.subplots(1,2)
train_acc = history.history['accuracy']
train_loss = history.history['loss']
val_acc = history.history['val_accuracy']
val_loss = history.history['val_loss']
fig.set_size_inches(16,9)
ax[0].plot(epochs , train_acc , 'go-' , label = 'Training Accuracy')
ax[0].plot(epochs , val_acc , 'ro-' , label = 'Testing Accuracy')
ax[0].set_title('Training & Validation Accuracy')
ax[0].legend()
ax[0].set_xlabel("Epochs")
ax[0].set_ylabel("Accuracy")
ax[1].plot(epochs , train_loss , 'g-o' , label = 'Training Loss')
ax[1].plot(epochs , val_loss , 'r-o' , label = 'Testing Loss')
ax[1].set_title('Testing Accuracy & Loss')
ax[1].legend()
ax[1].set_xlabel("Epochs")
ax[1].set_ylabel("Loss")
plt.show()
x,y = load_dataset("/home/jayant/PycharmProjects/Indian sign language character recognition/Dataset/test.csv",7130,128,128)
print(type(y[0]))
predictions = model.predict_classes(x_test)
print(predictions)
predictions +=1
classes = ["Class " + str(i) for i in range(32)]# if i != 9]
#classes = [0,1,2,3,4,5,6,7,8,9,10,11,12]
print(classes)
print(classification_report(y, predictions, target_names = classes))
cm = confusion_matrix(y,predictions)
cm = pd.DataFrame(cm , index = [i for i in range(32)] , columns = [i for i in range(32)])
plt.figure(figsize = (15,15))
sns.heatmap(cm,cmap= "Blues", linecolor = 'black' , linewidth = 1 , annot = True, fmt='')
correct = np.nonzero(predictions == y)[0]
i = 0
for c in correct[:6]:
plt.subplot(3,2,i+1)
plt.imshow(x_test[c].reshape(128,128), cmap="gray", interpolation='none')
plt.title("Predicted Class {},Actual Class {}".format(predictions[c], y[c]))
plt.tight_layout()
i += 1
import tensorflow as tf
path ="/home/jayant/PycharmProjects/Indian-sign-language-recognition-master/data/Z/1198.jpg"
mod = tf.keras.models.load_model('model_final.h5')
img = cv2.imread(path)
img = get_canny_edge(img)[0]
#img = cv2.resize(img,(128,128))
img = img.reshape((1,128,128,1))
predictions = mod.predict(img)
print(predictions)
score = tf.nn.softmax(predictions[0])
classes = ['1', '2','3', '4', '5', '7', '8', '9', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'I', 'K', 'L', 'M',
'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U','W',
'X', 'Y', 'Z']
#classes = ['1', '2', '3','4','5', '7', '8','9','C','L','O','U']
print(
"This image most likely belongs to {} "
.format(classes[np.argmax(score)])
)
###Output
WARNING:tensorflow:11 out of the last 11 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7fcc3051c4c0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 1.]]
This image most likely belongs to Z
|
cryptodash-prediction/price prediction.ipynb | ###Markdown
Bitcoin prediction with Hierarchical Temporal Memory ML by Numenta. Based on and modified from an example in their core repo: https://github.com/htm-community/htm.core/blob/master/py/htm/examples/hotgym.py License will be the same - GNU Affero: https://github.com/htm-community/htm.core/blob/master/LICENSE.txt
###Code
import csv
import datetime
import os
import numpy as np
import random
import math
import matplotlib as mpl
import matplotlib.pyplot as plt
from jupyterthemes import jtplot
jtplot.style()
#%config InlineBackend.figure_format = 'svg'
plt.rcParams['figure.dpi'] = 150
plt.rcParams["figure.figsize"] = (11,8)
from htm.bindings.sdr import SDR, Metrics
from htm.encoders.rdse import RDSE, RDSE_Parameters
from htm.encoders.date import DateEncoder
from htm.bindings.algorithms import SpatialPooler
from htm.bindings.algorithms import TemporalMemory
from htm.algorithms.anomaly_likelihood import AnomalyLikelihood #FIXME use TM.anomaly instead, but it gives worse results than the py.AnomalyLikelihood now
from htm.bindings.algorithms import Predictor
_EXAMPLE_DIR = os.path.abspath('')
_INPUT_FILE_PATH = os.path.join(_EXAMPLE_DIR, "bitcoin_all.csv")
# copied from https://github.com/htm-community/htm.core/blob/master/py/htm/examples/hotgym.py and modified
# i guess license will be the same
parameters = {
# there are 2 (3) encoders: "value" (RDSE) & "time" (DateTime weekend, timeOfDay)
'enc': {
"value" :
{'resolution': 0.88, 'size': 800, 'sparsity': 0.02},
"time":
{'timeOfDay': (30, 1), 'weekend': 21}
},
'predictor': {'sdrc_alpha': 0.1},
'sp': {'boostStrength': 3.0,
'columnCount': 238,
'localAreaDensity': 0.06395604395604396,
'potentialPct': 0.95,
'synPermActiveInc': 0.04,
'synPermConnected': 0.13999999999999999,
'synPermInactiveDec': 0.06},
'tm': {'activationThreshold': 17,
'cellsPerColumn': 20,
'initialPerm': 0.11,
'maxSegmentsPerCell': 128,
'maxSynapsesPerSegment': 64,
'minThreshold': 10,
'newSynapseCount': 128,
'permanenceDec': 0.1,
'permanenceInc': 0.1},
'anomaly': {
'likelihood':
{#'learningPeriod': int(math.floor(self.probationaryPeriod / 2.0)),
#'probationaryPeriod': self.probationaryPeriod-default_parameters["anomaly"]["likelihood"]["learningPeriod"],
'probationaryPct': 0.1,
'reestimationPeriod': 100} #These settings are copied from NAB
}
}
import pprint
print("Parameters:")
pprint.pprint(parameters, indent=4)
print("")
# Read the input file.
records = []
with open(_INPUT_FILE_PATH, "r") as fin:
reader = csv.reader(fin)
headers = next(reader)
for record in reader:
records.append(record)
x_g = []
y_g = []
for r in records:
x_g.append(datetime.datetime.strptime(r[0], "%Y-%m-%d %H:%M:%S"))
y_g.append(float(r[1]))
plt.xlabel("Time")
plt.ylabel("Val")
plt.plot(x_g, y_g)
print("running....")
# Make the Encoders. These will convert input data into binary representations.
dateEncoder = DateEncoder(timeOfDay= parameters["enc"]["time"]["timeOfDay"],
weekend = parameters["enc"]["time"]["weekend"])
scalarEncoderParams = RDSE_Parameters()
scalarEncoderParams.size = parameters["enc"]["value"]["size"]
scalarEncoderParams.sparsity = parameters["enc"]["value"]["sparsity"]
scalarEncoderParams.resolution = parameters["enc"]["value"]["resolution"]
scalarEncoder = RDSE( scalarEncoderParams )
encodingWidth = (dateEncoder.size + scalarEncoder.size)
enc_info = Metrics( [encodingWidth], 999999999 )
# Make the HTM. SpatialPooler & TemporalMemory & associated tools.
spParams = parameters["sp"]
sp = SpatialPooler(
inputDimensions = (encodingWidth,),
columnDimensions = (spParams["columnCount"],),
potentialPct = spParams["potentialPct"],
potentialRadius = encodingWidth,
globalInhibition = True,
localAreaDensity = spParams["localAreaDensity"],
synPermInactiveDec = spParams["synPermInactiveDec"],
synPermActiveInc = spParams["synPermActiveInc"],
synPermConnected = spParams["synPermConnected"],
boostStrength = spParams["boostStrength"],
wrapAround = True
)
sp_info = Metrics( sp.getColumnDimensions(), 999999999 )
tmParams = parameters["tm"]
tm = TemporalMemory(
columnDimensions = (spParams["columnCount"],),
cellsPerColumn = tmParams["cellsPerColumn"],
activationThreshold = tmParams["activationThreshold"],
initialPermanence = tmParams["initialPerm"],
connectedPermanence = spParams["synPermConnected"],
minThreshold = tmParams["minThreshold"],
maxNewSynapseCount = tmParams["newSynapseCount"],
permanenceIncrement = tmParams["permanenceInc"],
permanenceDecrement = tmParams["permanenceDec"],
predictedSegmentDecrement = 0.0,
maxSegmentsPerCell = tmParams["maxSegmentsPerCell"],
maxSynapsesPerSegment = tmParams["maxSynapsesPerSegment"]
)
tm_info = Metrics( [tm.numberOfCells()], 999999999 )
# setup likelihood, these settings are used in NAB
anParams = parameters["anomaly"]["likelihood"]
probationaryPeriod = int(math.floor(float(anParams["probationaryPct"])*len(records)))
learningPeriod = int(math.floor(probationaryPeriod / 2.0))
anomaly_history = AnomalyLikelihood(learningPeriod= learningPeriod,
estimationSamples= probationaryPeriod - learningPeriod,
reestimationPeriod= anParams["reestimationPeriod"])
predictor = Predictor( steps=[1, 90], alpha=parameters["predictor"]['sdrc_alpha'] )
# Resolution is how accurate the prediction should be to some dollar amount
# resolution 1000 means the prediction will only be accurate within 1000
# That is fine for bitcoin, but the graph will be broken if it is something like dogecoin,
# which is worth under 1 cent
# Predictor resolution needs to match data scale, or else it will take a long time to process.
#Should be based on data fluctuation range I think
resolutions_choices = {
10: [10000, 999999999999], # max between 10000 - ∞
1: [1000, 10000],
0.1: [100, 1000],
0.01: [0, 10]
}
predictor_resolution = 10
for res in resolutions_choices:
price_range = resolutions_choices[res]
if max(y_g) >= price_range[0] and max(y_g) <= price_range[1]:
predictor_resolution = res
print("predictor_resolution")
print(predictor_resolution)
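# Note: the table above leaves a gap between 10 and 100, so a series whose max
# falls in that range keeps the initial default of 10. For example (hypothetical
# values): max(y_g) = 45000 -> 10, max(y_g) = 0.05 -> 0.01, max(y_g) = 50 -> no
# branch matches and the resolution stays at the default of 10.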
# Iterate through every datum in the dataset, record the inputs & outputs.
inputs = []
anomaly = []
anomalyProb = []
predictions = {1: [], 90: []}
for count, record in enumerate(records):
# Convert date string into Python date object.
dateString = datetime.datetime.strptime(record[0], "%Y-%m-%d %H:%M:%S")
# Convert data value string into float.
consumption = float(record[1])
inputs.append( consumption )
# Call the encoders to create bit representations for each value. These are SDR objects.
dateBits = dateEncoder.encode(dateString)
consumptionBits = scalarEncoder.encode(consumption)
# Concatenate all these encodings into one large encoding for Spatial Pooling.
encoding = SDR( encodingWidth ).concatenate([consumptionBits, dateBits])
enc_info.addData( encoding )
# Create an SDR to represent active columns, This will be populated by the
# compute method below. It must have the same dimensions as the Spatial Pooler.
activeColumns = SDR( sp.getColumnDimensions() )
# Execute Spatial Pooling algorithm over input space.
sp.compute(encoding, True, activeColumns)
sp_info.addData( activeColumns )
# Execute Temporal Memory algorithm over active mini-columns.
tm.compute(activeColumns, learn=True)
tm_info.addData( tm.getActiveCells().flatten() )
# Predict what will happen, and then train the predictor based on what just happened.
pdf = predictor.infer( tm.getActiveCells() )
for n in (1, 90):
if pdf[n]:
predictions[n].append( np.argmax( pdf[n] ) * predictor_resolution )
else:
predictions[n].append(float('nan'))
anomalyLikelihood = anomaly_history.anomalyProbability( consumption, tm.anomaly )
anomaly.append( tm.anomaly )
anomalyProb.append( anomalyLikelihood )
predictor.learn(count, tm.getActiveCells(), int(consumption / predictor_resolution))
# Print information & statistics about the state of the HTM.
print("Encoded Input", enc_info)
print("")
print("Spatial Pooler Mini-Columns", sp_info)
print(str(sp))
print("")
print("Temporal Memory Cells", tm_info)
print(str(tm))
print("")
# Shift the predictions so that they are aligned with the input they predict.
for n_steps, pred_list in predictions.items():
for x in range(n_steps):
pred_list.insert(0, float('nan'))
pred_list.pop()
# Calculate the predictive accuracy, Root-Mean-Squared
accuracy = {1: 0, 90: 0}
accuracy_samples = {1: 0, 90: 0}
for idx, inp in enumerate(inputs):
for n in predictions: # For each [N]umber of time steps ahead which was predicted.
val = predictions[n][ idx ]
if not math.isnan(val):
accuracy[n] += (inp - val) ** 2
accuracy_samples[n] += 1
for n in sorted(predictions):
accuracy[n] = (accuracy[n] / accuracy_samples[n]) ** .5
print("Predictive Error (RMS)", n, "steps ahead:", accuracy[n])
# Show info about the anomaly (mean & std)
print("Anomaly Mean", np.mean(anomaly))
print("Anomaly Std ", np.std(anomaly))
# Plot the Predictions and Anomalies.
print("Graph of training progress through the set. Gets more accurate as it gets further through the set:")
plt.subplot(2,1,1)
plt.title("Predictions")
plt.xlabel("Time")
plt.ylabel("Price")
plt.plot(np.arange(len(inputs)), y_g,
np.arange(len(inputs)), predictions[1],
np.arange(len(inputs)), predictions[90])
plt.legend(labels=('Input', '1 Step Prediction, Shifted 1 step', '90 Step Prediction, Shifted 90 steps'))
plt.subplot(2,1,2)
plt.title("Anomaly Score")
plt.xlabel("Time")
plt.ylabel("Normalized price / anomaly score")
inputs = np.array(inputs) / max(inputs)
plt.plot(np.arange(len(inputs)), inputs,
np.arange(len(inputs)), anomaly)
plt.legend(labels=('Input', 'Anomaly Score'))
plt.subplots_adjust(hspace=0.4)
plt.show()
print("-accuracy[90]:", -accuracy[90])
#print(records)
goal_len = len(records) + 90
while len(records) < goal_len:
record = records[-1]
# Convert date string into Python date object.
dateString = datetime.datetime.strptime(record[0], "%Y-%m-%d %H:%M:%S")
dateStringPlusOne = (datetime.datetime.strptime(record[0], "%Y-%m-%d %H:%M:%S")+datetime.timedelta(days=1))
dateStringPlusOne = dateStringPlusOne.strftime("%Y-%m-%d %H:%M:%S")
# Convert data value string into float.
consumption = float(record[1])
#inputs.append( consumption )
# Call the encoders to create bit representations for each value. These are SDR objects.
dateBits = dateEncoder.encode(dateString)
consumptionBits = scalarEncoder.encode(consumption)
# Concatenate all these encodings into one large encoding for Spatial Pooling.
encoding = SDR( encodingWidth ).concatenate([consumptionBits, dateBits])
enc_info.addData( encoding )
# Create an SDR to represent active columns, This will be populated by the
# compute method below. It must have the same dimensions as the Spatial Pooler.
activeColumns = SDR( sp.getColumnDimensions() )
# Execute Spatial Pooling algorithm over input space.
sp.compute(encoding, True, activeColumns)
sp_info.addData( activeColumns )
# Execute Temporal Memory algorithm over active mini-columns.
tm.compute(activeColumns, learn=False)
tm_info.addData( tm.getActiveCells().flatten() )
# Predict what will happen, and then add to records for next prediction
pdf = predictor.infer( tm.getActiveCells() )
#for n in (1):#, 5):
if pdf[1]:
#predictions[n].append( np.argmax( pdf[n] ) * predictor_resolution )
records.append([dateStringPlusOne, np.argmax( pdf[1] ) * predictor_resolution])
else:
records.append([dateStringPlusOne, records[-1][1]])
#predictions[n].append(float('nan'))
y_g2 = []
for r in records:
#x_g.append(datetime.datetime.strptime(r[0], "%Y-%m-%d %H:%M:%S"))
y_g2.append(float(r[1]))
#print(y_)
plt.subplot(2,1,1)
plt.title("Prediction 90 days out")
plt.xlabel("Time")
plt.ylabel("Val")
plt.plot(np.arange(len(y_g2)), y_g2, np.arange(len(y_g)), y_g)
plt.legend(labels=('Predicted data', 'Real data'))
y_g = y_g[-30:]
y_g2 = y_g2[-120:]
y_g2 = y_g2[0:len(y_g) + 14]
plt.subplot(2,1,2)
plt.title("Prediction 14 days out")
plt.xlabel("Time")
plt.ylabel("Val")
plt.plot(np.arange(len(y_g2)), y_g2, np.arange(len(y_g)), y_g)
plt.legend(labels=('Predicted data', 'Real data'))
plt.subplots_adjust(hspace=0.4)
plt.show()
import sys
csv = "i,ds,y\n"
for i in range(len(records)):
csv += "_," + ",".join(map(str, records[i]))
csv += "\n"
f = open("./predictions-cache/bitcoin.csv", "w")#+sys.argv[1], "w")
f.write(csv)
f.close()
###Output
_____no_output_____ |
runge.ipynb | ###Markdown
Runge's Phenomenon Prof. Pedro Peixoto May 2022
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
def lagrange(x, y, x_samples):
y_samples = np.zeros_like(x_samples)
for xi, yi in zip(x, y):
y_samples += yi*np.prod([(x_samples - xj)/(xi - xj)
for xj in x if xi!=xj], axis=0)
return y_samples
#Runge function
f = lambda x: 1/(1 + 25*x**2)
n= 20
#Data
x = np.linspace(-1, 1, n)
# Points where I want to evaluate (interpolate) -
# I will use many more points to see how the interpolant behaves between the given data points
x_full = np.linspace(-1, 1, 10000)
y_full = lagrange(x, f(x), x_full)
plt.plot(x, f(x), "k")
plt.plot(x_full, y_full)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
print("n=", n, " Max Error=", np.max(np.abs(y_full-f(x_full))))
# https://notebook.community/tclaudioe/Scientific-Computing/SC1/07_Polynomial_Interpolation_1D
def Chebyshev(xmin,xmax,n=5):
# This function calculates the n Chebyshev points and plots or returns them depending on ax
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
y = np.sin((2*ns-1)*np.pi/(2*n))
plt.figure(figsize=(10,5))
plt.ylim(-0.1,1.1)
plt.xlim(-1.1,1.1)
plt.plot(np.cos(np.linspace(0,np.pi)),np.sin(np.linspace(0,np.pi)),'k-')
plt.plot([-2,2],[0,0],'k-')
plt.plot([0,0],[-1,2],'k-')
for i in range(len(y)):
plt.plot([x[i],x[i]],[0,y[i]],'r-')
plt.plot([0,x[i]],[0,y[i]],'r-')
plt.plot(x,[0]*len(x),'bo',label='Chebyshev points')
plt.plot(x,y,'ro')
plt.xlabel('$x$')
plt.title('n = '+str(n))
plt.grid(True)
plt.legend(loc='best')
plt.show()
def Chebyshev_points(xmin,xmax,n):
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
return (xmin+xmax)/2 + (xmax-xmin)*x/2
from ipywidgets import interact, fixed, IntSlider
interact(Chebyshev,xmin=fixed(-1),xmax=fixed(1),n=(2,50))
#Runge function
f = lambda x: 1/(1 + 25*x**2)
n= 20
#Data
x = Chebyshev_points(-1,1,n)
# Points where I want to evaluate (interpolate) -
# I will use many more points to see how the interpolant behaves between the given data points
x_full = np.linspace(-1, 1, 10000)
y_full = lagrange(x, f(x), x_full)
plt.plot(x, f(x), "k")
plt.plot(x_full, y_full)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
print("n=", n, " Max Error=", np.max(np.abs(y_full-f(x_full))))
###Output
_____no_output_____
###Markdown
A manifestation of Runge's phenomenon
###Code
import numpy as np
import matplotlib.pyplot as plt
def f(x):
''' The Runge's function
:param x: independent variable
:type x: double or numpy array of doubles
:return: dependent variable after function evaluation
:rtype: double or numpy array of doubles
'''
return 1.0/(1.0+25.0*x**2)
def newton_divdiff(x,node_x,node_y):
'''Generates an interpolating polynomial using the Newton's divided differences
:param x: the locations where the Newton's polynomials needs to be evaluated
:type x: double or np.array
:param node_x: the x values for the locations to be interpolated
:type node_x: list or np.array
:param node_y: the y values for the locations to be interpolated
:type node_y: list or np.array
:return: the interpolating polynomial at the provided x locations
:rtype: list or np.array
'''
n = len(node_x)-1
F = np.zeros((n+1,n+1))
F[:,0] = node_y
# Compute the coefficients of the Newton's polynomial
# and store them on the main diagonal of F
for i in range(1,n+1):
for j in range(1,i+1):
F[i,j] = (F[i,j-1] - F[i-1,j-1])/(node_x[i] - node_x[i-j])
# Generate the interpolating polynomial
poly = np.zeros(len(x))
for i in range(n+1):
# Compute the product of the x
prod_x = 1.0
for j in range(i):
prod_x *= (x-node_x[j])
poly += F[i,i]*prod_x
# Return the polynomial
return poly
# 21 Equally spaced point in [-1,1]
n=21
node_x = np.linspace(-1.0,1.0,n)
node_y = f(node_x)
# Provided the x locations where to evaluate the interpolating polynomial
x = np.linspace(-1,1,1000)
# Construct the interpolant and plot it
plt.plot(node_x,node_y,'ro')
plt.plot(x,f(x),'k--',lw=2,label='function f',alpha=0.7)
plt.plot(x,newton_divdiff(x,node_x,node_y),lw=2,label='Lagrange interpolating polynomial',alpha=0.9)
plt.ylim([0,1.1])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Lagrange interpolation on equally spaced points is an increasingly ill-conditioned process. Reminiscent of **overfitting** and **generalization** in neural networks.
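A standard way to quantify this (sketch only): for nodes $x_0,\dots,x_n$ the interpolation error at $x$ is
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{j=0}^{n}(x - x_j)$$
for some $\xi$ in the interval. For the Runge function $f(x) = 1/(1+25x^2)$ the high-order derivatives grow rapidly, and on equally spaced nodes the node polynomial $\prod_j(x-x_j)$ is largest near the ends of $[-1,1]$, which is exactly where the oscillations blow up. Equivalently, the Lebesgue constant grows exponentially in $n$ for equally spaced nodes but only logarithmically for Chebyshev nodes, which is why the clustered nodes used in the next cell tame the oscillations.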
###Code
# Use Chebyshev nodes
n=21
k = np.arange(n)
node_x = -np.cos(k*np.pi/n)
node_y = f(node_x)
# Provided the x locations where to evaluate the interpolating polynomial
x = np.linspace(-1,1,1000)
# Construct the interpolant and plot it
plt.plot(node_x,node_y,'ro')
plt.plot(x,f(x),'k--',lw=2,label='function f',alpha=0.7)
plt.plot(x,newton_divdiff(x,node_x,node_y),lw=2,label='Lagrange interpolating polynomial',alpha=0.9)
plt.ylim([0,1.1])
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/netcdf.ipynb | ###Markdown
NetCDF handlingNetCDF formatted files are much faster to read and write for large datasets. In order to make the most of this, the `ScmRun` objects have the ability to read and write netCDF files.
###Code
# NBVAL_IGNORE_OUTPUT
import traceback
from glob import glob
import numpy as np
import seaborn as sns
import xarray as xr
import pandas as pd
from scmdata.run import ScmRun, run_append
from scmdata.netcdf import nc_to_run
pd.set_option("display.width", 120)
pd.set_option("display.max_columns", 15)
pd.set_option("display.max_colwidth", 80)
pd.set_option("display.min_rows", 20)
###Output
_____no_output_____
###Markdown
Helper bits and piecs
###Code
OUT_FNAME = "/tmp/out_runs.nc"
def new_timeseries(
n=100,
count=1,
model="example",
scenario="ssp119",
variable="Surface Temperature",
unit="K",
region="World",
cls=ScmRun,
**kwargs,
):
data = np.random.rand(n, count) * np.arange(n)[:, np.newaxis]
index = 2000 + np.arange(n)
return cls(
data,
columns={
"model": model,
"scenario": scenario,
"variable": variable,
"region": region,
"unit": unit,
**kwargs,
},
index=index,
)
###Output
_____no_output_____
###Markdown
Let's create an `ScmRun` which contains a few variables and a number of runs. Such a dataframe would be used to store the results from an ensemble of simple climate model runs.
###Code
# NBVAL_IGNORE_OUTPUT
runs = run_append(
[
new_timeseries(
count=3,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
run_id=run_id,
)
for run_id in range(10)
]
)
runs.metadata["source"] = "fake data"
runs
###Output
_____no_output_____
###Markdown
Reading/Writing to NetCDF4 BasicsWriting the runs to disk is easy. The one trick is that each variable and dimension combination must have unique metadata. If they do not, you will receive an error message like the below.
###Code
try:
runs.to_nc(OUT_FNAME, dimensions=["region"])
except ValueError:
traceback.print_exc(limit=0, chain=False)
###Output
ValueError: dimensions: `['region']` and extras: `[]` do not uniquely define the timeseries, please add extra dimensions and/or extras
###Markdown
In our dataset, there is more than one "run_id" per variable hence we need to use a different dimension, `run_id`, because this will result in each variable's remaining metadata being unique.
###Code
runs.to_nc(OUT_FNAME, dimensions=["run_id"])
###Output
_____no_output_____
###Markdown
The output netCDF file can be read using the `from_nc` method, `nc_to_run` function or directly using `xarray`.
###Code
# NBVAL_IGNORE_OUTPUT
runs_netcdf = ScmRun.from_nc(OUT_FNAME)
runs_netcdf
# NBVAL_IGNORE_OUTPUT
nc_to_run(ScmRun, OUT_FNAME)
# NBVAL_IGNORE_OUTPUT
xr.load_dataset(OUT_FNAME)
###Output
_____no_output_____
###Markdown
The additional `metadata` in `runs` is also serialized and deserialized in the netCDF files. The `metadata` of the loaded `ScmRun` will also contain some additional fields about the file creation.
###Code
# NBVAL_IGNORE_OUTPUT
assert "source" in runs_netcdf.metadata
runs_netcdf.metadata
###Output
_____no_output_____
###Markdown
Splitting your dataSometimes if you have complicated ensemble runs it might be more efficient to split the data into smaller subsets.In the below example we iterate over scenarios to produce a netCDF file per scenario.
###Code
large_run = []
# 10 runs for each scenario
for sce in ["ssp119", "ssp370", "ssp585"]:
large_run.extend(
[
new_timeseries(
count=3,
scenario=sce,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
paraset_id=paraset_id,
)
for paraset_id in range(10)
]
)
large_run = run_append(large_run)
# also set a run_id (often we'd have paraset_id and run_id,
# one which keeps track of the parameter set we've run and
# the other which keeps track of the run in a large ensemble)
large_run["run_id"] = large_run.meta.index.values
large_run
###Output
_____no_output_____
###Markdown
Data for each scenario can then be loaded independently instead of having to load all the data and then filtering
###Code
for sce_run in large_run.groupby("scenario"):
sce = sce_run.get_unique_meta("scenario", True)
sce_run.to_nc(
"/tmp/out-{}-sparse.nc".format(sce),
dimensions=["run_id", "paraset_id"],
)
# NBVAL_IGNORE_OUTPUT
ScmRun.from_nc("/tmp/out-ssp585-sparse.nc").filter("Surface Temperature").line_plot()
###Output
_____no_output_____
###Markdown
For such a data set, since both `run_id` and `paraset_id` vary, both could be added as dimensions in the file. The one problem with this approach is that you get very sparse arrays because the data is written on a 100 x 30 x 90 (time points x paraset_id x run_id) grid but there's only 90 timeseries so you end up with 180 timeseries worth of nans (although this is a relatively small problem because the netCDF files use compression to minimise the impact of the extra nan values).
###Code
# NBVAL_IGNORE_OUTPUT
xr.load_dataset("/tmp/out-ssp585-sparse.nc")
# NBVAL_IGNORE_OUTPUT
# Load all scenarios
run_append([ScmRun.from_nc(fname) for fname in glob("/tmp/out-ssp*-sparse.nc")])
###Output
_____no_output_____
###Markdown
An alternative to the sparse arrays is to specify the variables in the `extras` attribute. If possible, this adds the metadata to the netCDF file as an extra co-ordinate, which uses one of the dimensions as its co-ordinate. If using one of the dimensions as a co-ordinate would not specify the metadata uniquely, we add the extra as an additional co-ordinate, which itself has co-ordinates of `_id`. This `_id` co-ordinate provides a unique mapping between the extra metadata and the timeseries.
###Code
for sce_run in large_run.groupby("scenario"):
sce = sce_run.get_unique_meta("scenario", True)
sce_run.to_nc(
"/tmp/out-{}-extras.nc".format(sce),
dimensions=["run_id"],
extras=["paraset_id"],
)
###Output
_____no_output_____
###Markdown
`paraset_id` is uniquely defined by `run_id` so we don't end up with an extra `_id` co-ordinate.
###Code
# NBVAL_IGNORE_OUTPUT
xr.load_dataset("/tmp/out-ssp585-extras.nc")
# NBVAL_IGNORE_OUTPUT
ScmRun.from_nc("/tmp/out-ssp585-extras.nc").filter("Surface Temperature").line_plot()
###Output
_____no_output_____
###Markdown
If we use dimensions and extra such that our extra co-ordinates are not uniquely defined by the regions, an `_id` dimension is automatically added to ensure we don't lose any information.
###Code
large_run.to_nc(
"/tmp/out-extras-sparse.nc",
dimensions=["scenario"],
extras=["paraset_id", "run_id"],
)
# NBVAL_IGNORE_OUTPUT
xr.load_dataset("/tmp/out-extras-sparse.nc")
###Output
_____no_output_____
###Markdown
Multi-dimensional data**scmdata** can also handle having more than one dimension. This can be especially helpful if you have output from a number of models (IAMs), scenarios, regions and runs.
###Code
multi_dimensional_run = []
for model in ["AIM", "GCAM", "MESSAGE", "REMIND"]:
for sce in ["ssp119", "ssp370", "ssp585"]:
for region in ["World", "R5LAM", "R5MAF", "R5ASIA", "R5OECD", "R5REF"]:
multi_dimensional_run.extend(
[
new_timeseries(
count=3,
model=model,
scenario=sce,
region=region,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
paraset_id=paraset_id,
)
for paraset_id in range(10)
]
)
multi_dimensional_run = run_append(multi_dimensional_run)
multi_dimensional_run
multi_dim_outfile = "/tmp/out-multi-dimensional.nc"
multi_dimensional_run.to_nc(
multi_dim_outfile,
dimensions=("region", "model", "scenario", "paraset_id"),
)
# NBVAL_IGNORE_OUTPUT
xr.load_dataset(multi_dim_outfile)
# NBVAL_IGNORE_OUTPUT
multi_dim_loaded_co2_conc = ScmRun.from_nc(multi_dim_outfile).filter(
"Atmospheric Concentrations|CO2"
)
seaborn_df = multi_dim_loaded_co2_conc.long_data()
seaborn_df.head()
# NBVAL_IGNORE_OUTPUT
sns.relplot(
data=seaborn_df,
x="time",
y="value",
units="paraset_id",
estimator=None,
hue="scenario",
style="model",
col="region",
col_wrap=3,
kind="line",
)
###Output
_____no_output_____
###Markdown
NetCDF handlingNetCDF formatted files are much faster to read and write for large datasets. In order to make the most of this, the `ScmRun` objects have the ability to read and write netCDF files.
###Code
# NBVAL_IGNORE_OUTPUT
import traceback
from glob import glob
import numpy as np
import seaborn as sns
import xarray as xr
import pandas as pd
from scmdata.run import ScmRun, run_append
from scmdata.netcdf import nc_to_run
pd.set_option("display.width", 120)
pd.set_option("display.max_columns", 15)
pd.set_option("display.max_colwidth", 80)
pd.set_option("display.min_rows", 20)
###Output
_____no_output_____
###Markdown
Helper bits and pieces
###Code
OUT_FNAME = "/tmp/out_runs.nc"
def new_timeseries(
n=100,
count=1,
model="example",
scenario="ssp119",
variable="Surface Temperature",
unit="K",
region="World",
cls=ScmRun,
**kwargs,
):
data = np.random.rand(n, count) * np.arange(n)[:, np.newaxis]
index = 2000 + np.arange(n)
return cls(
data,
columns={
"model": model,
"scenario": scenario,
"variable": variable,
"region": region,
"unit": unit,
**kwargs,
},
index=index,
)
###Output
_____no_output_____
###Markdown
Let's create an `ScmRun` which contains a few variables and a number of runs. Such a dataframe would be used to store the results from an ensemble of simple climate model runs.
###Code
# NBVAL_IGNORE_OUTPUT
runs = run_append(
[
new_timeseries(
count=3,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
run_id=run_id,
)
for run_id in range(10)
]
)
runs.metadata["source"] = "fake data"
runs
###Output
_____no_output_____
###Markdown
Reading/Writing to NetCDF4 BasicsWriting the runs to disk is easy. The one trick is that each variable and dimension combination must have unique metadata. If they do not, you will receive an error message like the below.
###Code
try:
runs.to_nc(OUT_FNAME, dimensions=["region"])
except ValueError:
traceback.print_exc(limit=0, chain=False)
###Output
Traceback (most recent call last):
ValueError: dimensions: `['region']` and extras: `[]` do not uniquely define the timeseries, please add extra dimensions and/or extras
###Markdown
In our dataset, there is more than one "run_id" per variable hence we need to use a different dimension, `run_id`, because this will result in each variable's remaining metadata being unique.
###Code
runs.to_nc(OUT_FNAME, dimensions=["run_id"])
###Output
_____no_output_____
###Markdown
The output netCDF file can be read using the `from_nc` method, `nc_to_run` function or directly using `xarray`.
###Code
# NBVAL_IGNORE_OUTPUT
runs_netcdf = ScmRun.from_nc(OUT_FNAME)
runs_netcdf
# NBVAL_IGNORE_OUTPUT
nc_to_run(ScmRun, OUT_FNAME)
# NBVAL_IGNORE_OUTPUT
xr.load_dataset(OUT_FNAME)
###Output
_____no_output_____
###Markdown
The additional `metadata` in `runs` is also serialized and deserialized in the netCDF files. The `metadata` of the loaded `ScmRun` will also contain some additional fields about the file creation.
###Code
# NBVAL_IGNORE_OUTPUT
assert "source" in runs_netcdf.metadata
runs_netcdf.metadata
###Output
_____no_output_____
###Markdown
Splitting your dataSometimes if you have complicated ensemble runs it might be more efficient to split the data into smaller subsets.In the below example we iterate over scenarios to produce a netCDF file per scenario.
###Code
large_run = []
# 10 runs for each scenario
for sce in ["ssp119", "ssp370", "ssp585"]:
large_run.extend(
[
new_timeseries(
count=3,
scenario=sce,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
paraset_id=paraset_id,
)
for paraset_id in range(10)
]
)
large_run = run_append(large_run)
# also set a run_id (often we'd have paraset_id and run_id,
# one which keeps track of the parameter set we've run and
# the other which keeps track of the run in a large ensemble)
large_run["run_id"] = large_run.meta.index.values
large_run
###Output
_____no_output_____
###Markdown
Data for each scenario can then be loaded independently instead of having to load all the data and then filtering
###Code
for sce_run in large_run.groupby("scenario"):
sce = sce_run.get_unique_meta("scenario", True)
sce_run.to_nc(
"/tmp/out-{}-sparse.nc".format(sce),
dimensions=["run_id", "paraset_id"],
)
# NBVAL_IGNORE_OUTPUT
ScmRun.from_nc("/tmp/out-ssp585-sparse.nc").filter(
"Surface Temperature"
).line_plot()
###Output
_____no_output_____
###Markdown
For such a data set, since both `run_id` and `paraset_id` vary, both could be added as dimensions in the file. The one problem with this approach is that you get very sparse arrays because the data is written on a 100 x 30 x 90 (time points x paraset_id x run_id) grid but there's only 90 timeseries so you end up with 180 timeseries worth of nans (although this is a relatively small problem because the netCDF files use compression to minimise the impact of the extra nan values).
###Code
# NBVAL_IGNORE_OUTPUT
xr.load_dataset("/tmp/out-ssp585-sparse.nc")
# NBVAL_IGNORE_OUTPUT
# Load all scenarios
run_append(
[ScmRun.from_nc(fname) for fname in glob("/tmp/out-ssp*-sparse.nc")]
)
###Output
_____no_output_____
###Markdown
An alternative to the sparse arrays is to specify the variables in the `extras` attribute. If possible, this adds the metadata to the netCDF file as an extra co-ordinate, which uses one of the dimensions as its co-ordinate. If using one of the dimensions as a co-ordinate would not specify the metadata uniquely, we add the extra as an additional co-ordinate, which itself has co-ordinates of `_id`. This `_id` co-ordinate provides a unique mapping between the extra metadata and the timeseries.
###Code
for sce_run in large_run.groupby("scenario"):
sce = sce_run.get_unique_meta("scenario", True)
sce_run.to_nc(
"/tmp/out-{}-extras.nc".format(sce),
dimensions=["run_id"],
extras=["paraset_id"],
)
###Output
_____no_output_____
###Markdown
`paraset_id` is uniquely defined by `run_id` so we don't end up with an extra `_id` co-ordinate.
###Code
# NBVAL_IGNORE_OUTPUT
xr.load_dataset("/tmp/out-ssp585-extras.nc")
# NBVAL_IGNORE_OUTPUT
ScmRun.from_nc("/tmp/out-ssp585-extras.nc").filter(
"Surface Temperature"
).line_plot()
###Output
_____no_output_____
###Markdown
If we use dimensions and extra such that our extra co-ordinates are not uniquely defined by the regions, an `_id` dimension is automatically added to ensure we don't lose any information.
###Code
large_run.to_nc(
"/tmp/out-extras-sparse.nc",
dimensions=["scenario"],
extras=["paraset_id", "run_id"],
)
# NBVAL_IGNORE_OUTPUT
xr.load_dataset("/tmp/out-extras-sparse.nc")
###Output
_____no_output_____
###Markdown
Multi-dimensional data**scmdata** can also handle having more than one dimension. This can be especially helpful if you have output from a number of models (IAMs), scenarios, regions and runs.
###Code
multi_dimensional_run = []
for model in ["AIM", "GCAM", "MESSAGE", "REMIND"]:
for sce in ["ssp119", "ssp370", "ssp585"]:
for region in ["World", "R5LAM", "R5MAF", "R5ASIA", "R5OECD", "R5REF"]:
multi_dimensional_run.extend(
[
new_timeseries(
count=3,
model=model,
scenario=sce,
region=region,
variable=[
"Surface Temperature",
"Atmospheric Concentrations|CO2",
"Radiative Forcing",
],
unit=["K", "ppm", "W/m^2"],
paraset_id=paraset_id,
)
for paraset_id in range(10)
]
)
multi_dimensional_run = run_append(multi_dimensional_run)
multi_dimensional_run
multi_dim_outfile = "/tmp/out-multi-dimensional.nc"
multi_dimensional_run.to_nc(
multi_dim_outfile,
dimensions=("region", "model", "scenario", "paraset_id"),
)
# NBVAL_IGNORE_OUTPUT
xr.load_dataset(multi_dim_outfile)
# NBVAL_IGNORE_OUTPUT
multi_dim_loaded_co2_conc = ScmRun.from_nc(multi_dim_outfile).filter(
"Atmospheric Concentrations|CO2"
)
seaborn_df = multi_dim_loaded_co2_conc.long_data()
seaborn_df.head()
# NBVAL_IGNORE_OUTPUT
sns.relplot(
data=seaborn_df,
x="time",
y="value",
units="paraset_id",
estimator=None,
hue="scenario",
style="model",
col="region",
col_wrap=3,
kind="line",
)
###Output
_____no_output_____ |
notebooks/MAU.ipynb | ###Markdown
Plotting
###Code
from bokeh.io import show
from bokeh.io import output_file
from bokeh.plotting import figure
# `months`, `years` and `maus` are assumed to be defined in earlier cells
# (month labels, the list of years and per-year lists of monthly active user counts).
for i in range(len(maus)):
year = years[i]
count = maus[i]
output_file('mau_{}.html'.format(year))
p = figure(x_range=months, plot_height=400, plot_width=600,
title= "Monthly Active Users for Year {}".format(year))
p.xaxis.major_label_text_font_size = "10pt"
p.vbar(x=months, top=count, width=0.3)
p.xgrid.grid_line_color = None
p.y_range.start = 0
show(p)
output_file('mau.html')
p = figure(x_range=months, plot_height=1000, plot_width=1600,
title= "Monthly Active Users")
p.xaxis.major_label_text_font_size = "14pt"
# NOTE: `count` is left over from the last loop iteration above (11 monthly values),
# while `months` has 12 labels, hence the column-length warning in the output below.
p.vbar(x=months, top=count, width=0.5)
p.xgrid.grid_line_color = None
p.y_range.start = 0
show(p)
###Output
/home/renjie/anaconda3/lib/python3.6/site-packages/bokeh/models/sources.py:110: BokehUserWarning: ColumnDataSource's columns must be of the same length. Current lengths: ('top', 11), ('x', 12)
"Current lengths: %s" % ", ".join(sorted(str((k, len(v))) for k, v in data.items())), BokehUserWarning))
###Markdown
Others
###Code
# cursor = hn_2018.find({'time':{'$gt' : "1517443200"}, 'time':{'$lt': "1519862400"}})
# cursor = hn_2018.find()
# temp = []
# for i in cursor:
# if int(i['time']) > 1517443200 and int(i['time']) < 1519862400:
# temp.append(i)
# usr_profiles = set([i['by'] for i in temp])
# len(usr_profiles), len(temp)
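# Note on the first commented query above: the dict literal repeats the 'time'
# key, so only the second condition ({'$lt': ...}) survives, which is one reason
# it did not behave as intended. A single range condition combines both bounds,
# e.g. (hypothetical, and assuming 'time' values have a consistently comparable type):
# cursor = hn_2018.find({'time': {'$gt': "1517443200", '$lt': "1519862400"}})
# With string-typed epoch seconds this relies on lexicographic order (fine while
# the strings have equal length); storing integers would be more robust.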
# fix this
count = []
for i in range(1, 12):  # NOTE: range(1, 12) only covers January-November
    cursor = collections[0].find()
    print("month: {}".format(i))
    start_month = int((date(2016,i,1)- date(1970,1,1)).total_seconds())
    end_month = int((date(2016,i+1,1)- date(1970,1,1)).total_seconds())
    temp = []
    for doc in cursor:  # separate name so the month index `i` is not shadowed
        if int(doc['time']) > start_month and int(doc['time']) < end_month:
            temp.append(doc)
    usr_profiles = set([d['by'] for d in temp])
print("active users per month: {}".format(len(usr_profiles)))
count.append(len(usr_profiles))
###Output
month: 1
active users per month: 31115
month: 2
active users per month: 32010
month: 3
active users per month: 33574
month: 4
active users per month: 33165
month: 5
active users per month: 33096
month: 6
active users per month: 33035
month: 7
active users per month: 31478
month: 8
active users per month: 32708
month: 9
active users per month: 33504
month: 10
active users per month: 35150
month: 11
active users per month: 36364
|
classifier.ipynb | ###Markdown
Data Preparation
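The first cell below uses `Path`, `pd`, `plt` and several sklearn classes whose import cell is not shown here; the following is a hedged reconstruction of the assumed imports based purely on how the names are used.
###Code
# Assumed imports -- reconstructed from usage in the cells below
from pathlib import Path

import pandas as pd
import matplotlib.pyplot as plt

from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
###Output
_____no_output_____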
###Code
path = Path("crypto_data.csv")
df = pd.read_csv(path)
df
df = df[df['IsTrading']==True]
#drop the IsTrading column from the dataframe.
df = df.drop(columns=['IsTrading'])
#Remove all rows that have at least one null value.
df = df.dropna(how='any',axis=0)
#TotalCoinsMined > 0
df = df[df['TotalCoinsMined']>0]
#delete the CoinName
df = df.drop(columns=['CoinName'])
df = df.drop(columns=['Unnamed: 0'])
df
algorithms = {}
algorithmsList = df['Algorithm'].unique().tolist()
for i in range(len(algorithmsList)):
algorithms[algorithmsList[i]] = i
proofType = {}
proofTypeList = df['ProofType'].unique().tolist()
for i in range(len(proofTypeList)):
proofType[proofTypeList[i]] = i
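# Side note (hedged alternative): sklearn's LabelEncoder builds a similar
# string-to-integer mapping in one step, e.g.
#   from sklearn.preprocessing import LabelEncoder
#   df['Algorithm'] = LabelEncoder().fit_transform(df['Algorithm'])
#   df['ProofType'] = LabelEncoder().fit_transform(df['ProofType'])
# The manual dictionaries above are kept since they preserve first-seen ordering
# of the categories (LabelEncoder sorts them alphabetically instead).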
df = df.replace(({'Algorithm':algorithms}))
df = df.replace(({'ProofType':proofType}))
df.dtypes
# Standardize data with StandardScaler
scaler = StandardScaler()
scaled_data = scaler.fit_transform(df[['TotalCoinsMined', 'TotalCoinSupply']])
df1 = pd.DataFrame(scaled_data, columns=df.columns[2:])
df1['Algorithm']=df['Algorithm'].values
df1['ProofType']=df['ProofType'].values
df1
###Output
_____no_output_____
###Markdown
Dimensionality Reduction
###Code
#PCA: n_components=0.99 keeps enough principal components to explain 99% of the
# variance, which for this dataset turns out to be two (matching the column names below)
pca = PCA(n_components=.99)
df_pca = pca.fit_transform(df1)
df_pca = pd.DataFrame(
data=df_pca, columns=["principal component 1", "principal component 2"]
)
df_pca.head()
pca.explained_variance_ratio_.sum()
###Output
_____no_output_____
###Markdown
Cluster Analysis with k-Means
###Code
#Create an elbow plot
inertia = []
k = list(range(1, 11))
# Looking for the best k
for i in k:
km = KMeans(n_clusters=i, random_state=1234)
km.fit(df1)
inertia.append(km.inertia_)
# Define a DataFrame to plot the Elbow Curve using hvPlot
elbow_data = {"k": k, "inertia": inertia}
df_elbow = pd.DataFrame(elbow_data)
plt.plot(df_elbow['k'], df_elbow['inertia'])
plt.xticks(range(1,11))
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()
df_pca = pd.DataFrame(
data=df_pca,
columns=["principal component 1", "principal component 2"],
)
df_pca.head()
# Initialize the K-Means model
model = KMeans(n_clusters=2, random_state=1234)
# Fit the model
model.fit(df_pca)
# Predict clusters
predictions = model.predict(df_pca)
# Add the predicted class columns
df_pca["class"] = model.labels_
df_pca.head()
df1
model.fit(df1)
# Predict clusters
predictions = model.predict(df1)
df2 = df1
# Add the predicted class columns
df2["class"] = model.labels_
df3 = df2.drop(['class'], axis=1)
labels = df2['class']
df3
# TSNE
tsne = TSNE(learning_rate=35)
# Reduce dimensions
tsne_features = tsne.fit_transform(df3)
tsne_features.shape
# Prepare to plot the dataset
# The first column of transformed features
df3['x'] = tsne_features[:,0]
# The second column of transformed features
df3['y'] = tsne_features[:,1]
# Visualize the clusters
plt.scatter(df3['x'], df3['y'])
plt.show()
labels.value_counts()
# Visualize the clusters with color
plt.scatter(df3['x'], df3['y'], c=labels)
plt.show()
###Output
_____no_output_____
###Markdown
Preprocessing
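The training cells below reference several names whose import cell is not included here. The third-party ones can be guessed from usage (hedged reconstruction below); the project-specific pieces (`preprocess_dataset`, `GraphNN_KNN_v3`, `EdgeClassifier_v3`, `FocalLoss`, `RunningAverageMeter`, `EarlyStopping_`, `str_to_class`, `predict_one_shower`) come from this project's own modules and are not reconstructed here.
###Code
# Assumed third-party imports -- reconstructed from usage, not from the original notebook
from comet_ml import Experiment

import numpy as np
import torch
from tqdm import tqdm

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, average_precision_score

# The loader is applied to a list of graph objects with .x and .to(device), which
# suggests torch_geometric's DataLoader (an assumption; newer versions expose it
# as torch_geometric.loader.DataLoader instead)
from torch_geometric.data import DataLoader
###Output
_____no_output_____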
###Code
datafile='train1.pt'
project_name='em-showers-network-training'
work_space='ketrint'
experiment = Experiment('6O55PoJt4tkp9LyupIE86eikH', project_name=project_name, workspace=work_space)
device = torch.device('cuda')
showers = preprocess_dataset(datafile)
k = showers[0].x.shape[1]
print(k)
epochs=10
learning_rate=1e-3
dim_out=10
threshold =0.9
graph_embedder='GraphNN_KNN_v3'
edge_classifier='EdgeClassifier_v3'
showers
showers_train, showers_test = train_test_split(showers, random_state=1337)
train_loader = DataLoader(showers_train, batch_size=1, shuffle=True)
test_loader = DataLoader(showers_test, batch_size=1, shuffle=True)
graph_embedder = str_to_class(graph_embedder)(dim_out=dim_out, k=k).to(device)
edge_classifier = str_to_class(edge_classifier)(dim_out=dim_out).to(device)
criterion = FocalLoss(gamma=2.)
optimizer = torch.optim.Adam(list(graph_embedder.parameters()) + list(edge_classifier.parameters()),
lr=learning_rate)
loss_train = RunningAverageMeter()
loss_test = RunningAverageMeter()
roc_auc_test = RunningAverageMeter()
pr_auc_test = RunningAverageMeter()
acc_test = RunningAverageMeter()
class_disbalance = RunningAverageMeter()
experiment = Experiment('6O55PoJt4tkp9LyupIE86eikH', project_name=project_name, workspace=work_space)
early_stopping = EarlyStopping_(patience=100, verbose=True)
for _ in tqdm(range(epochs)):
for shower in train_loader:
shower = shower.to(device)
edge_labels_true, edge_labels_predicted = predict_one_shower(shower,
graph_embedder=graph_embedder,
edge_classifier=edge_classifier)
# calculate the batch loss
loss = criterion(edge_labels_predicted, edge_labels_true.float())
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_train.update(loss.item())
class_disbalance.update((edge_labels_true.sum().float() / len(edge_labels_true)).item())
y_true_list = []
y_pred_list = []
for shower in test_loader:
shower = shower.to(device)
edge_labels_true, edge_labels_predicted = predict_one_shower(shower,
graph_embedder=graph_embedder,
edge_classifier=edge_classifier)
# calculate the batch loss
loss = criterion(edge_labels_predicted, edge_labels_true.float())
y_true, y_pred = edge_labels_true.detach().cpu().numpy(), edge_labels_predicted.detach().cpu().numpy()
y_true_list.append(y_true)
y_pred_list.append(y_pred)
acc = accuracy_score(y_true, y_pred.round())
roc_auc = roc_auc_score(y_true, y_pred)
pr_auc = average_precision_score(y_true, y_pred)
loss_test.update(loss.item())
acc_test.update(acc)
roc_auc_test.update(roc_auc)
pr_auc_test.update(pr_auc)
class_disbalance.update((edge_labels_true.sum().float() / len(edge_labels_true)).item())
experiment_key = experiment.get_key()
eval_loss = loss_test.val
early_stopping(eval_loss, graph_embedder, edge_classifier, experiment_key)
if early_stopping.early_stop:
print("Early stopping")
break
experiment.log_metric('loss_test', loss_test.val)
experiment.log_metric('acc_test', acc_test.val)
experiment.log_metric('roc_auc_test', roc_auc_test.val)
experiment.log_metric('pr_auc_test', pr_auc_test.val)
experiment.log_metric('class_disbalance', class_disbalance.val)
y_true = np.concatenate(y_true_list)
y_pred = np.concatenate(y_pred_list)
# load the last checkpoint with the best model
graph_embedder.load_state_dict(torch.load("graph_embedder_{}.pt".format(experiment_key)))
edge_classifier.load_state_dict(torch.load("edge_classifier_{}.pt".format(experiment_key)))
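# Aggregate test metrics over the concatenated per-shower predictions (a sketch;
# note y_true/y_pred come from the last epoch, while the checkpoints just loaded
# correspond to the best epoch found by early stopping).
print('final test roc_auc:', roc_auc_score(y_true, y_pred))
print('final test pr_auc:', average_precision_score(y_true, y_pred))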
###Output
0%| | 0/10 [00:00<?, ?it/s]
###Markdown
Import data
###Code
import tensorflow as tf
import os
os.environ['OMP_NUM_THREADS'] = '1'
import pandas as pd
import numpy as np
ins_df = pd.read_csv('data/instagram_data.csv')
ins_df = ins_df[ins_df['Contents'].notna()] # We might want to do something different here - SN
ins_df
from sklearn.utils import shuffle
anger_df = pd.read_csv('data/twitter/anger.tsv', sep='\t').drop(columns=['index', 'intensity'])
fear_df = pd.read_csv('data/twitter/fear.tsv', sep='\t').drop(columns=['index', 'intensity'])
joy_df = pd.read_csv('data/twitter/joy.tsv', sep='\t').drop(columns=['index', 'intensity'])
sadness_df = pd.read_csv('data/twitter/sadness.tsv', sep='\t').drop(columns=['index', 'intensity'])
emotion_df = pd.concat([anger_df, fear_df, joy_df, sadness_df])
emotion_df = shuffle(emotion_df)
emotion_df
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
!pip install --upgrade transformers datasets emoji deep-translator
import torch
from transformers import AutoTokenizer
from deep_translator import GoogleTranslator
import emoji
translator = GoogleTranslator(source='auto', target='en')
# Note: How we preprocess may depend on model we use to transfer.
# This comes from https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment
def preprocess(text):
new_text = []
for t in text.split(" ")[:300]:
t = '' if t.startswith('@') and len(t) > 1 else t
t = '' if t.startswith('http') else t
t = t.replace('\r', '')
t = t.replace("\n", " ") # Remove newlines
# Remove hashtags but keep words -- ended up leaving them as it seemed to perform better - SN
t = ' #'.join(t.split("#")) if '#' in t else t
# change emojis to be explanation of emoji
if emoji.get_emoji_regexp().search(t) != None:
t = ' '.join(emoji.demojize(i) for i in emoji.get_emoji_regexp().split(t))
t = t.replace("_"," ")
t = t.replace("-"," ")
t = t.replace(":"," ")
# t = emoji.get_emoji_regexp().sub("", t)
t = " ".join(t.split()) # Remove excess whitespace
new_text.append(t)
cleaned_text = " ".join(new_text)
try:
cleaned_text = translator.translate(cleaned_text) # Translate non english to english
except Exception as e:
print(e)
if cleaned_text is None or len(cleaned_text.split()) == 0: return text # return original text if our cleaning made empty string
return cleaned_text
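# Quick sanity check of the cleaning pipeline (a sketch; the translation step
# needs network access for GoogleTranslator, and translation errors are caught
# inside preprocess).
print(preprocess("So happy today 😀 #grateful @friend http://example.com"))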
# Load data into numpy arrays
X = np.array(emotion_df['tweet'])
Y = np.array(emotion_df['category'])
Y_ints = np.array(pd.factorize(emotion_df['category'])[0])
X_ins = np.array(ins_df['Contents'])
east_asian = np.array(ins_df['Q5A. If yes to Q5, what type of Asian'] == 1, dtype=int)
# Preprocess text
for i in range(len(X)): X[i] = preprocess(X[i])
for i in range(len(X_ins)): X_ins[i] = preprocess(X_ins[i])
# Split into train/val/test sets
TRAIN_PCT, VAL_PCT, TEST_PCT = 0.6, 0.2, 0.2
train_idx = int(TRAIN_PCT * len(X))
val_idx = train_idx + int(VAL_PCT * len(X))
for i in range(30):
print(np.array(ins_df['Contents'])[i])
print('---')
print(X_ins[i])
print()
print()
# print(i)
X_train, Y_train = X[:train_idx], Y_ints[:train_idx]
X_val, Y_val = X[train_idx:val_idx], Y_ints[train_idx:val_idx]
X_test, Y_test = X[val_idx:], Y_ints[val_idx:]
# TOKENIZER_MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
# TOKENIZER_MODEL = "digitalepidemiologylab/covid-twitter-bert-v2"
TOKENIZER_MODEL = "roberta-base"
# TOKENIZER_MODEL = 'bert-base-uncased'
# TOKENIZER_MODEL = 'siebert/sentiment-roberta-large-english'
# TOKENIZER_MODEL = 'bhadresh-savani/albert-base-v2-emotion'
# TOKENIZER_MODEL = 'bhadresh-savani/roberta-base-emotion'
# TOKENIZER_MODEL = 'bhadresh-savani/bert-base-uncased-emotion'
# TOKENIZER_MODEL = 'bhadresh-savani/distilbert-base-uncased-emotion'
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_MODEL)
# Tokenize the data
# if TOKENIZER_MODEL == 'gpt2':
# tokenizer.add_special_tokens({'pad_token': '[PAD]'})
X_ins_enc = tokenizer(list(X_ins), return_tensors='pt', padding=True, truncation=True)
X_train_enc = tokenizer(list(X_train), return_tensors='pt', padding=True, truncation=True, max_length=X_ins_enc['input_ids'].shape[1])
X_val_enc = tokenizer(list(X_val), return_tensors='pt', padding=True, truncation=True, max_length=X_ins_enc['input_ids'].shape[1])
X_test_enc = tokenizer(list(X_test), return_tensors='pt', padding=True, truncation=True, max_length=X_ins_enc['input_ids'].shape[1])
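# Sanity check (a sketch): every split is padded/truncated to the Instagram
# encoding's sequence length, so the shapes should agree on dimension 1.
print(X_ins_enc['input_ids'].shape, X_train_enc['input_ids'].shape,
      X_val_enc['input_ids'].shape, X_test_enc['input_ids'].shape)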
###Output
_____no_output_____
###Markdown
Model Definition
###Code
# TODO: define our machine learning model, from our discussion it we can try deep learning models
import os
from torch.utils.data import (
Dataset,
DataLoader,
RandomSampler,
SequentialSampler
)
import math
from transformers import (
BertPreTrainedModel,
RobertaConfig,
RobertaTokenizerFast,
AutoModelForSequenceClassification
)
from transformers.optimization import (
AdamW,
get_linear_schedule_with_warmup
)
from scipy.special import softmax
from torch.nn import CrossEntropyLoss
from sklearn.metrics import (
confusion_matrix,
classification_report,
matthews_corrcoef,
roc_curve,
auc,
average_precision_score,
accuracy_score
)
from transformers.models.roberta.modeling_roberta import (
RobertaClassificationHead,
RobertaConfig,
RobertaModel,
)
from transformers import AutoModel
from torch import nn
num_labels = 4
if torch.cuda.is_available():
device = torch.device("cuda")
print('Number of GPUs: ',torch.cuda.device_count())
else:
print('No GPU, using CPU.')
device = torch.device("cpu")
max_seq_length = 128
train_batch_size = 8
test_batch_size = 8
warmup_ratio = 0.06
weight_decay=0.0
gradient_accumulation_steps = 1
num_train_epochs = 5
learning_rate = 1e-05
adam_epsilon = 1e-08
hidden_units = 512
MODEL, pretrained_output_size = "roberta-base", 768
class RobertaClassification(BertPreTrainedModel):
def __init__(self, config, MODEL=None, num_labels=None, pretrained_output_size=None, hidden_units=None):
super(RobertaClassification, self).__init__(config)
self.num_labels = config.num_labels
self.roberta = RobertaModel(config)
self.classifier = RobertaClassificationHead(config)
def forward(self, input_ids, attention_mask, labels):
outputs = self.roberta(input_ids,attention_mask=attention_mask)
sequence_output = outputs[0]
logits = self.classifier(sequence_output)
outputs = (logits,) + outputs[2:]
loss_fct = CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
config_class = RobertaConfig
model_class = RobertaClassification
config = config_class.from_pretrained(MODEL, num_labels=num_labels)
model = model_class.from_pretrained(MODEL, config=config)
print('Model=\n',model,'\n')
# If you want to use a different model than roberta-base above
# Uncomment MODEL you want
# MODEL, pretrained_output_size = "roberta-base", 768
# MODEL, pretrained_output_size = 'bhadresh-savani/albert-base-v2-emotion', 6
MODEL, pretrained_output_size = 'bhadresh-savani/roberta-base-emotion', 6
# MODEL, pretrained_output_size = 'bhadresh-savani/bert-base-uncased-emotion', 6
# MODEL, pretrained_output_size = "bhadresh-savani/distilbert-base-uncased-emotion", 6
# MODEL, pretrained_output_size = "cardiffnlp/twitter-roberta-base-sentiment", 768
# MODEL, pretrained_output_size = "bert-base-uncased", 768
# MODEL, pretrained_output_size = "siebert/sentiment-roberta-large-english", 1024
# MODEL, pretrained_output_size = "digitalepidemiologylab/covid-twitter-bert-v2", 1024
assert MODEL == TOKENIZER_MODEL
class Model(nn.Module):
def __init__(self, config, MODEL, num_labels, pretrained_output_size, hidden_units):
super(Model, self).__init__()
self.num_labels = num_labels
self.pretrained_model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=6)
self.linear1 = nn.Linear(pretrained_output_size, self.num_labels)
def forward(self, input_ids, attention_mask, labels):
output = self.pretrained_model(input_ids, attention_mask=attention_mask)
out = self.linear1(output.logits)
loss_fct = CrossEntropyLoss()
loss = loss_fct(out.view(-1, self.num_labels), labels.view(-1))
return loss, out
model = Model(None, MODEL, num_labels, pretrained_output_size, hidden_units)
print('Model=\n',model,'\n')
class MyClassificationDataset(Dataset):
def __init__(self, data,y):
text = data
labels=y
self.examples = text
# targets = tr.transform(labels)
self.labels = torch.as_tensor(labels, dtype=torch.long)
def __len__(self):
return len(self.examples["input_ids"])
def __getitem__(self, index):
return {key: self.examples[key][index] for key in self.examples}, self.labels[index]
train_dataset = MyClassificationDataset(X_train_enc,Y_train)
val_dataset = MyClassificationDataset(X_val_enc, Y_val)
test_dataset = MyClassificationDataset(X_test_enc, Y_test)
ins_dataset = MyClassificationDataset(X_ins_enc, [0.] * len(X_ins))
train_batch_size = 8
val_batch_size = 8
test_batch_size = 8
def get_inputs_dict(batch):
inputs = {key: value.squeeze(1).to(device) for key, value in batch[0].items()}
inputs["labels"] = batch[1].to(device)
return inputs
train_sampler = RandomSampler(train_dataset)
train_dataloader = DataLoader(train_dataset,sampler=train_sampler,batch_size=train_batch_size)
val_sampler = SequentialSampler(val_dataset)
val_dataloader = DataLoader(val_dataset, sampler=val_sampler, batch_size=val_batch_size)
test_sampler = SequentialSampler(test_dataset)
test_dataloader = DataLoader(test_dataset, sampler=test_sampler, batch_size=test_batch_size)
ins_sampler = SequentialSampler(ins_dataset)
ins_dataloader = DataLoader(ins_dataset, sampler=ins_sampler, batch_size=test_batch_size)
#Extract a batch as sanity-check
# batch = get_inputs_dict(next(iter(train_dataloader)))
# input_ids = batch['input_ids'].to(device)
# attention_mask = batch['attention_mask'].to(device)
# labels = batch['labels'].to(device)
# print(batch)
def setup_opts(model):
t_total = len(train_dataloader) // gradient_accumulation_steps * num_train_epochs
optimizer_grouped_parameters = []
custom_parameter_names = set()
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters.extend(
[
{
"params": [
p
for n, p in model.named_parameters()
if n not in custom_parameter_names and not any(nd in n for nd in no_decay)
],
"weight_decay": weight_decay,
},
{
"params": [
p
for n, p in model.named_parameters()
if n not in custom_parameter_names and any(nd in n for nd in no_decay)
],
"weight_decay": 0.0,
},
]
)
warmup_steps = math.ceil(t_total * warmup_ratio)
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=adam_epsilon)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=t_total)
return optimizer, scheduler
optimizer, scheduler = setup_opts(model)
###Output
_____no_output_____
###Markdown
Training
###Code
# TODO: train our model using the loaded data
model.to(device)
model.zero_grad()
def log_metrics(y, y_preds):
    # Return the report so callers can both print it and keep it
    return classification_report(y, y_preds, target_names=['Joy', 'Fear', 'Sadness', 'Anger'])
def train_epochs(num_train_epochs):
avg_loss=[]
avg_val_loss=[]
for epoch in range(num_train_epochs):
model.train()
epoch_loss = []
for batch in train_dataloader:
batch = get_inputs_dict(batch)
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
labels = batch['labels'].to(device)
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
loss = outputs[0]
loss.backward()
optimizer.step()
scheduler.step()
model.zero_grad()
epoch_loss.append(loss.item())
#evaluate model with test_df at the end of the epoch.
eval_loss = 0.0
nb_eval_steps = 0
n_batches = len(val_dataloader)
preds = np.empty((len(val_dataset), num_labels))
out_label_ids = np.empty((len(val_dataset)))
model.eval()
for i,test_batch in enumerate(val_dataloader):
with torch.no_grad():
test_batch = get_inputs_dict(test_batch)
input_ids = test_batch['input_ids'].to(device)
attention_mask = test_batch['attention_mask'].to(device)
labels = test_batch['labels'].to(device)
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
tmp_eval_loss, logits = outputs[:2]
eval_loss += tmp_eval_loss.item()
nb_eval_steps += 1
start_index = test_batch_size * i
end_index = start_index + test_batch_size if i != (n_batches - 1) else len(test_dataset)
preds[start_index:end_index] = logits.detach().cpu().numpy()
out_label_ids[start_index:end_index] = test_batch["labels"].detach().cpu().numpy()
eval_loss = eval_loss / nb_eval_steps
model_outputs = preds
preds = np.argmax(preds, axis=1)
#result, wrong = compute_metrics(preds, model_outputs, out_label_ids)
epoch_loss=np.mean(epoch_loss)
print('epoch',epoch,'Training avg loss',epoch_loss)
print('epoch',epoch,'Testing avg loss',eval_loss)
print('---------------------------------------------------\n')
avg_loss.append(epoch_loss)
avg_val_loss.append(eval_loss)
report=log_metrics(Y_val, preds)
print(report)
avg_loss=np.mean(avg_loss)
avg_val_loss=np.mean(avg_val_loss)
accuracy=accuracy_score(Y_val, preds)
return avg_loss,avg_val_loss,report,accuracy
###Output
_____no_output_____
###Markdown
Performance Evaluation
###Code
def test():
model.to(device)
eval_loss = 0.0
nb_eval_steps = 0
n_batches = len(test_dataloader)
preds = np.empty((len(test_dataset), num_labels))
out_label_ids = np.empty((len(test_dataset)))
model.eval()
for i,test_batch in enumerate(test_dataloader):
with torch.no_grad():
test_batch = get_inputs_dict(test_batch)
input_ids = test_batch['input_ids'].to(device)
attention_mask = test_batch['attention_mask'].to(device)
labels = test_batch['labels'].to(device)
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
tmp_eval_loss, logits = outputs[:2]
eval_loss += tmp_eval_loss.item()
nb_eval_steps += 1
start_index = test_batch_size * i
end_index = start_index + test_batch_size if i != (n_batches - 1) else len(test_dataset)
preds[start_index:end_index] = logits.detach().cpu().numpy()
out_label_ids[start_index:end_index] = test_batch["labels"].detach().cpu().numpy()
eval_loss = eval_loss / nb_eval_steps
model_outputs = preds
preds = np.argmax(preds, axis=1)
print("classification report for test set")
print(log_metrics(Y_test, preds))
accuracy=accuracy_score(Y_test, preds)
return eval_loss,accuracy
train_loss=[]
val_loss=[]
val_acc=[]
test_loss=[]
test_acc=[]
for epoch in range(2,12,2):
print("train with epochs=",epoch)
model = model_class.from_pretrained(MODEL, config=config)
# model = Model(None, MODEL, num_labels, pretrained_output_size, hidden_units)
model.to(device)
optimizer, scheduler = setup_opts(model)
avg_loss,avg_val_loss,report,accuracy=train_epochs(epoch)
train_loss.append(avg_loss)
val_loss.append(avg_val_loss)
val_acc.append(accuracy)
testloss,testacc=test()
test_loss.append(testloss)
test_acc.append(testacc)
import matplotlib.pyplot as plt
x=[2,4,6,8,10]
plt.figure(figsize=(10,5))
plt.xlabel('epoch')
plt.ylabel('Loss')
plt.title("Loss for twitter data")
plt.plot(x,train_loss,marker='o',label='train')
plt.plot(x,val_loss,marker='o',label='validation')
plt.plot(x,test_loss,marker='o',label='test')
plt.legend()
plt.figure(figsize=(10,5))
plt.xlabel('epoch')
plt.ylabel('Accuracy')
plt.title("Accuracy for twitter data")
plt.plot(x,val_acc,marker='o',label='validation')
plt.plot(x,test_acc,marker='o',label='test')
plt.legend()
###Output
_____no_output_____
###Markdown
Prediction
###Code
# TODO: predict the submission data
nb_eval_steps = 0
n_batches = len(ins_dataloader)
preds = np.empty((len(ins_dataset), num_labels))
model.eval()
for i,test_batch in enumerate(ins_dataloader):
with torch.no_grad():
test_batch = get_inputs_dict(test_batch)
input_ids = test_batch['input_ids'].to(device)
attention_mask = test_batch['attention_mask'].to(device)
labels = test_batch['labels'].to(device)
outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
_, logits = outputs[:2]
nb_eval_steps += 1
start_index = test_batch_size * i
end_index = start_index + test_batch_size if i != (n_batches - 1) else len(ins_dataset)
preds[start_index:end_index] = logits.detach().cpu().numpy()
model_outputs = preds
preds = np.argmax(preds, axis=1)
np.savetxt('instagram_predictions-with-hash.txt', preds) # We might want to do something different here - SN
from scipy.stats import pearsonr
# from scipy.stats import spearmanr
emotions = ['Joy', 'Fear', 'Sadness', 'Anger']
preds_one_hot = np.zeros((len(preds), num_labels))  # size by num_labels so classes never predicted still get a column
preds_one_hot[np.arange(len(preds)),preds] = 1
for i in range(num_labels):
corr, _ = pearsonr(preds_one_hot[:,i], east_asian)
print('Correlation with {}: {}'.format(emotions[i], corr))
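# Another view of the same relationship (a sketch): a contingency table of the
# predicted emotion against the East-Asian indicator built earlier.
print(pd.crosstab(pd.Series(preds, name='emotion'), pd.Series(east_asian, name='east_asian')))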
for i in range(20):
print('Prediction: {} \nProcessed:\n{}\nUnprocessed:\n{}\n\n'.format(emotions[preds[i]], X_ins[i],np.array(ins_df['Contents'])[i]))
###Output
Prediction: Anger
Processed:
#covid #covid2020 #covidvirus #virus #coronavairus #coronavirus #coronavírus #coronavirüs #blackandwhite #blackandwhiteportrait #blackandwhitephoto #blackandwhite_photos #lockdown #lockdown2020 #lockdownlife #lockdownitaly #italylockdown #lockdowndiaries #lockdownactivities #stayathome #staysafe #stayhome #iorestoacasa #myhome #covid19 #covıd19 #covi̇d_19 #coviditalia # black heart # black heart black heart black heart
Unprocessed:
#covid #covid2020 #covidvirus #virus #coronavairus #coronavirus #coronavírus #coronavirüs #blackandwhite #blackandwhiteportrait #blackandwhitephoto #blackandwhite_photos #lockdown #lockdown2020 #lockdownlife #lockdownitaly #italylockdown #lockdowndiaries #lockdownactivities #stayathome #staysafe #stayhome #iorestoacasa #myhome #covid19 #covıd19 #covi̇d_19 #coviditalia #🖤 #🖤🖤🖤
Prediction: Fear
Processed:
Well this is the final mural of my trip in Australia, a very weird trip, to be honest I couldn’t connect with my painting, at the beginning it was a popular psicosis which looked unreal, then the airline call me with news that my flights was rebooked for 4 month later, I still had a lot to do, people to meet and paint to make, it was super sad when I had to buy another thicket a week before of planed and run out of the country, the same day they closed the border, I like to think that everything happens for reason, everything is meant to be ... if this is my way to start the painting tour this year I don’t really know what to expect. When I painted the mural I meant to make something for the woman, for warriors who don’t want anybody to tell them what to do or say, now it feels empty cos the world is thinking on a different thing. Thank you to my new friend who really helped and connect with me, what a beautiful city melbourne, what a beautiful country Australia, I hope to be back some day for more projects. Be safe and do what you’re told, this is not a joke. #graff #graffiti #mural #muralart #muralgraffiti #streetart #artecallejero #portrait #retraro #painting #pintura #realismo #realism #hiperrealismo #hyperrealism #portrait #retrato #onlyspraypaint #onlyspray #noproyector #sinproyector #cobreart #melbourne #australia #graffitiaustralia #corona #coronavirus
Unprocessed:
Well this is the final mural of my trip in Australia, a very weird trip, to be honest I couldn’t connect with my painting, at the beginning it was a popular psicosis which looked unreal, then the airline call me with news that my flights was rebooked for 4 month later, I still had a lot to do, people to meet and paint to make, it was super sad when I had to buy another thicket a week before of planed and run out of the country, the same day they closed the border, I like to think that everything happens for reason, everything is meant to be ... if this is my way to start the painting tour this year I don’t really know what to expect. When I painted the mural I meant to make something for the woman, for warriors who don’t want anybody to tell them what to do or say, now it feels empty cos the world is thinking on a different thing. Thank you to my new friend @r_o_n_e who really helped and connect with me, what a beautiful city melbourne, what a beautiful country Australia, I hope to be back some day for more projects. Be safe and do what you’re told, this is not a joke.
#graff#graffiti#mural#muralart#muralgraffiti#streetart#artecallejero#portrait#retraro#painting#pintura#realismo#realism#hiperrealismo#hyperrealism#portrait#retrato#onlyspraypaint#onlyspray#noproyector#sinproyector#cobreart#melbourne#australia#graffitiaustralia#corona#coronavirus
Prediction: Anger
Processed:
We have arrived!!! Keep following the movement... There is a lot that we are preparing for you!!!
Unprocessed:
Chegamos !!! Vão seguindo o movimento... Tem muita coisa que estamos preparando pra vocês!!!
Prediction: Joy
Processed:
smiling cat with heart eyes smiling cat with heart eyes smiling cat with heart eyes smiling cat with heart eyes smiling cat with heart eyes
Unprocessed:
😻😻😻😻😻
Prediction: Anger
Processed:
EN MI DOMICILIO house with garden #quedateencasa mobile phone with arrow 0414-464.18.89. USA TAPA BOCAS face with medical mask. Safe and comfortable mouth covers A bikini gift Thank you. Even in our homes we should use a mouth cover, when using some cleaning products such as washing powder for clothes, chlorine and others with strong odors. This in order to prevent common flu or allergies that warrant going to the doctor. . . . #quedatencasa #usatapabocas #lavatelasmanos #coronavirus
Unprocessed:
EN MI DOMICILIO 🏡
#quedateencasa
📲 0414-464.18.89
.
USA TAPA BOCAS😷
.
Tapa bocas seguros y cómodos
Un obsequio de @strongirlslingerie 👙
Graciasss
.
Aún en nuestro hogares deberíamos usa tapa bocas,
al usar algunos productos de limpieza como jabón en polvo para ropa, cloro y otros con olores fuertes. Esto con el fin de prevenir gripe común o alergias que amerite salir al médico.
.
.
.
#quedatencasa #usatapabocas #lavatelasmanos #coronavirus
Prediction: Anger
Processed:
#covid # covid19 #coronavirus #secuide # cuidedequemvocêama #fiqueemcasa
Unprocessed:
#covid #covid19 #coronavirus #secuide #cuidedequemvocêama #fiqueemcasa
Prediction: Anger
Processed:
Life brings us moments of silence and isolation, let's take this opportunity to reflect on the really important things in life #serendipiacovid #duel #personal growth #coronavirus
Unprocessed:
La vida nos trae momentos de silencio y aislamiento, aprovechemos para reflexionar sobre las cosas realmente importantes de la vida #serendipiacovid #duelo #crecimientopersonal #coronavirus
Prediction: Fear
Processed:
Self isolation nachos! The homemade queso mixes nicely with the loneliness to create a subtle aroma of doom . . . . . #nachos #vegetarian #vegetariannachos #food #foodporn #isolation #coronavirus #queso #homemade #cheese #chips #quac #beans #spicy #covid_19
Unprocessed:
Self isolation nachos! The homemade queso mixes nicely with the loneliness to create a subtle aroma of doom
.
.
.
.
.
#nachos #vegetarian #vegetariannachos #food #foodporn #isolation #coronavirus #queso #homemade #cheese #chips #quac #beans #spicy #covid_19
Prediction: Sadness
Processed:
Toilet paper factory #lowpoly #lowpolyart #illustration #3dillustration #blender3d #blender #b3d #3D #3Dmodel #3dart #digitalart #3drender #render #rendering #artwork #cyclesrender #3dartis #3ddesign #design #Toiletpaperfactory #factory #Toiletpaper #pandemic #pandemia #coronavirus #armament #defense #covid19 #covid_19
Unprocessed:
Toilet paper factory
#lowpoly #lowpolyart #illustration #3dillustration
#blender3d #blender #b3d #3D #3Dmodel #3dart #digitalart #3drender #render #rendering #artwork #cyclesrender #3dartis #3ddesign #design #Toiletpaperfactory #factory #Toiletpaper #pandemic #pandemia #coronavirus #armament #defense #covid19 #covid_19
Prediction: Anger
Processed:
#yo #coronavirus is winning! 11,949 and counting? #shit #socialdistancing better work. #if it doesn’t work it’s because people failed. I thought #weed be better at this...I guess not. Am I #surprised I guess not. Did I hope we could really come together..yea? Am I delusional? Hell yes! #fuckit
Unprocessed:
#yo #coronavirus is winning! 11,949 and counting? #shit #socialdistancing better work. #if it doesn’t work it’s because people failed. I thought #weed be better at this...I guess not. Am I #surprised I guess not. Did I hope we could really come together..yea? Am I delusional? Hell yes! #fuckit
Prediction: Joy
Processed:
Thank you smiling face results after one treatment star struck
Unprocessed:
Thank you @bbn_donibziee ☺️ results after one treatment 🤩
Prediction: Anger
Processed:
One Man Army 2 "Brothers" . . . #homeless #coronavirus #covid19 #streetsoftoronto #toronto #photojournalist #lifewithlouis #weareallcreators #supersweetstreet #thecreatorclass #createexplore #candidphotographer #shoot2tell #fujifilm #XSeries #fujinonglobal #fujifilm_street #thestreetphotographyhub #storyofthestreet #streetclassics #streetfinder #streethunters #streets_storytelling #storyofthestreet #streetdreamsmag #streetsgrammer #lensculturestreets #fromstreetswithlove #friendsinperson #friendsinstreets
Unprocessed:
One Man Army 2 "Brothers"
.
. .
#homeless #coronavirus #covid19 #streetsoftoronto #toronto #photojournalist #lifewithlouis #weareallcreators #supersweetstreet #thecreatorclass #createexplore #candidphotographer #shoot2tell #fujifilm #XSeries #fujinonglobal #fujifilm_street #thestreetphotographyhub #storyofthestreet #streetclassics #streetfinder #streethunters #streets_storytelling #storyofthestreet #streetdreamsmag #streetsgrammer #lensculturestreets #fromstreetswithlove #friendsinperson #friendsinstreets
Prediction: Anger
Processed:
A simple movement that was primitive reflex! Don't underestimate the importance of strengthening your feet. There's our base! If you don't have an elastic band, any cloth will do! This simple exercise is capable of: Strengthening extrinsic and intrinsic muscles of legs and feet, lubricates, prepares and protects the foot joints. It works beautifully on the three structural arches of the foot. It gives that reinforcement to the plantar fascia and improves proprioception. #institutotorteloti #itti #pilates #massage #asculpture #physiotherapy #Chinese medicine #physician #health #quarantine #stay at home #elderly #covid19 #coronavirus
Unprocessed:
Um movimento simples que era reflexo primitivo! Não subestime a importância de fortalecer os pés. Aí está a nossa base!
Caso não tenha faixa elástica, qualquer pano serve!
Esse simples exercício é capaz de: Fortalecer musculaturas extrínsecas e intrínsecas de pernas e pés, lubrifica prepara e protege as articulações podais. Trabalha lindamente os três arcos estruturais do pé. Dá aquele reforço na fáscea plantar e aprimora a propriocepção.
#institutotorteloti
#itti
#pilates
#massagem
#aculputura
#fisioterapia
#medicinachinesa
#ficadica
#saude
#quarentena
#ficaemcasa
#idosos
#covid19
#coronavirus
Prediction: Anger
Processed:
"Our New Normal" - Artwork for Turbulent Times face with medical mask face with medical mask face with medical mask #coronavirusart #coronavirus #pandemicart #pandemic #deafartist #mixedmediaartist #mixedmedia #neworleansartist #louisianaartist #blingismything
Unprocessed:
"Our New Normal" - Artwork for Turbulent Times 😷😷😷 #coronavirusart #coronavirus #pandemicart #pandemic #deafartist #mixedmediaartist #mixedmedia #neworleansartist #louisianaartist #blingismything
Prediction: Anger
Processed:
#sunrise #start #photography #myshots #sunset #covid_19 #staysafe #quedateencasa #coronaviru #createathome #creative #creativity #photooftheday camera #artwork #artist #photography #artgallery #artdaily #dailyart #artshub #streetphotographyindia #oph #staysafe #indianshutterbugs #indiaclicks #_coi #india_everyday #i_hobbygraphy #dslr_official #staysafestayhome #indianphotography #coronavirus
Unprocessed:
#sunrise #start #photography #myshots#sunset
#covid_19 #staysafe #quedateencasa #coronaviru #createathome #creative #creativity #photooftheday📷 #artwork #artist #photography #artgallery #artdaily #dailyart #artshub #streetphotographyindia #oph #staysafe #indianshutterbugs #indiaclicks #_coi #india_everyday #i_hobbygraphy #dslr_official #staysafestayhome #indianphotography #coronavirus
Prediction: Anger
Processed:
Si tú amor no vuelve broken heart @greeicy1 @mikebahia . . . . • • • face with medical mask #Quarantine #ncov2019 #fightvirus #coronavirus #CoronavirusOutbreak #toptags #covid19 #QuarantineLife #Quarantined #stayinside #socialdistancing #socialdistance #SelfQuarantine #QuarantineAndChill #stayingin #stayingathome #staytogether #staysafe #fighttogether #stayhome #QuarantineSurvival #staypositive #coronamemes #happyathome #care
Unprocessed:
Si tú amor no vuelve 💔
@greeicy1
@mikebahia .
.
.
.
•
•
•
😷 #Quarantine #ncov2019 #fightvirus #coronavirus #CoronavirusOutbreak #toptags #covid19 #QuarantineLife #Quarantined #stayinside #socialdistancing #socialdistance #SelfQuarantine #QuarantineAndChill #stayingin #stayingathome #staytogether #staysafe #fighttogether #stayhome #QuarantineSurvival #staypositive #coronamemes #happyathome #care
Prediction: Anger
Processed:
Italian online course from our volunteer Daniel Italy and Young Initiative Team Italy Turkey Due to the current safety measures, taken by the Ministry of Health, to prevent the spreading of the COVID-19, we will keep going with the Italian and Turkish Lessons on a virtual base, through Skype Our online Italian lesson with our volunteer Daniel, who came to our country from Italy! In line with the measures taken against the Covid-19 virus, we continue to conduct our Russian, Italian and Turkish lessons online. @ulusalajans #ulusalajans #onlinelearning #europeancommision #erasmusplus #stayhome #staysafe #EU #EuropeanUnion #Italy Italy #Ukraine Ukraine #Turkey Turkey #GençGirişim #YoungInitiative #Italian #languagelearning #coronavirus
Unprocessed:
Italian online course from our volunteer Daniel 🇮🇹 and Genç Girişim Team 🇮🇹🇹🇷
Due to the current safety measures, taken by the Ministry of Health, to prevent the spreading of the COVID-19, we will keep going with the Italian and Turkish Lessons on a virtual base, through Skype
İtalya'dan ülkemize gelen gönüllümüz Daniel ile çevrimiçi İtalyanca dersimiz! Covid-19 virüsüne karşı alınan önlemler doğrultusunda Rusça, İtalyanca ve Türkçe derslerimizi çevrimiçi olarak yapmaya devam ediyoruz.
@ulusalajans
#ulusalajans #onlinelearning #europeancommision #erasmusplus #stayhome #staysafe #EU #EuropeanUnion #Italy🇮🇹 #Ukraine🇺🇦 #Turkey🇹🇷 #GençGirişim #YoungInitiative #Italian #languagelearning #coronavirus
Prediction: Joy
Processed:
Miss Chelsea & Miss Kayra teach us a new game your whole family can play at home! And check back soon for basketball videos from Mr. Chase! Click the link in our bio for the full video with complete rules and to see who won! #throwbackthursday #race #gym #basketball #exercise #run #running #boysandgirlsclub #beach #coronavirus #inspiration #motivation #tbt #funny #family #videooftheday #vidoftheday #video #bestoftheday #sandiego #carlsbadvillage #carlsbad #best #sunset #thursday #socialdistancing
Unprocessed:
Miss Chelsea & Miss Kayra teach us a new game your whole family can play at home!
And check back soon for basketball videos from Mr. Chase!
Click the link in our bio for the full video with complete rules and to see who won! @theellenshow
#tictactoe #throwbackthursday #race #gym #basketball #exercise #run #running #boysandgirlsclub #beach #coronavirus #inspiration #motivation #tbt #funny #family #videooftheday #vidoftheday #video #bestoftheday #sandiego #carlsbadvillage #carlsbad #best #sunset #thursday #socialdistancing
Prediction: Joy
Processed:
I am thoroughly enjoying the time I get to spend at home catching up on my reading! sparkling heart open book sparkling heart #reading #books #book #selfisolation #isolation #socialdistancing #covid #covid19 #coronavirus #corona #virus #peaceful #peace
Unprocessed:
I am thoroughly enjoying the time I get to spend at home catching up on my reading! 💖📖💖 #reading #books #book #selfisolation #isolation #socialdistancing #covid #covid19 #coronavirus #corona #virus #peaceful #peace
Prediction: Anger
Processed:
The Prisoner. La prigionia è solo una questione mentale #coronavirus #photography #photo #photooftheday #photographer #photographylovers #photos #moda #photograph #photographers #fashionblogger #fashion #istayathome #fashionstyle #Dress #Fashion #design #Detail #Trousers #designinspiration #designer #nature #naturephotography #naturelovers #nature_perfection #man #iorestoacasa #freedom #istayhome #green #yoga
Unprocessed:
The Prisoner. La prigionia è solo una questione mentale #coronavirus #photography #photo #photooftheday #photographer #photographylovers #photos #moda #photograph #photographers #fashionblogger #fashion #istayathome #fashionstyle #Dress #Fashion #design #Detail #Trousers #designinspiration #designer #nature #naturephotography #naturelovers #nature_perfection #man #iorestoacasa #freedom #istayhome #green #yoga
###Markdown
Naive Bayes
###Code
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
vectorizer = CountVectorizer()
vectorizer.fit(X_train)
train_vec = vectorizer.transform(X_train)
test_vec = vectorizer.transform(X_test)
nb = MultinomialNB()
nb.fit(train_vec, Y_train)
naive_preds = nb.predict(test_vec)
print('Accuracy: {:.3}%'.format(metrics.accuracy_score(Y_test, naive_preds)*100))
print(classification_report(y_true=Y_test, y_pred=naive_preds))
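# Peek at the highest-probability tokens per class via the fitted class-conditional
# log-probabilities (a sketch; on older scikit-learn use get_feature_names()).
feature_names = np.array(vectorizer.get_feature_names_out())
for class_idx, class_label in enumerate(nb.classes_):
    top = np.argsort(nb.feature_log_prob_[class_idx])[-10:]
    print(class_label, feature_names[top])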
ins_vec = vectorizer.transform(X_ins)  # transform the Instagram posts, not the test tweets
naive_preds = nb.predict(ins_vec)
for i in range(20):
print('Prediction: {} \nProcessed:\n{}\nUnprocessed:\n{}\n\n'.format(emotions[naive_preds[i]], X_ins[i],np.array(ins_df['Contents'])[i]))
np.savetxt('instagram_predictions-naive.txt', naive_preds)
###Output
_____no_output_____
###Markdown
Prediction conversion
###Code
emotions = ['Joy', 'Fear', 'Sadness', 'Anger']
ins_df = pd.read_csv('data/instagram_data.csv')
ins_df = ins_df[ins_df['Contents'].notna()]
preds = np.loadtxt('instagram_predictions-with-hash.txt')
string_preds = []
for pred in preds:
string_preds.append(emotions[int(pred)])
idxs = np.expand_dims(np.array(ins_df.index), -1)
string_preds = np.expand_dims(np.array(string_preds), -1)
final_preds = np.concatenate([idxs, string_preds], -1)
# print(final_preds.dtype)
np.savetxt('final_instagram_predictions.txt', final_preds, fmt='%s')
###Output
<U21
###Markdown
ULMFiT + Siamese Network for Sentence Vectors Part Three: Classifying
The second notebook created a new language model from the SNLI dataset. This notebook will adapt that model to predicting the SNLI category for sentence pairs. The model will be used as a sentence encoder for a Siamese Network that builds sentence vectors that are fed into a classifier network.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from ipyexperiments import *
import fastai
from fastai.text import *
import html
import json
import re
import pickle
import random
import time
import math
import sys
from collections import Counter, defaultdict
from functools import partial
from pathlib import Path
import pandas as pd
import numpy as np
import sklearn
from sklearn import model_selection
import torch
import torch.nn as nn
import torch.utils
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
import torch.nn.functional as F
from torch.utils.data import dataset, dataloader
import data
import joblib
token_files = './data/PAN14/tokens/'
model_files = './data/PAN14/models/'
TRAINDATAPATH = "./data/PAN14/pan14_train_english-essays/"
TESTDATAPATH = "./data/PAN14/pan14_test02_english-essays/"
FNAMES = ['known01','known02','known03','known04','known05', 'unknown']
KCOLS=['known01','known02','known03','known04','known05']
LABELCOL="answer"
UNKOWN="unknown"
doc_pairs_train=joblib.load(f'{model_files}traindf-2.pkl')
doc_pairs_val=joblib.load(f'{model_files}valdf-2.pkl')
doc_pairs_test=joblib.load(f'{model_files}testdf-2.pkl')
data_lm = TextLMDataBunch.load(model_files)
data_clf1 = TextClasDataBunch.from_df(model_files, doc_pairs_train, doc_pairs_val, doc_pairs_test,
vocab=data_lm.train_ds.vocab, bs=64,
text_cols=['known', 'unknown'], label_cols=['label'], mark_fields=True)
data_clf1.save()
data_clf1 = TextClasDataBunch.load(model_files, bs=64)
learn1 = text_classifier_learner(data_clf1, drop_mult=0.5)
learn1.load_encoder('healthy_enc')
learn1.lr_find()
learn1.recorder.plot()
exp1=IPyExperimentsPytorch()
learn1.fit_one_cycle(1, slice(1e-03, 1e-02), wd=1e-05)
learn1.fit_one_cycle(1, slice(1e-03/10, 1e-02), wd=1e-04)
learn1.fit_one_cycle(2, slice(1e-04/100, 1e-02), wd=1e-03)
learn1.save('init_av_clf1')
data_lm = TextLMDataBunch.load(model_files)
data_clf2 = TextClasDataBunch.from_df(model_files, doc_pairs_train, doc_pairs_val, doc_pairs_test,
vocab=data_lm.train_ds.vocab, bs=64,
text_cols=['known', 'unknown'],
label_cols=['label'],
                                       mark_fields=True,
                                       qrnn=True)
data_clf2.save()
data_clf2 = TextClasDataBunch.load(model_files, bs=64)
learn2 = text_classifier_learner(data_clf2, drop_mult=0.5)
learn2.load_encoder('healthy_enc')
learn2.lr_find()
learn2.recorder.plot()
exp2=IPyExperimentsPytorch()
learn2.fit_one_cycle(1, slice(1e-03, 1e-02), wd=1e-05)
learn2.fit_one_cycle(1, slice(1e-03/10, 1e-02), wd=1e-04)
learn2.fit_one_cycle(2, slice(1e-04/100, 1e-02), wd=1e-03)
learn2.save('data_clf2')
preds1, y1 = np.array(learn1.get_preds())
preds2, y2 = np.array(learn2.get_preds())
preds1b, y1b = np.load('data_clf1b')
preds2b, y2b = np.load('data_clf2b')
# all y are == so
y = y1
preds = np.hstack((preds1, preds2, preds1b, preds2b))
from sklearn.linear_model import LogisticRegression
# NB-LR (naive-Bayes-weighted logistic regression). `x`, `train`, `test`, and
# `test_x` are assumed to be defined in earlier cells of the original notebook.
def get_mdl(y):
    def proba(y_i, y):
        p = x[y == y_i].sum(0)
        return (p + 1) / ((y == y_i).sum() + 1)
    y = y.values
    r = np.log(proba(1, y) / proba(0, y))
    m = LogisticRegression(C=4, dual=True)
    x_nb = x.multiply(r)
    return m.fit(x_nb, y), r
probas = np.zeros((len(test),1))
lbls = np.zeros((len(test),1))
m,r = get_mdl(train)
probas[:,0] = m.predict_proba(test_x.multiply(r))[:,0]
mu = np.average(probas)  # we can use the average here, go with 0.5, or just report
# probabilities as recommended by PAN, whichever works best for the specific
# validation set
for i, prb in enumerate(probas):
if prb > mu:
lbls[i] = 1
else:
lbls[i] = 0
###Output
_____no_output_____
###Markdown
Before you run this notebook:
1. Please make sure that you have the "HCV-Egy-Data.csv" file in the same folder as this notebook. If not, please modify the first code cell (right below) accordingly.
2. If you are running on Google Colab, you may need to mount your Google Drive as storage. Follow the instructions given by Google.
3. If you are running in a local environment, please comment out the first block inside the very first code cell.
4. If you already have the fabricated data file, please modify the corresponding parameter in the first code cell.
5. The source of the original data is: [UC Irvine Machine Learning Repository: Hepatitis C Virus (HCV) for Egyptian patients Data Set](https://archive.ics.uci.edu/ml/datasets/Hepatitis+C+Virus+%28HCV%29+for+Egyptian+patients)

When you run this code:
1. You may find that the number of entries is very limited, so the results of an array of classifiers are only marginally better than a random guess. My guess is that this ineffectiveness is due to the limited amount of data and my lack of domain knowledge to implement a rule-based classifier or something better.
2. Despite the low accuracy, I listed some of the most important features, and you could select them for training. I recommend that you run the whole notebook first to get the main logical flow of this document, which will make it easier to modify to your needs.
3. The main purpose of this portion is to help with the data fabrication. As soon as you have the fabricated data, you can run the classifier on the fabricated data to check whether the accuracy score is close to that on the true dataset.
4. There is a paper implementing a complex classifier for this specific problem: [A novel model based on non invasive methods for prediction of liver fibrosis](https://ieeexplore.ieee.org/document/8289800). The discretization of the dataset below is based on the knowledge provided in this paper. However, I was not able to achieve the accuracy reported there; the knowledge required to duplicate their result is beyond the scope of this course (CMPE 256 at SJSU).
###Code
## ------ Comment this block if run on local machine -----###
#from google.colab import drive
#drive.mount('/content/drive', force_remount=True)
## ---------------- Block ends --------------#####
#------ Change below parameters if necessary --------#
#path = "/content/drive/My Drive/Colab Notebooks/" # change path here
path = 'C:/Users/swlee/Desktop/syn_models/'
#source_data_file_name = "HCV-Egy-Data.csv"
source_data_file_name = 'hcv.csv'
## ---------------- Block ends --------------#####
csv_file_path = path + source_data_file_name
#import packages
## Utility tools
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
from time import time
import itertools
## Models
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn import linear_model
from sklearn import svm
from sklearn.pipeline import Pipeline
## Analysis tools
from sklearn.metrics import confusion_matrix,classification_report, accuracy_score, make_scorer
from sklearn import model_selection
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2, f_classif
## Warning settings
import warnings
warnings.simplefilter("ignore")
def plot_test_result(clfs, test_score, test_scores):
names = []
for i in range(0, len(clfs)):
clf = clfs[i]
clf_name = clf.__class__.__name__
names.append(clf_name)
y_pos = np.arange(len(names))
plt.barh(y_pos, test_scores, align='center')
plt.yticks(y_pos, names)
plt.xlim(0.0, 0.99)
plt.xlabel('Score')
plt.title('Test Data Accuracy Scores')
plt.show()
#Splitting the data into Train and Test data sets
def train_models():
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size = 0.2, stratify = labels, random_state=42)
## Initializing all models and parameters
#Initializing classifiers
RF_clf = RandomForestClassifier(n_estimators = 30, random_state = 1, class_weight = 'balanced')
AB_clf = AdaBoostClassifier(n_estimators = 30, random_state = 2)
MLP_clf = MLPClassifier(solver='lbfgs', alpha=1e-5, hidden_layer_sizes=(5, 2))
KNN_clf = KNeighborsClassifier()
LOG_clf = linear_model.LogisticRegression(multi_class = "ovr", solver = "sag", class_weight = 'balanced')
GNB_clf = GaussianNB()
SVM_clf = svm.SVC(gamma='scale', probability=True)
clfs = [RF_clf, AB_clf, MLP_clf, KNN_clf, LOG_clf, GNB_clf, SVM_clf]
#Specficying scorer and parameters for grid search
feature_len = features.shape[1]
print("Number of selected features:", feature_len)
scorer = make_scorer(accuracy_score)
pca_n_components = (list)(range(4, feature_len, 3))
pca_n_components.append(feature_len)
parameters_RF = {'clf__max_features': ['auto', 'log2'],
'pca__n_components': pca_n_components}
parameters_AB = {'clf__learning_rate': np.linspace(0.5, 2, num=10),
'pca__n_components': pca_n_components}
parameters_MLP = {'pca__n_components': pca_n_components}
parameters_KNN = {'clf__n_neighbors': [10, 20, 30, 40, 50, 60],
'pca__n_components': pca_n_components}
    parameters_LOG = {'clf__C': np.logspace(1, 3, 5),  # exponents 1..3, i.e. C from 10 to 1000; logspace(1, 1000, 5) would overflow to inf
                      'pca__n_components': pca_n_components}
parameters_GNB = {'pca__n_components': pca_n_components}
parameters_SVM = {'pca__n_components': pca_n_components}
parameters = {clfs[0]: parameters_RF,
clfs[1]: parameters_AB,
clfs[2]: parameters_MLP,
clfs[3]: parameters_KNN,
clfs[4]: parameters_LOG,
clfs[5]: parameters_GNB,
clfs[6]: parameters_SVM}
#Initializing PCA
pca = PCA()
#Creating cross validation data splits
cv_sets = model_selection.StratifiedShuffleSplit(n_splits = 3, test_size = 0.25, random_state=42)
#Initialize result storage
clfs_return = []
dm_reduce_return = []
train_scores = []
test_scores = []
#Loop through classifiers
for clf in clfs:
estimators = [('pca', pca), ('clf', clf)]
pipeline = Pipeline(estimators)
print("Training a {} with {}...".format(clf.__class__.__name__, pca.__class__.__name__))
start = time()
#Grid search over pipeline and return best classifier
grid = model_selection.GridSearchCV(pipeline, param_grid = parameters[clf], scoring = scorer, cv = cv_sets, n_jobs = -1)
grid.fit(X_train, y_train)
best_pipe = grid.best_estimator_
#clf = CalibratedClassifierCV(best_pipe.named_steps['clf'], cv= 'prefit', method='isotonic')
clf.fit(best_pipe.named_steps['pca'].transform(X_train), y_train)
dm_reduce = best_pipe.named_steps['pca']
end = time()
print("Trained {} in {:.1f} minutes".format(clf.__class__.__name__, (end - start)/60))
#Make predictions of train data
y_train_pred = clf.predict(best_pipe.named_steps['pca'].transform(X_train))
train_score = accuracy_score(y_train.values, y_train_pred)
print("Score of {} for train set: {:.4f}.".format(clf.__class__.__name__, train_score))
#Make predictions of test data
y_test_pred = clf.predict(best_pipe.named_steps['pca'].transform(X_test))
test_score = accuracy_score(y_test.values, y_test_pred)
print("Score of {} for test set: {:.4f}.".format(clf.__class__.__name__, test_score))
#Append the result to storage
clfs_return.append(clf)
dm_reduce_return.append(dm_reduce)
train_scores.append(train_score)
test_scores.append(test_score)
plot_test_result(clfs, test_score, test_scores)
#Defining the best classifier
best_clf = clfs_return[np.argmax(test_scores)]
best_dm_reduce = dm_reduce_return[np.argmax(test_scores)]
print("The best classifier is {}".format(best_clf.__class__.__name__))
return [best_clf,best_dm_reduce]
print('------Read In True Data----------')
headers = ['Age', 'Gender', 'Bmi', 'Fever', 'Nausea/Vomiting', 'Headache',\
'Diarrhea', 'Fatigue and Bone-ache', 'Jaundice', 'Epigastric pain', 'WBC',\
'RBC', 'HGB', 'Plat', 'AST 1', 'ALT 1',\
'ALT 4', 'ALT 12', 'ALT 24', 'ALT 36', 'ALT 48', 'ALT after 24w',\
'RNA Base', 'RNA 4', 'RNA 12', 'RNA EOT', 'RNA EF',\
'Baseline histological grading', 'Baseline histological staging']
csv_data = pd.read_csv(csv_file_path, skiprows=1, names=headers)
print(csv_data.head())
print('------OK!----------\n\n\n')
print('------Define features and label(s)----------')
feature_cols = ['Age', 'Gender', 'Bmi', 'Fever', 'Nausea/Vomiting', 'Headache',\
'Diarrhea', 'Fatigue and Bone-ache', 'Jaundice', 'Epigastric pain', 'WBC',\
'RBC', 'HGB', 'Plat', 'AST 1', 'ALT 1',\
'ALT 4', 'ALT 12', 'ALT 24', 'ALT 36', 'ALT 48', 'ALT after 24w',\
'RNA Base', 'RNA 4', 'RNA 12', 'RNA EOT', 'RNA EF',\
'Baseline histological grading']
features = csv_data[feature_cols]
label_col = ['Baseline histological staging']
labels = csv_data[label_col]
print('------OK!----------\n\n\n')
num_of_features = features.shape[1]
print("number of features:", num_of_features)
print("\n\n")
print("Unique values of label are:", pd.unique(labels['Baseline histological staging']))
print("\n")
print("number of unique values in labels is" , len(pd.unique(labels['Baseline histological staging'])))
print("\n\n")
print('------Examine label counts (balanced or not?)---------')
print("Count of Stage 1: ", csv_data[csv_data["Baseline histological staging"] == 1].shape[0])
print("Count of Stage 2: ", csv_data[csv_data["Baseline histological staging"] == 2].shape[0])
print("Count of Stage 3: ", csv_data[csv_data["Baseline histological staging"] == 3].shape[0])
print("Count of Stage 4: ", csv_data[csv_data["Baseline histological staging"] == 4].shape[0])
print('------OK!----------\n\n\n')
print('------Inspect 10 most important feautures and print out the top 4----------')
X = features
y = labels
bestfeatures = SelectKBest(score_func=f_classif, k=10)
fit = bestfeatures.fit(X,y)
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X.columns)
#concat two dataframes for better visualization
featureScores = pd.concat([dfcolumns,dfscores],axis=1)
featureScores.columns = ['Specs','Score'] #naming the dataframe columns
result_df = featureScores.nlargest(4,'Score')
print(result_df)
print('------OK!----------\n\n\n')
#Manually discretize data
features['Age'] = pd.cut(features['Age'], [0, 32, 37, 42, 47, 52, 57, 62, 100], True, labels=[0, 1, 2, 3, 4, 5, 6, 7])
features['Bmi'] = pd.cut(features['Bmi'], [0, 18.5, 25, 30, 35, 40], False, labels=[0, 1, 2, 3, 4])
features['WBC'] = pd.cut(features['WBC'], [0, 4000, 11000, 13000], True, labels=[0, 1, 2])
features['RBC'] = pd.cut(features['RBC'], [0, 3000000, 5000000, 8000000], False, labels=[0, 1, 2])
features['Plat'] = pd.cut(features['Plat'], [0, 100000, 255000, 300000], False, labels=[0, 1, 2])
features['AST 1'] = pd.cut(features['AST 1'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT 1'] = pd.cut(features['ALT 1'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT 4'] = pd.cut(features['ALT 4'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT 12'] = pd.cut(features['ALT 12'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT 24'] = pd.cut(features['ALT 24'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT 36'] = pd.cut(features['ALT 36'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT 48'] = pd.cut(features['ALT 48'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['ALT after 24w'] = pd.cut(features['ALT after 24w'], [0, 20, 40, 200], False, labels=[0, 1, 2])
features['RNA Base'] = pd.cut(features['RNA Base'], [0, 5, 9000000], True, labels=[0, 1])
features['RNA 4'] = pd.cut(features['RNA 4'], [0, 5, 9000000], True, labels=[0, 1])
features['RNA 12'] = pd.cut(features['RNA 12'], [0, 5, 9000000], True, labels=[0, 1])
features['RNA EOT'] = pd.cut(features['RNA EOT'], [0, 5, 9000000], True, labels=[0, 1])
features['RNA EF'] = pd.cut(features['RNA EF'], [0, 5, 9000000], True, labels=[0, 1])
features.head()
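# pd.cut maps values that fall outside the supplied edges to NaN, so verify that
# the manual bins above covered every observation (a quick check).
print(features.isna().sum().sort_values(ascending=False).head(10))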
## !!!! Modify the features you want to select !!!! ###
selected_features_cols = []
if len(selected_features_cols) >= 1:
features = csv_data[selected_features_cols]
res = train_models()
## Accuracy score of the best classifier on all the true data.
best_clf = res[0]
best_dm_reduce = res[1]
all_pred_result = best_clf.predict(best_dm_reduce.transform(features))
accu_score = accuracy_score(labels, all_pred_result)
print("The accuracy score of the best classifier on all true data is", accu_score)
print("\n\n")
###Output
The accuracy score of the best classifier on all true data is 0.303971119133574
###Markdown
**Continue to run if you have set up the fabricated file path name at the very top!**
###Code
def discretize(fab_features):
fab_features['Age'] = pd.cut(fab_features['Age'], [0, 32, 37, 42, 47, 52, 57, 62, 100], True, labels=[0, 1, 2, 3, 4, 5, 6, 7])
fab_features['Bmi'] = pd.cut(fab_features['Bmi'], [0, 18.5, 25, 30, 35, 40], False, labels=[0, 1, 2, 3, 4])
fab_features['WBC'] = pd.cut(fab_features['WBC'], [0, 4000, 11000, 13000], True, labels=[0, 1, 2])
fab_features['RBC'] = pd.cut(fab_features['RBC'], [0, 3000000, 5000000, 8000000], False, labels=[0, 1, 2])
fab_features['Plat'] = pd.cut(fab_features['Plat'], [0, 100000, 255000, 300000], False, labels=[0, 1, 2])
fab_features['AST 1'] = pd.cut(fab_features['AST 1'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT 1'] = pd.cut(fab_features['ALT 1'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT 4'] = pd.cut(fab_features['ALT 4'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT 12'] = pd.cut(fab_features['ALT 12'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT 24'] = pd.cut(fab_features['ALT 24'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT 36'] = pd.cut(fab_features['ALT 36'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT 48'] = pd.cut(fab_features['ALT 48'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['ALT after 24w'] = pd.cut(fab_features['ALT after 24w'], [0, 20, 40, 200], False, labels=[0, 1, 2])
fab_features['RNA Base'] = pd.cut(fab_features['RNA Base'], [0, 5, 9000000], True, labels=[0, 1])
fab_features['RNA 4'] = pd.cut(fab_features['RNA 4'], [0, 5, 9000000], True, labels=[0, 1])
fab_features['RNA 12'] = pd.cut(fab_features['RNA 12'], [0, 5, 9000000], True, labels=[0, 1])
fab_features['RNA EOT'] = pd.cut(fab_features['RNA EOT'], [0, 5, 9000000], True, labels=[0, 1])
fab_features['RNA EF'] = pd.cut(fab_features['RNA EF'], [0, 5, 9000000], True, labels=[0, 1])
return fab_features
###Output
_____no_output_____
###Markdown
Model 1
###Code
fabricated_data_file_name = 'synth1.csv'
fabricated_file_path = path + fabricated_data_file_name
if len(fabricated_data_file_name) > 0:
fab_data = pd.read_csv(fabricated_file_path, skiprows=1, names=headers)
fab_features = fab_data[feature_cols]
fab_labels = fab_data[label_col]
fab_features[fab_features < 0] = 5
fab_features = discretize(fab_features)
fab_pred = best_clf.predict(best_dm_reduce.transform(fab_features))
fab_accu_score = accuracy_score(fab_labels, fab_pred)
print("The accuracy score on original data is:", accu_score)
print("The accuracy score on fabricated data is:", fab_accu_score)
else:
print("Fabricated data file undefined. Please set up the path and file name at the very top of this file.\n\n")
###Output
The accuracy score on original data is: 0.303971119133574
The accuracy score on fabricated data is: 0.2571428571428571
###Markdown
Model 2
###Code
fabricated_data_file_name = 'synth2.csv'
fabricated_file_path = path + fabricated_data_file_name
if len(fabricated_data_file_name) > 0:
fab_data = pd.read_csv(fabricated_file_path, skiprows=1, names=headers)
fab_features = fab_data[feature_cols]
fab_labels = fab_data[label_col]
fab_features[fab_features < 0] = 5
fab_features = discretize(fab_features)
fab_pred = best_clf.predict(best_dm_reduce.transform(fab_features))
fab_accu_score = accuracy_score(fab_labels, fab_pred)
print("The accuracy score on original data is:", accu_score)
print("The accuracy score on fabricated data is:", fab_accu_score)
else:
print("Fabricated data file undefined. Please set up the path and file name at the very top of this file.\n\n")
###Output
The accuracy score on original data is: 0.303971119133574
The accuracy score on fabricated data is: 0.30357142857142855
###Markdown
Model 3
###Code
fabricated_data_file_name = 'synth3.csv'
fabricated_file_path = path + fabricated_data_file_name
if len(fabricated_data_file_name) > 0:
fab_data = pd.read_csv(fabricated_file_path, skiprows=1, names=headers)
fab_features = fab_data[feature_cols]
fab_labels = fab_data[label_col]
fab_pred = best_clf.predict(best_dm_reduce.transform(fab_features))
fab_accu_score = accuracy_score(fab_labels, fab_pred)
print("The accuracy score on original data is:", accu_score)t
print("The accuracy score on fabricated data is:", fab_accu_score)
else:
print("Fabricated data file undefined. Please set up the path and file name at the very top of this file.\n\n")
###Output
The accuracy score on original data is: 0.303971119133574
The accuracy score on fabricated data is: 0.28
###Markdown
Preparing Data
###Code
X = []
Y = []
Y_onehot = []
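# iterate the window generator one sample at a time and collect each window, its one-hot label, and its class index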
for x, y in windows.to_keras_sequence(1):
x = x.squeeze(axis=0)
y = y.squeeze(axis=0)
Y_onehot.append(y)
X.append(x)
Y.append(y.argmax())
X = numpy.asarray(X)
Y = numpy.asarray(Y)
Y_onehot = numpy.asarray(Y_onehot)
boundary = round(len(X)*(70/100))
X_train, X_test = X[:boundary], X[boundary:]
Y_train, Y_test = Y[:boundary], Y[boundary:]
Y_train_onehot, Y_test_onehot = Y_onehot[:boundary], Y_onehot[boundary:]
s = X_train.shape
X_train_flat, X_test_flat = X_train.reshape(-1, s[-1] * s[-2]), X_test.reshape(-1, s[-1] * s[-2])
print(f"X_train: {X_train.shape}, X_test: {X_test.shape}")
print(f"Y_train: {Y_train.shape}, Y_test: {Y_test.shape}")
print(f"Y_train_onehot: {Y_train_onehot.shape}, Y_test_onehot: {Y_test_onehot.shape}")
print(f"X_train_flat: {X_train_flat.shape}, X_test_flat: {X_test_flat.shape}")
###Output
X_train: (9514, 100, 6), X_test: (4078, 100, 6)
Y_train: (9514,), Y_test: (4078,)
Y_train_onehot: (9514, 6), Y_test_onehot: (4078, 6)
X_train_flat: (9514, 600), X_test_flat: (4078, 600)
###Markdown
KNN
###Code
from sklearn.neighbors import KNeighborsClassifier
knn_1 = KNeighborsClassifier(n_neighbors=1, n_jobs=-1)
knn_3 = KNeighborsClassifier(n_neighbors=3, n_jobs=-1)
knn_5 = KNeighborsClassifier(n_neighbors=5, n_jobs=-1)
knn_21 = KNeighborsClassifier(n_neighbors=21, n_jobs=-1)
knn_99 = KNeighborsClassifier(n_neighbors=99, n_jobs=-1)
print("Started 1")
knn_1.fit(X_train_flat, Y_train)
print("Started 3")
knn_3.fit(X_train_flat, Y_train)
print("Started 5")
knn_5.fit(X_train_flat, Y_train)
print("Started 21")
knn_21.fit(X_train_flat, Y_train)
print("Started 99")
knn_99.fit(X_train_flat, Y_train)
print("Done")
print(f"knn_1: {knn_1.score(X_test_flat, Y_test):.1%}")
print(f"knn_3: {knn_3.score(X_test_flat, Y_test):.1%}")
print(f"knn_5: {knn_5.score(X_test_flat, Y_test):.1%}")
print(f"knn_21: {knn_21.score(X_test_flat, Y_test):.1%}")
print(f"knn_99: {knn_99.score(X_test_flat, Y_test):.1%}%")
y_pred = knn_1.predict(X_test_flat)
conf = confusion_matrix(Y_test, y_pred, labels=[i for i in range(windows.num_classes)], normalize="pred")
print(conf)
y_score = knn_1.predict_proba(X_test_flat)
from matplotlib import pyplot as plt
from sklearn.metrics import plot_confusion_matrix
disp = plot_confusion_matrix(knn_1, X_test_flat, Y_test,
display_labels=sorted(dataset.ACTIVITIES.values()),
cmap=plt.cm.Blues,
normalize="pred")
plt.show()
# confusion(y_pred, Y_test, windows.num_classes)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test, y_score, multi_class="ovr")
print(f"Roc Auc Score: {auc::.1%}")
###Output
Roc Auc Score: 0.92
###Markdown
Radius Nearest Neighbors
###Code
from sklearn.neighbors import RadiusNeighborsClassifier
rnn_05 = RadiusNeighborsClassifier(radius=0.01, n_jobs=-1, outlier_label="most_frequent")
rnn_1 = RadiusNeighborsClassifier(radius=0.05, n_jobs=-1, outlier_label="most_frequent")
rnn_2 = RadiusNeighborsClassifier(radius=0.1, n_jobs=-1, outlier_label="most_frequent")
rnn_3 = RadiusNeighborsClassifier(radius=0.5, n_jobs=-1, outlier_label="most_frequent")
print("Started 05")
rnn_05.fit(X_train_flat, Y_train)
print("Started 1")
rnn_1.fit(X_train_flat, Y_train)
print("Started 2")
rnn_2.fit(X_train_flat, Y_train)
print("Started 3")
rnn_3.fit(X_train_flat, Y_train)
print("Done")
print(f"rnn_05: {rnn_05.score(X_test_flat, Y_test)*100:2.1f}%")
print(f"rnn_1: {rnn_1.score(X_test_flat, Y_test)*100:2.1f}%")
print(f"rnn_2: {rnn_2.score(X_test_flat, Y_test)*100:2.1f}%")
print(f"rnn_3: {rnn_3.score(X_test_flat, Y_test)*100:2.1f}%")
###Output
rnn_05: 39.9%
rnn_1: 51.3%
rnn_2: 48.8%
rnn_3: 49.6%
###Markdown
SVM
###Code
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
svc = SVC()
parameters = {'kernel':('linear', 'rbf'), 'C':range(1, 10)}
best_svc = GridSearchCV(svc, parameters, n_jobs=-1, verbose=3)
best_svc.fit(X_train_flat, Y_train)
from matplotlib import pyplot as plt
from sklearn.metrics import plot_confusion_matrix
disp = plot_confusion_matrix(best_svc, X_test_flat, Y_test,
display_labels=sorted(dataset.ACTIVITIES.values()),
cmap=plt.cm.Blues,
normalize="pred")
plt.show()
# confusion(y_pred, Y_test, windows.num_classes)
y_pred = best_svc.predict(X_test_flat)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
print(f"best_svc Accuracy: {best_svc.score(X_test_flat, Y_test):.1%}")
###Output
best_svc Accuracy: 83.3%
###Markdown
Categorical Naive Bayes
###Code
from sklearn.naive_bayes import CategoricalNB
cnb_0 = CategoricalNB(alpha=0.0)
cnb_01 = CategoricalNB(alpha=0.1)
cnb_05 = CategoricalNB(alpha=0.5)
cnb_1 = CategoricalNB(alpha=1.)
print("Started cnb_0")
cnb_0.fit(X_train_flat, Y_train)
print("Started cnb_01")
cnb_01.fit(X_train_flat, Y_train)
print("Started cnb_05")
cnb_05.fit(X_train_flat, Y_train)
print("Started cnb_1")
cnb_1.fit(X_train_flat, Y_train)
print("Done")
print(f"cnb_0: {cnb_0.score(X_test_flat, Y_test)*100:2.1f}%")
print(f"cnb_01: {cnb_01.score(X_test_flat, Y_test)*100:2.2f}%")
print(f"cnb_05: {cnb_05.score(X_test_flat, Y_test)*100:2.2f}%")
print(f"cnb_1: {cnb_1.score(X_test_flat, Y_test)*100:2.2f}%")
###Output
cnb_0: 23.3%
cnb_01: 23.27%
cnb_05: 23.27%
cnb_1: 23.27%
###Markdown
Decision Tree
###Code
from sklearn.tree import DecisionTreeClassifier
tree_para = {'criterion':['gini','entropy'], "max_features":["auto", None], 'max_depth':[4,5,6,7,8,9,10,11,12,15,20,30,40,50,70,90,120,150]}
tree = GridSearchCV(DecisionTreeClassifier(), tree_para, verbose=100, n_jobs=-1)
tree.fit(X_train_flat, Y_train)
from matplotlib import pyplot as plt
from sklearn.metrics import plot_confusion_matrix
disp = plot_confusion_matrix(tree, X_test_flat, Y_test,
display_labels=sorted(dataset.ACTIVITIES.values()),
cmap=plt.cm.Blues,
normalize="pred")
plt.show()
print(f"Decision Tree Accuracy: {tree.score(X_test_flat, Y_test):.1%}%")
y_pred = tree.predict(X_test_flat)
y_score = tree.predict_proba(X_test_flat)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test, y_score, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
###Output
Roc Auc Score: 0.82
###Markdown
MLP
###Code
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(random_state=1, max_iter=300)
mlp.fit(X_train_flat, Y_train)
from matplotlib import pyplot as plt
from sklearn.metrics import plot_confusion_matrix
disp = plot_confusion_matrix(mlp, X_test_flat, Y_test,
display_labels=sorted(dataset.ACTIVITIES.values()),
cmap=plt.cm.Blues,
normalize="pred")
plt.show()
print(f"MLP accuarcy: {mlp.score(X_test_flat, Y_test):.1%}")
y_pred = mlp.predict(X_test_flat)
y_score = mlp.predict_proba(X_test_flat)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test, y_score, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
###Output
Roc Auc Score: 98.0%
###Markdown
LSTM LSTM 1
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax
lstm_1 = Sequential()
lstm_1.add(LSTM(units=100, input_shape=(X_train.shape[1:])))
lstm_1.add(Dropout(0.2))
lstm_1.add(Dense(100, activation="relu"))
lstm_1.add(Dropout(0.2))
lstm_1.add(Dense(windows.num_classes, activation="softmax"))
lstm_1.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=["categorical_accuracy"]
)
lstm_1.fit(X_train, Y_train_onehot, epochs=30, batch_size=32, verbose=2)
f"LSTM_1 Accuracy: {lstm_1.evaluate(X_test, Y_test_onehot)[1]:.1%}"
###Output
4078/4078 [==============================] - 2s 552us/step
###Markdown
LSTM 2 (Success)
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax
lstm_2 = Sequential()
lstm_2.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1:])))
lstm_2.add(Dropout(0.1))
lstm_2.add(LSTM(units=50))
lstm_2.add(Dropout(0.1))
lstm_2.add(Dense(100, activation="relu"))
lstm_2.add(Dropout(0.2))
lstm_2.add(Dense(windows.num_classes, activation="softmax"))
lstm_2.compile(
loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=["categorical_accuracy"]
)
lstm_2.fit(X_train, Y_train_onehot, epochs=30, batch_size=32, verbose=2)
f"LSTM_2 Accuracy: {lstm_2.evaluate(X_test, Y_test_onehot)[1]:.1%}"
y_pred_onehot = lstm_2.predict(X_test)
y_pred = y_pred_onehot.argmax(axis=1)
confusion(y_pred, Y_test, sorted(dataset.ACTIVITIES.values()), "LSTM")
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test_onehot, y_pred_onehot, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
###Output
Roc Auc Score: 99.1%
###Markdown
LSTM 3
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax
lstm_3 = Sequential()
lstm_3.add(LSTM(units=70, return_sequences=True, input_shape=(X_train.shape[1:])))
lstm_3.add(Dropout(0.2))
lstm_3.add(LSTM(units=70))
lstm_3.add(Dropout(0.2))
lstm_3.add(Dense(100, activation="relu"))
lstm_3.add(Dropout(0.2))
lstm_3.add(Dense(windows.num_classes, activation="softmax"))
lstm_3.compile(
loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=["categorical_accuracy"]
)
lstm_3.fit(X_train, Y_train_onehot, epochs=40, batch_size=32, verbose=2)
f"LSTM_3 Accuracy: {lstm_3.evaluate(X_test, Y_test_onehot)[1]:.1%}"
###Output
4078/4078 [==============================] - 3s 724us/step
###Markdown
1D CNN CNN 1
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax, Conv1D, MaxPooling1D, Flatten
cnn_1 = Sequential()
cnn_1.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(X_train.shape[1:])))
cnn_1.add(MaxPooling1D(pool_size=2))
cnn_1.add(Flatten())
cnn_1.add(Dropout(0.2))
cnn_1.add(Dense(100, activation="relu"))
cnn_1.add(Dropout(0.2))
cnn_1.add(Dense(windows.num_classes, activation="softmax"))
cnn_1.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=["categorical_accuracy"]
)
cnn_1.fit(X_train, Y_train_onehot, epochs=30, batch_size=32, verbose=1)
print(f"CNN_1 Accuracy: {cnn_1.evaluate(X_test, Y_test_onehot)[1]:.1%}")
y_pred_onehot = cnn_1.predict(X_test)
y_pred = y_pred_onehot.argmax(axis=1)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test_onehot, y_pred_onehot, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
###Output
4078/4078 [==============================] - 0s 62us/step
CNN_1 Accuracy: 93.9%
Precision: 93.9%, Recall: 93.1%, Fscore: 93.4%
Roc Auc Score: 99.3%
###Markdown
CNN 2
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax, Conv1D, MaxPooling1D, Flatten
cnn_2 = Sequential()
cnn_2.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(X_train.shape[1:])))
cnn_2.add(Dropout(0.2))
cnn_2.add(MaxPooling1D(pool_size=2))
# cnn.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
# cnn.add(Dropout(0.2))
# cnn.add(MaxPooling1D(pool_size=2))
cnn_2.add(Flatten())
cnn_2.add(Dropout(0.3))
cnn_2.add(Dense(100, activation="relu"))
cnn_2.add(Dropout(0.2))
cnn_2.add(Dense(windows.num_classes, activation="softmax"))
cnn_2.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=["categorical_accuracy"]
)
cnn_2.fit(X_train, Y_train_onehot, epochs=30, batch_size=32, verbose=1)
print(f"CNN_2 Accuracy: {cnn_2.evaluate(X_test, Y_test_onehot)[1]:.1%}")
y_pred_onehot = cnn_2.predict(X_test)
y_pred = y_pred_onehot.argmax(axis=1)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test_onehot, y_pred_onehot, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
###Output
4078/4078 [==============================] - 0s 54us/step
CNN_2 Accuracy: 93.2%
Precision: 93.3%, Recall: 92.4%, Fscore: 92.8%
Roc Auc Score: 99.4%
###Markdown
CNN 3 (Slightly Better)
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax, Conv1D, MaxPooling1D, Flatten
cnn_3 = Sequential()
cnn_3.add(Conv1D(filters=42, kernel_size=3, activation='relu', input_shape=(X_train.shape[1:])))
cnn_3.add(Dropout(0.3))
cnn_3.add(MaxPooling1D(pool_size=2))
cnn_3.add(Flatten())
cnn_3.add(Dropout(0.4))
cnn_3.add(Dense(120, activation="relu"))
cnn_3.add(Dropout(0.3))
cnn_3.add(Dense(windows.num_classes, activation="softmax"))
cnn_3.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=["categorical_accuracy"]
)
cnn_3.fit(X_train, Y_train_onehot, epochs=30, batch_size=32, verbose=2)
print(f"CNN_3 Accuracy: {cnn_3.evaluate(X_test, Y_test_onehot)[1]:.1%}")
y_pred_onehot = cnn_3.predict(X_test)
y_pred = y_pred_onehot.argmax(axis=1)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_test, y_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_test_onehot, y_pred_onehot, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
confusion(y_pred, Y_test, sorted(dataset.ACTIVITIES.values()), "1D CNN")
###Output
_____no_output_____
###Markdown
1D CNN with Shorter and Longer Window Shorter
###Code
windows_short = datamanager.create_windows(set(Activity), 50, shuffle=True, seed=1, columns=['xaccel_norm', 'yaccel_norm', 'zaccel_norm', 'xrot_norm', 'yrot_norm', 'zrot_norm'])
X_s = []
Y_s = []
Y_s_onehot = []
for x, y in windows_short.to_keras_sequence(1):
x = x.squeeze(axis=0)
y = y.squeeze(axis=0)
Y_s_onehot.append(y)
X_s.append(x)
Y_s.append(y.argmax())
X_s = numpy.asarray(X_s)
Y_s = numpy.asarray(Y_s)
Y_s_onehot = numpy.asarray(Y_s_onehot)
boundary = round(len(X_s)*(70/100))
X_s_train, X_s_test = X_s[:boundary], X_s[boundary:]
Y_s_train, Y_s_test = Y_s[:boundary], Y_s[boundary:]
Y_s_train_onehot, Y_s_test_onehot = Y_s_onehot[:boundary], Y_s_onehot[boundary:]
s = X_s_train.shape
X_s_train_flat, X_s_test_flat = X_s_train.reshape(-1, s[-1] * s[-2]), X_s_test.reshape(-1, s[-1] * s[-2])
print(f"X_s_train: {X_s_train.shape}, X_s_test: {X_s_test.shape}")
print(f"Y_s_train: {Y_s_train.shape}, Y_s_test: {Y_s_test.shape}")
print(f"Y_s_train_onehot: {Y_s_train_onehot.shape}, Y_s_test_onehot: {Y_s_test_onehot.shape}")
print(f"X_s_train_flat: {X_s_train_flat.shape}, X_s_test_flat: {X_s_test_flat.shape}")
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax, Conv1D, MaxPooling1D, Flatten
cnn_s_3 = Sequential()
cnn_s_3.add(Conv1D(filters=42, kernel_size=3, activation='relu', input_shape=(X_s_train.shape[1:])))
cnn_s_3.add(Dropout(0.2))
cnn_s_3.add(MaxPooling1D(pool_size=2))
# cnn.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
# cnn.add(Dropout(0.2))
# cnn.add(MaxPooling1D(pool_size=2))
cnn_s_3.add(Flatten())
cnn_s_3.add(Dropout(0.3))
cnn_s_3.add(Dense(120, activation="relu"))
cnn_s_3.add(Dropout(0.2))
cnn_s_3.add(Dense(windows.num_classes, activation="softmax"))
cnn_s_3.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=["categorical_accuracy"]
)
cnn_s_3.fit(X_s_train, Y_s_train_onehot, epochs=30, batch_size=32, verbose=2)
print(f"CNN_S_3 Accuracy: {cnn_s_3.evaluate(X_s_test, Y_s_test_onehot)[1]:.1%}")
y_s_pred_onehot = cnn_s_3.predict(X_s_test)
y_s_pred = y_s_pred_onehot.argmax(axis=1)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_s_test, y_s_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_s_test_onehot, y_s_pred_onehot, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
confusion(y_s_pred, Y_s_test, sorted(dataset.ACTIVITIES.values()), "1D CNN Short")
###Output
_____no_output_____
###Markdown
Longer Window
###Code
windows_long = datamanager.create_windows(set(Activity), 250, shuffle=True, seed=1, columns=['xaccel_norm', 'yaccel_norm', 'zaccel_norm', 'xrot_norm', 'yrot_norm', 'zrot_norm'])
X_l = []
Y_l = []
Y_l_onehot = []
for x, y in windows_long.to_keras_sequence(1):
x = x.squeeze(axis=0)
y = y.squeeze(axis=0)
Y_l_onehot.append(y)
X_l.append(x)
Y_l.append(y.argmax())
X_l = numpy.asarray(X_l)
Y_l = numpy.asarray(Y_l)
Y_l_onehot = numpy.asarray(Y_l_onehot)
boundary = round(len(X_l)*(70/100))
X_l_train, X_l_test = X_l[:boundary], X_l[boundary:]
Y_l_train, Y_l_test = Y_l[:boundary], Y_l[boundary:]
Y_l_train_onehot, Y_l_test_onehot = Y_l_onehot[:boundary], Y_l_onehot[boundary:]
s = X_l_train.shape
X_l_train_flat, X_l_test_flat = X_l_train.reshape(-1, s[-1] * s[-2]), X_l_test.reshape(-1, s[-1] * s[-2])
print(f"X_l_train: {X_l_train.shape}, X_l_test: {X_l_test.shape}")
print(f"Y_l_train: {Y_l_train.shape}, Y_l_test: {Y_l_test.shape}")
print(f"Y_l_train_onehot: {Y_l_train_onehot.shape}, Y_l_test_onehot: {Y_l_test_onehot.shape}")
print(f"X_l_train_flat: {X_l_train_flat.shape}, X_l_test_flat: {X_l_test_flat.shape}")
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Softmax, Conv1D, MaxPooling1D, Flatten
cnn_l_3 = Sequential()
cnn_l_3.add(Conv1D(filters=42, kernel_size=3, activation='relu', input_shape=(X_l_train.shape[1:])))
cnn_l_3.add(Dropout(0.3))
cnn_l_3.add(MaxPooling1D(pool_size=2))
cnn_l_3.add(Flatten())
cnn_l_3.add(Dropout(0.4))
cnn_l_3.add(Dense(100, activation="relu"))
cnn_l_3.add(Dropout(0.3))
cnn_l_3.add(Dense(windows.num_classes, activation="softmax"))
cnn_l_3.compile(
loss='categorical_crossentropy',
optimizer='adam',
metrics=["categorical_accuracy"]
)
cnn_l_3.fit(X_l_train, Y_l_train_onehot, epochs=30, batch_size=32, verbose=2)
print(f"CNN_L_3 Accuracy: {cnn_l_3.evaluate(X_l_test, Y_l_test_onehot)[1]:.1%}")
y_l_pred_onehot = cnn_l_3.predict(X_l_test)
y_l_pred = y_l_pred_onehot.argmax(axis=1)
from sklearn.metrics import precision_recall_fscore_support
prec, recall, fscore, sup = precision_recall_fscore_support(Y_l_test, y_l_pred, average='macro')
print(f"Precision: {prec:.1%}, Recall: {recall:.1%}, Fscore: {fscore:.1%}")
from sklearn.metrics import roc_auc_score
auc = roc_auc_score(Y_l_test_onehot, y_l_pred_onehot, multi_class="ovr")
print(f"Roc Auc Score: {auc:.1%}")
confusion(y_l_pred, Y_l_test, sorted(dataset.ACTIVITIES.values()), "1D CNN Long")
###Output
_____no_output_____
###Markdown
Saving Models
- KNN { K=1 }
- MLP
- LSTM with 2 LSTM layers
- 1D CNNs
###Code
from joblib import dump
dump(mlp, 'mlp.joblib')
dump(knn_1, 'knn.joblib')
cnn_3.save("models/classifiers/cnn_classifier.h5")
cnn_s_3.save("models/classifiers/cnn_s_classifier.h5")
cnn_l_3.save("models/classifiers/cnn_l_classifier.h5")
lstm_2.save("models/classifiers/lstm_classifier.h5")
###Output
_____no_output_____
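###Markdown
A minimal sketch of how these saved artifacts could be loaded back for inference (my own illustration, assuming the file names written above and feature arrays shaped like X_test_flat / X_test):
###Code
from joblib import load
from keras.models import load_model

knn_clf = load('knn.joblib')                                    # scikit-learn models saved with joblib
mlp_clf = load('mlp.joblib')
cnn_clf = load_model("models/classifiers/cnn_classifier.h5")    # Keras models saved with model.save()
lstm_clf = load_model("models/classifiers/lstm_classifier.h5")

knn_preds = knn_clf.predict(X_test_flat[:5])                    # flat windows for the scikit-learn models
cnn_preds = cnn_clf.predict(X_test[:5]).argmax(axis=1)          # windowed tensors for the Keras models
###Output
_____no_output_____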
###Markdown
Open Data
###Code
with open("data/processed/processed_data.json") as pfile:
info = json.load(pfile)
df = pd.DataFrame(info["df"])
train_indices = info["train_indices"]
test_indices = info["test_indices"]
val_indices = info["val_indices"]
print(np.intersect1d(train_indices, test_indices))
print(np.intersect1d(train_indices, val_indices))
print(np.intersect1d(test_indices, val_indices))
encoder = {
"gender": LabelEncoder(),
"status": LabelEncoder()
}
gender_vec = encoder["gender"].fit_transform(df["Gender"])
status_vec = encoder["status"].fit_transform(df["Status"])
age_vec = df["Age"].values
X = df["features"].values.tolist()
X = np.asarray([np.array(x) for x in X])
X_train = X[train_indices]
X_val = X[val_indices]
X_test = X[test_indices]
###Output
[]
[]
[]
###Markdown
Gender Classifier
###Code
gender_train = gender_vec[train_indices]
gender_val = gender_vec[val_indices]
gender_test = gender_vec[test_indices]
pipe = Pipeline(steps=[('estimator', SVC())])
params_grid = [{
'estimator':[SVC(max_iter=10000)],
'estimator__C': np.logspace(-3, 6, num=20, base=2),
'estimator__gamma': np.logspace(-3, 6, num=20, base=2),
'estimator__kernel': ['linear', 'rbf']
},
{
'estimator': [RandomForestClassifier()],
'estimator__max_depth': list(range(1, 30))
},
]
gender_clf = GridSearchCV(pipe, params_grid)
gender_clf.fit(np.concatenate((X_train, X_val)),
np.concatenate((gender_train, gender_val)))
gender_clf.best_params_
print(f"Classification report for gender classifier:\n"
f"{classification_report(gender_test, gender_clf.predict(X_test))}\n")
###Output
Classification report for gender classifier:
precision recall f1-score support
0 0.86 0.87 0.87 182
1 0.93 0.92 0.92 327
accuracy 0.90 509
macro avg 0.89 0.90 0.90 509
weighted avg 0.90 0.90 0.90 509
###Markdown
Status Classifier
###Code
status_train = status_vec[train_indices]
status_val = status_vec[val_indices]
status_test = status_vec[test_indices]
pipe = Pipeline(steps=[('estimator', SVC())])
params_grid = [
{
'estimator':[SVC(max_iter=10000)],
'estimator__C': np.logspace(-3, 6, num=20, base=2),
'estimator__gamma': np.logspace(-3, 6, num=20, base=2),
'estimator__kernel': ['linear', 'rbf']
},
{
'estimator': [RandomForestClassifier()],
'estimator__max_depth': list(range(1, 30))
},
{
'estimator': [KNeighborsClassifier()],
'estimator__n_neighbors': list(range(3, 15))
},
{
'estimator': [GaussianNB()],
},
]
status_clf = GridSearchCV(pipe, params_grid)
status_clf.fit(np.concatenate((X_train, X_val)),
np.concatenate((status_train, status_val)))
status_clf.best_params_
print(f"Classification report for status classifier:\n"
f"{classification_report(status_test, status_clf.predict(X_test))}\n")
###Output
Classification report for status classifier:
precision recall f1-score support
0 0.41 0.31 0.35 195
1 0.23 0.25 0.24 154
2 0.33 0.39 0.36 160
accuracy 0.32 509
macro avg 0.32 0.32 0.32 509
weighted avg 0.33 0.32 0.32 509
###Markdown
Age Predictor
###Code
age_train = age_vec[train_indices]
age_val = age_vec[val_indices]
age_test = age_vec[test_indices]
age_regressor = linear_model.LinearRegression()
age_regressor.fit(np.concatenate((X_train, X_val)),
np.concatenate((age_train, age_val)))
print(f"Classification report for age predictor:\n"
f"MSE = {mean_squared_error(age_test, age_regressor.predict(X_test), squared=True)}\n"
f"RMSE = {mean_squared_error(age_test, age_regressor.predict(X_test), squared=False)}\n"
f"R2 Score = {r2_score(age_test, age_regressor.predict(X_test))}\n")
svr_clf = GridSearchCV(SVR(max_iter=10000), {'C': np.linspace(1, 50, num=10),
'epsilon': np.logspace(-10, -3, num=2, base=2)})
svr_clf.fit(np.concatenate((X_train, X_val)),
np.concatenate((age_train, age_val)))
svr_clf.best_params_
print(f"Classification report for age predictor:\n"
f"MSE = {mean_squared_error(age_test, svr_clf.predict(X_test), squared=True)}\n"
f"RMSE = {mean_squared_error(age_test, svr_clf.predict(X_test), squared=False)}\n"
f"R2 Score = {r2_score(age_test, svr_clf.predict(X_test))}\n")
###Output
_____no_output_____
###Markdown
Sentiment analysis of hotel reviews. Implementation of a data science process for predicting the sentiment contained in hotel reviews by building a binary classification model. Libraries
###Code
# Data handling
import numpy as np
import pandas as pd
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
# Preprocessing
from nltk.tokenize import word_tokenize
from nltk.stem.snowball import ItalianStemmer
from nltk.corpus import stopwords as nltksw
from sklearn.feature_extraction.text import TfidfVectorizer
import re
import emoji
# Classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB, ComplementNB
from sklearn.svm import SVC
sns.set(style="whitegrid")
###Output
_____no_output_____
###Markdown
Data exploration
###Code
df = pd.read_csv("data/dataset.csv")
###Output
_____no_output_____
###Markdown
Structure
###Code
df
###Output
_____no_output_____
###Markdown
Missing values
###Code
n_null = df.isnull().sum().sum()
n_empty = df[df['text'] == ""].shape[0]
print(f"Missing values: {n_null}")
print(f"Number of empty reviews: {n_empty}")
###Output
Missing values: 0
Number of empty reviews: 0
###Markdown
Class distribution
###Code
classes = set(df['class'])
classes
fig, ax = plt.subplots(figsize=(4, 3), dpi=100)
ax = sns.countplot(df['class'])
ax.set_title("Class distribution")
plt.xlabel('Class label')
plt.ylabel('Count')
plt.show()
class_count = df['class'].value_counts()
class_count/class_count.sum()
###Output
_____no_output_____
###Markdown
Length distribution
###Code
df['len'] = df['text'].str.len() # Add a column with review length to the dataframe
pct95 = int(np.ceil(df['len'].quantile(.95)))
pct50 = int(np.ceil(df['len'].quantile(.5)))
print(f"50th percentile: {pct50}, 95th percentile: {pct95}")
fig, ax = plt.subplots(figsize=(8, 3), dpi=100)
ax = sns.distplot(df['len'].loc[(df['class'] == 'pos') & (df['len'] < pct95)], bins=50, norm_hist=True, kde=False, label="Positive")
ax = sns.distplot(df['len'].loc[(df['class'] == 'neg') & (df['len'] < pct95)], bins=50, norm_hist=True, kde=False, label="Negative")
ax.set_title("Review length distribution")
plt.xlabel("Number of characters")
plt.ylabel("Frequency")
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Text contents
###Code
emoticon_chars = ["❤️", "😠", ":\)", ":\("]
count = 0
for c in emoticon_chars:
c_count = df['text'].str.contains(c).sum()
if c_count > 0:
count += c_count
count
###Output
_____no_output_____
###Markdown
Preprocessing Tokenization with stemmer
###Code
stemmer = ItalianStemmer()
stopwords = set(nltksw.words('italian')) - {"non"} | {"hotel"}
re_alphabet = re.compile('[^a-zA-Z]') # Matches any non-alphabet character
repeat_pattern = re.compile(r'^(\w)\1*|(\w)\2*$') # Matches repeating characters at the beginning and at the end of the string
# Tokens to save
punctuation = {'?', '!', '€', '$'}
emoticons = {':)', ':('}
def spell(s):
count = 0
pc = ''
ns = ''
# Correct characters repeating more than once at the beginning and the end of the string
s = repeat_pattern.sub(r'\1\2', s)
# Correct characters repeating more than twice
for c in s:
if pc == c:
count += 1
else:
count = 1
if count <= 2:
ns += c
pc = c
return ns
def tokenizer(s):
tokens = []
for e in emoticons: # Handle text emoticons
for _ in range(0, s.count(e)):
tokens.append(e)
for token in word_tokenize(s): # Tokenize the text and split off punctuation
token = re.split("(\W+|\d+)", token) # Split tokens on any non-alphanumeric character (e.g. apostrophes)
for t in token:
emojis = emoji.emoji_lis(t) # Handle Unicode emojis
if len(emojis) > 0:
for e in emojis:
tokens.append(emoji.demojize(e['emoji']))
continue
if t not in punctuation:
t = re_alphabet.sub('', t) # Delete non-alphabetic characters
if len(t) < 3 or len(t) > 20:
continue
t = spell(t)
if t in stopwords:
continue
t = stemmer.stem(t)
tokens.append(t)
return tokens
###Output
_____no_output_____
###Markdown
Tokenization test
###Code
test_string = "Ciao, questo è un testo di pppproooovaaaa! Proooova! €100 ❤️😠 :)) :( :)"
tokenizer(test_string)
###Output
_____no_output_____
###Markdown
Feature extraction
###Code
vectorizer_unbounded = TfidfVectorizer(input='content', tokenizer=tokenizer, ngram_range=(1,2))
X_unbounded = vectorizer_unbounded.fit_transform(df['text'])
vectorizer = TfidfVectorizer(input='content', tokenizer=tokenizer, min_df=2, max_features=15000, ngram_range=(1,2))
X = vectorizer.fit_transform(df['text'])
###Output
_____no_output_____
###Markdown
Feature exploration
###Code
X_unbounded # without max_features=15000
X # with max_features=15000
###Output
_____no_output_____
###Markdown
Wordclouds
###Code
def word_frequency(X, word_index):
"""Generate a dictionary that maps words to their frequency.
:input X: sparse matrix with count values
:word_index: dictionary (string to int) that maps words to column ids of X
:return word_frequency: dictionary (string to float) that maps words to frequency
"""
X_count = np.squeeze(np.asarray(X.sum(axis=0))) # array with shape (n_features,)
X_total = X_count.sum()
word_frequency = {k: X_count[v]/X_total for k,v in word_index.items()}
return word_frequency
for c in classes:
idx = df[df['class'] == c].index
f = word_frequency(X[idx], vectorizer.vocabulary_)
if c == "pos":
cm = "viridis"
title = "Positive reviews"
elif c == "neg":
cm = "plasma"
title = "Negative reviews"
wc = WordCloud(background_color='white', width=2000, height=1400, colormap=cm)
wordcloud = wc.generate_from_frequencies(f)
fig, ax = plt.subplots(figsize=(7, 5), dpi=100)
ax.set_title(title)
ax.imshow(wordcloud, interpolation='bilinear')
ax.axis("off")
###Output
_____no_output_____
###Markdown
Classification algorithm Train and test split
###Code
X_train, X_test, \
y_train, y_test = train_test_split(X, df['class'], test_size=0.2, stratify=df['class'], random_state=42)
print(f"Training samples: {X_train.shape[0]} Test samples: {X_test.shape[0]}")
def clf_metrics(y_test, y_pred, title=""):
print(classification_report(y_test, y_pred, digits=4))
    fig, ax = plt.subplots(figsize=(3, 3), dpi=100)
if title:
ax.set_title(title)
conf_mat = confusion_matrix(y_test, y_pred)
label_names = ["neg", "pos"]
conf_mat_df = pd.DataFrame(conf_mat, index = label_names, columns = label_names)
conf_mat_df.index.name = 'Actual'
conf_mat_df.columns.name = 'Predicted'
sns.heatmap(conf_mat_df, annot=True, cmap='GnBu',
annot_kws={"size": 18}, fmt='g', cbar=False)
###Output
_____no_output_____
###Markdown
Naive Bayes Dataset normalization (min-max scaling)
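As a brief note of my own (not from the original), min-max scaling maps each TF-IDF feature $x$ to $x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$, so every column lies in $[0, 1]$; this keeps the inputs non-negative, as the multinomial and complement naive Bayes models require.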
###Code
scaler = MinMaxScaler(feature_range=(0,1))
X_train_scaled = scaler.fit_transform(X_train.todense())
X_test_scaled = scaler.transform(X_test.todense())
###Output
_____no_output_____
###Markdown
Multinomial naive Bayes (without cross validation; for preprocessing tuning)
###Code
clf = MultinomialNB()
clf.fit(X_train_scaled, y_train)
y_pred = clf.predict(X_test_scaled)
clf_metrics(y_test, y_pred)
###Output
precision recall f1-score support
neg 0.9169 0.9452 0.9308 1844
pos 0.9738 0.9596 0.9666 3907
accuracy 0.9550 5751
macro avg 0.9453 0.9524 0.9487 5751
weighted avg 0.9555 0.9550 0.9551 5751
###Markdown
Multinomial naive Bayes
###Code
params = {'alpha': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]}
nb = MultinomialNB()
clf = GridSearchCV(nb, param_grid=params, scoring='f1_weighted', cv=5, n_jobs=3)
clf.fit(X_train_scaled, y_train)
clf.best_params_
y_pred = clf.predict(X_test_scaled) # call predict on the estimator with the best found parameters
clf_metrics(y_test, y_pred, "Multinomial naive Bayes")
###Output
precision recall f1-score support
neg 0.9200 0.9420 0.9309 1844
pos 0.9723 0.9614 0.9668 3907
accuracy 0.9551 5751
macro avg 0.9462 0.9517 0.9488 5751
weighted avg 0.9555 0.9551 0.9553 5751
###Markdown
Complement naive Bayes
###Code
params = {'alpha': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]}
nb = ComplementNB()
clf = GridSearchCV(nb, param_grid=params, scoring='f1_weighted', cv=5, n_jobs=3)
clf.fit(X_train_scaled, y_train)
clf.best_params_
y_pred = clf.predict(X_test_scaled)
clf_metrics(y_test, y_pred, "Complement naive Bayes")
###Output
precision recall f1-score support
neg 0.9109 0.9534 0.9316 1844
pos 0.9775 0.9560 0.9666 3907
accuracy 0.9551 5751
macro avg 0.9442 0.9547 0.9491 5751
weighted avg 0.9561 0.9551 0.9554 5751
###Markdown
Support vector machine
###Code
params = {
'C': [0.01, 0.1, 1, 10, 100],
'kernel': ['linear', 'poly', 'rbf', 'sigmoid']
}
svc = SVC(random_state=42)
clf = GridSearchCV(svc, param_grid=params, scoring='f1_weighted', cv=5, n_jobs=3)
clf.fit(X_train, y_train)
clf.best_params_
y_pred = clf.predict(X_test)
clf_metrics(y_test, y_pred, "Support vector machine")
###Output
precision recall f1-score support
neg 0.9496 0.9403 0.9450 1844
pos 0.9720 0.9765 0.9742 3907
accuracy 0.9649 5751
macro avg 0.9608 0.9584 0.9596 5751
weighted avg 0.9648 0.9649 0.9648 5751
###Markdown
1. Load the data!
###Code
import os
import json
import h5py
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Let's check the classes we have to classify ❕ The data is originally divided into major/middle/minor/detailed categories, but we will classify only the major category!
###Code
cate = json.loads(open('./data/cate1.json', 'rb').read().decode('utf-8'))
count_big = len(cate['b']);
print(f'대분류 갯수 : {count_big}')
str(cate['b'])[:500] # peek at a few of the major categories!
cate_dict = {v:k for k,v in cate['b'].items()} # build a dictionary with the keys and values swapped!
cate_dict[27]
###Output
_____no_output_____
###Markdown
Let's take a look at the data we need to train on
###Code
data = h5py.File('./data/train.chunk.01', 'r')
train = data['train']
train.keys() # the features of each product
###Output
_____no_output_____
###Markdown
bcateid | brand | dcateid | img_feat | maker | mcateid | model | pid | price | product | scateid | updttm
-------- | ------ | -------- | -------- | ------ | -------- | ------ | ------ | ---- | ------ | --------- | -------------
Major category ID | Brand | Detailed category ID | Image features | Maker | Middle category ID | Product ID | Product ID | Price | Product name | Minor category ID | Update time
###Code
# look at only the first record!
for i in train.keys():
sample = train[i][0]
if i in ['brand', 'product', 'maker', 'model']:
sample = sample.decode('utf-8')
print(f'{i}: {sample}');
train["img_feat"]
###Output
_____no_output_____
###Markdown
~~I thought this open-source project would be fun because I'd get to do image classification, but the images are not raw pixels, they have already been processed into features.. I guess there's no way to turn these back into the original images..?😂~~
###Code
count_data = len(train["bcateid"])
print(f'데이터 갯수 : {count_data}')
###Output
데이터 갯수 : 1000000
###Markdown
2. Split the train data into Train / Validation / Test!
###Code
from keras.utils.np_utils import to_categorical
on_hot_label = to_categorical(train["bcateid"]) # one-hot encode the major category!
on_hot_label
# train : validation : test = 640000 : 160000 : 200000 (the slices below actually use 6400 / 1600 / the rest)
X_train = train["img_feat"][:6400]
X_val = train["img_feat"][6400:6400+1600]
X_test = train["img_feat"][6400+1600:]
y_train = on_hot_label[:6400]
y_val = on_hot_label[6400:6400+1600]
y_test = on_hot_label[6400+1600:]
y_train
###Output
_____no_output_____
###Markdown
3. Model training. I should preprocess the data first, but because of the small yet painful errors I ran into earlier, and since time is short, I'll just jump straight into training..! ❕ The model will be trained on img_feat only! ☑️ Hyperparameter

| Hyperparameter | My Model |
| :---------- | :---------------------------------------------- |
| input neurons | 2048 (the columns of img_feat) |
| hidden layers | 1 |
| neurons per hidden layer | 100 |
| output neurons | 57 (the number of major categories) |
| output layer activation | Softmax (each item can belong to only one class!) |
| Loss function | Cross entropy (because it is a classification problem!) |

*For some reason an error told me to change the output neurons to 53, so I changed 57 -> 53!!*
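As a rough sanity check of my own (not part of the original notebook), the dense layers in the table above amount to roughly 4.4M trainable parameters when using the 53-way output that the code below actually uses:
###Code
# Dense layer parameters = inputs * units + units (biases); sizes taken from the table above
layer1 = 2048 * 2048 + 2048   # first Dense(2048) fed by the 2048-dim img_feat vector
layer2 = 2048 * 100 + 100     # hidden Dense(100)
layer3 = 100 * 53 + 53        # output Dense(53)
print(layer1 + layer2 + layer3)  # 4,406,605 parameters in total
###Output
_____no_output_____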
###Code
from keras.models import Sequential
from keras import layers
model = Sequential()
model.add(layers.Dense(2048, activation='relu'))# input layer
model.add(layers.Dense(100, activation='relu')) # hidden layer
model.add(layers.Dense(53, activation='softmax')) # output layer
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
history = model.fit(X_train,
y_train,
epochs=20,
validation_data=(X_val, y_val))
###Output
Epoch 1/20
200/200 [==============================] - 29s 128ms/step - loss: 1.3636 - accuracy: 0.6205 - val_loss: 1.6307 - val_accuracy: 0.5938
Epoch 2/20
200/200 [==============================] - 24s 122ms/step - loss: 1.0001 - accuracy: 0.7055 - val_loss: 1.8160 - val_accuracy: 0.5494
Epoch 3/20
200/200 [==============================] - 26s 131ms/step - loss: 0.7387 - accuracy: 0.7770 - val_loss: 1.7530 - val_accuracy: 0.6031
Epoch 4/20
200/200 [==============================] - 28s 138ms/step - loss: 0.5332 - accuracy: 0.8352 - val_loss: 1.8920 - val_accuracy: 0.5944
Epoch 5/20
200/200 [==============================] - 26s 129ms/step - loss: 0.3823 - accuracy: 0.8752 - val_loss: 2.0299 - val_accuracy: 0.5900
Epoch 6/20
200/200 [==============================] - 26s 131ms/step - loss: 0.2854 - accuracy: 0.9048 - val_loss: 2.1851 - val_accuracy: 0.6225
Epoch 7/20
200/200 [==============================] - 26s 129ms/step - loss: 0.2231 - accuracy: 0.9267 - val_loss: 2.5337 - val_accuracy: 0.6112
Epoch 8/20
200/200 [==============================] - 26s 132ms/step - loss: 0.2001 - accuracy: 0.9328 - val_loss: 2.5250 - val_accuracy: 0.6081
Epoch 9/20
200/200 [==============================] - 27s 135ms/step - loss: 0.1728 - accuracy: 0.9459 - val_loss: 2.7144 - val_accuracy: 0.5950
Epoch 10/20
200/200 [==============================] - 25s 127ms/step - loss: 0.1657 - accuracy: 0.9495 - val_loss: 3.0994 - val_accuracy: 0.6175
Epoch 11/20
200/200 [==============================] - 26s 132ms/step - loss: 0.1601 - accuracy: 0.9544 - val_loss: 2.9843 - val_accuracy: 0.6156
Epoch 12/20
200/200 [==============================] - 26s 133ms/step - loss: 0.1312 - accuracy: 0.9631 - val_loss: 3.4283 - val_accuracy: 0.6156
Epoch 13/20
200/200 [==============================] - 27s 136ms/step - loss: 0.1190 - accuracy: 0.9633 - val_loss: 3.5594 - val_accuracy: 0.5856
Epoch 14/20
200/200 [==============================] - 24s 123ms/step - loss: 0.1246 - accuracy: 0.9630 - val_loss: 3.6832 - val_accuracy: 0.6275
Epoch 15/20
200/200 [==============================] - 26s 132ms/step - loss: 0.1180 - accuracy: 0.9656 - val_loss: 4.3800 - val_accuracy: 0.6212
Epoch 16/20
200/200 [==============================] - 26s 129ms/step - loss: 0.1147 - accuracy: 0.9664 - val_loss: 4.1363 - val_accuracy: 0.6094
Epoch 17/20
200/200 [==============================] - 27s 134ms/step - loss: 0.1193 - accuracy: 0.9686 - val_loss: 4.6733 - val_accuracy: 0.6144
Epoch 18/20
200/200 [==============================] - 26s 129ms/step - loss: 0.1090 - accuracy: 0.9702 - val_loss: 5.4830 - val_accuracy: 0.6269
Epoch 19/20
200/200 [==============================] - 27s 133ms/step - loss: 0.1156 - accuracy: 0.9711 - val_loss: 4.7037 - val_accuracy: 0.6288
Epoch 20/20
200/200 [==============================] - 26s 129ms/step - loss: 0.1095 - accuracy: 0.9703 - val_loss: 5.0579 - val_accuracy: 0.6144
###Markdown
Let's plot the results!
###Code
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and Validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Heh, it overfit..!! 😅
###Code
model.evaluate(X_test, y_test)
###Output
31000/31000 [==============================] - 287s 9ms/step - loss: 4.9703 - accuracy: 0.6281
###Markdown
Submission of [shopping-category-classifier](https://github.com/im9uri/shopping-category-classifier) Read data
###Code
import pandas as pd
train_df = pd.read_pickle("../data/soma_goods_train.df")
train_df.shape
train_df.head(2)
###Output
_____no_output_____
###Markdown
* cate1 is the major category, cate2 the middle category, cate3 the minor category
* 10000 training samples in total
* Download the image data for the ids above: https://www.dropbox.com/s/q0qmx3qlc6gfumj/soma_train.tar.gz
###Code
test_df = pd.read_pickle("../data/test.df")
test_df.shape
test_df.tail(2)
###Output
_____no_output_____
###Markdown
Feature Engineering
* Build combinations of the name field in as many different ways as possible and add them as dataframe columns

1. Korean morphemes: split into morphemes with [KoNLPy](http://konlpy.org/)
###Code
from konlpy.tag import Kkma
from konlpy.utils import pprint
def remove_big_words(s):
    # drop words containing 'ㅁㅁㅁㅁ...' (a KoNLPy bug can cause an OOM crash on very long tokens, so handle this case lightly for now)
ns = ''
for i in s.split():
if(u'ㅁㅁㅁㅁ' not in i):
ns += i
return ns
def get_korean_nouns(s):
s = remove_big_words(s)
return ' '.join(kkma.nouns(s))
kkma = Kkma()
train_df['name_korean_nouns'] = train_df['name'].map(get_korean_nouns)
test_df['name_korean_nouns'] = test_df['name'].map(get_korean_nouns)
train_df.head(2)
###Output
_____no_output_____
###Markdown
2. Keep only the digits and alphabetic characters
###Code
def get_is_alnum(s):
alnum = ''
for c in str(s):
if c.isalnum():
alnum += c
else:
alnum += ' '
return alnum
train_df['name_alnum'] = train_df['name'].map(get_is_alnum)
test_df['name_alnum'] = test_df['name'].map(get_is_alnum)
train_df.head(2)
###Output
_____no_output_____
###Markdown
3. Add the original name data and the data created above together into a single column
###Code
train_df['name_total'] = train_df['name'] + ' ' + train_df['name_korean_nouns'] + ' ' + train_df['name_alnum']
test_df['name_total'] = test_df['name'] + ' ' + test_df['name_korean_nouns'] + ' ' + test_df['name_alnum']
train_df.head(2)
###Output
_____no_output_____
###Markdown
4. Use [Gensim's word2vec](http://rare-technologies.com/word2vec-tutorial/) to extract neural-network outputs for the words
###Code
#put all the word combinations we built into a single list
sentences = []
for s in train_df['name_total'].tolist():
s = s.lower()
sentences.append(s.split())
for s in test_df['name_total'].tolist():
s = s.lower()
sentences.append(s.split())
#build the Word2Vec model
from gensim.models import Word2Vec
#these parameters are rough first guesses; there is room for tuning
size = 1000
min_count = 1
model = Word2Vec(sentences, size=size, min_count=1)
###Output
Using gpu device 0: GRID K520 (CNMeM is enabled with initial size: 95.0% of memory, cuDNN 4007)
###Markdown
4.1 Sum the outputs of each word in name_total and put the result into a matrix
###Code
import numpy as np
word_matrix = np.zeros(shape=(train_df.shape[0], size))
index = 0
for s in train_df['name_total'].tolist():
s = s.lower()
s_matrix = np.zeros(shape=(size,))
for w in s.split():
try:
            s_matrix += model[w]  # sum the vectors of every word in name_total
except KeyError:
print('KeyError ' + w)
word_matrix[index] = s_matrix
index += 1
word_matrix.shape
word_matrix_test = np.zeros(shape=(test_df.shape[0], size))
index = 0
for s in test_df['name_total'].tolist():
s = s.lower()
s_matrix = np.zeros(shape=(size,))
for w in s.split():
try:
            s_matrix += model[w]  # sum the vectors of every word in name_total
except KeyError:
print('KeyError ' + w)
word_matrix_test[index] = s_matrix
index += 1
word_matrix_test.shape
###Output
_____no_output_____
###Markdown
4.2 Reduce the dimensionality of the high-dimensional output with PCA
###Code
from sklearn.decomposition import PCA
#these parameters are rough first guesses; there is room for tuning
size = 20
pca = PCA(n_components=size, whiten=True)
pca.fit(word_matrix)
pca.explained_variance_ratio_
word_matrix = pca.transform(word_matrix)
word_matrix_test = pca.transform(word_matrix_test)
print(word_matrix.shape, word_matrix_test.shape)
###Output
(10000, 20) (4807, 20)
###Markdown
4.3 Add the PCA-transformed data as columns
###Code
columns = ['word2vec_' + str(i) for i in range(0,size)]
train_word_df = pd.DataFrame(word_matrix, columns=columns, index=train_df.index)
train_df = pd.concat([train_df, train_word_df], axis=1)
test_word_df = pd.DataFrame(word_matrix_test, columns=columns, index=test_df.index)
test_df = pd.concat([test_df, test_word_df], axis=1)
train_df.head(2)
###Output
_____no_output_____
###Markdown
5. Use the [VGG19 model](https://gist.github.com/baraldilorenzo/8d096f48a1be4a2d660d) to transform the images into a different output 5.1 Define Model
###Code
from keras.models import Sequential
from keras.layers.core import Flatten, Dense, Dropout
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD
import numpy as np
import cv2
def VGG_19(weights_path=None):
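    # VGG-19: five zero-padded 3x3 conv blocks (2x64, 2x128, 4x256, 4x512, 4x512 filters),
    # each followed by 2x2 max pooling, then Dense 4096 -> 4096 -> 1000-way softmax.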
model = Sequential()
model.add(ZeroPadding2D((1,1),input_shape=(3,224,224)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(Flatten())
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1000, activation='softmax'))
if weights_path:
model.load_weights(weights_path)
return model
model = VGG_19('../model/vgg19_weights.h5')
#the trained model emits 1000 class outputs, which we don't need here, so remove the last 2 layers
model.layers.pop()
model.layers.pop()
#in the current Keras version (1.0.8) pop() alone does not actually detach the layer, so add the code below
#https://github.com/fchollet/keras/issues/2371#issuecomment-211120172
model.outputs = [model.layers[-1].output]
model.layers[-1].outbound_nodes = []
###Output
_____no_output_____
###Markdown
5.2 Store the output for each image, and insert the mean value when an image is missing
###Code
start = 0
no_img_list = []
fc_matrix = np.zeros(shape=(10000, 4096))
for i in train_df.index.values:
f = '../data/soma_train/' + str(i) + '.jpg'
f = cv2.imread(f)
if(f is None):
no_img_list.append(start)
start += 1
continue
#resize image to fit input
im = cv2.resize(f, (224, 224)).astype(np.float32)
im[:,:,0] -= 103.939
im[:,:,1] -= 116.779
im[:,:,2] -= 123.68
im = im.transpose((2,0,1))
im = np.expand_dims(im, axis=0)
#add to array
fc_matrix[start] = model.predict_proba(im, verbose=False)
start += 1
print(str(len(no_img_list)) + " data with no images")
# fill the rows whose index is in no_img_list with the mean feature vector
avg = np.average(fc_matrix, axis=0)
for i in no_img_list:
fc_matrix[i] = avg
print(fc_matrix.shape)
start = 0
no_img_list = []
fc_matrix_test = np.zeros(shape=(4807, 4096))
for i in test_df.index.values:
f = '../data/soma_test/' + str(i) + '.jpg'
f = cv2.imread(f)
if(f is None):
no_img_list.append(start)
start += 1
continue
#resize image to fit input
im = cv2.resize(f, (224, 224)).astype(np.float32)
im[:,:,0] -= 103.939
im[:,:,1] -= 116.779
im[:,:,2] -= 123.68
im = im.transpose((2,0,1))
im = np.expand_dims(im, axis=0)
#add to array
fc_matrix_test[start] = model.predict_proba(im, verbose=False)
start += 1
print(str(len(no_img_list)) + " data with no images")
# fill the rows whose index is in no_img_list with the mean feature vector
avg = np.average(fc_matrix_test, axis=0)
for i in no_img_list:
fc_matrix_test[i] = avg
print(fc_matrix_test.shape)
###Output
/usr/local/lib/python3.4/dist-packages/keras/models.py:760: UserWarning: Network returning invalid probability values. The last layer might not normalize predictions into probabilities (like softmax or sigmoid would).
warnings.warn('Network returning invalid probability values. '
###Markdown
5.3 Reduce the dimensionality of the high-dimensional output with PCA
###Code
from sklearn.decomposition import PCA
#these parameters are rough first guesses; there is room for tuning
size = 20
pca = PCA(n_components=size, whiten=True)
pca.fit(fc_matrix)
pca.explained_variance_ratio_
fc_matrix = pca.transform(fc_matrix)
fc_matrix_test = pca.transform(fc_matrix_test)
print(fc_matrix.shape, fc_matrix_test.shape)
###Output
(10000, 20) (4807, 20)
###Markdown
5.4 Add the PCA-transformed data as columns
###Code
columns = ['img_' + str(i) for i in range(0,size)]
train_img_df = pd.DataFrame(fc_matrix, columns=columns, index=train_df.index)
train_df = pd.concat([train_df, train_img_df], axis=1)
test_img_df = pd.DataFrame(fc_matrix_test, columns=columns, index=test_df.index)
test_df = pd.concat([test_df, test_img_df], axis=1)
train_df.head(2)
###Output
_____no_output_____
###Markdown
Save the output so far to files
###Code
import pickle
with open('../data/train_feat_eng.df', 'wb') as handle:
pickle.dump(train_df, handle)
with open('../data/test_feat_eng.df', 'wb') as handle:
pickle.dump(test_df, handle)
###Output
_____no_output_____
###Markdown
6. Convert the data into a form the model can consume 6.1 Use TfidfVectorizer to build n-gram combinations of the words and store them in numeric form.
* CountVectorizer converts plain text into numeric word ids together with their frequency counts.
* To do this, it first assigns an id to every word.
* Then it converts the training data into those word ids and their frequencies. (This is the usual step that turns text we understand into a form the computer can understand.)
* For example, given the product name '베네통키즈 키즈 러블리 키즈' and word ids 베네통키즈 - 1, 키즈 - 2, 러블리 - 3, the product name is converted into (1,1), (2,2), (3,1). (The first number is the word id, the second its frequency.)
* TfidfVectorizer is a CountVectorizer whose counts are additionally transformed with tf-idf.
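A tiny illustration of this mapping (my own toy example, separate from the actual pipeline below), using the product name from the bullet above:
###Code
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

toy_docs = ['베네통키즈 키즈 러블리 키즈']
cv = CountVectorizer()
counts = cv.fit_transform(toy_docs)
print(cv.vocabulary_)    # every word is assigned an integer id
print(counts.toarray())  # per-document counts, e.g. '키즈' appears twice

tfidf_toy = TfidfVectorizer().fit_transform(toy_docs)
print(tfidf_toy.toarray())  # same ids, but the counts are re-weighted by tf-idf
###Output
_____no_output_____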
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 6), strip_accents='unicode')
x_list = vectorizer.fit_transform(train_df['name_total'].tolist())
print(x_list.shape)
###Output
(10000, 553489)
###Markdown
6.2 Append the word2vec and img columns to x_list
###Code
from scipy import sparse
from scipy.sparse import hstack
#add word2vec columns
word_columns = ['word2vec_' + str(i) for i in range(0,20)]
word_matrix = sparse.csr_matrix(train_df[word_columns].values)
x_list = hstack([x_list, word_matrix])
#add image columns
img_columns = ['img_' + str(i) for i in range(0,20)]
img_matrix = sparse.csr_matrix(train_df[img_columns].values)
x_list = hstack([x_list, img_matrix])
x_list.shape
###Output
_____no_output_____
###Markdown
6.3 Join the 3 categories into the submission-format shape
###Code
y_list = []
for each in train_df.iterrows():
s = ';'.join([each[1]['cate1'], each[1]['cate2'], each[1]['cate3']])
y_list.append(s)
###Output
_____no_output_____
###Markdown
Train Model * Simply use a Support Vector Machine.
###Code
from sklearn.svm import LinearSVC
from sklearn.grid_search import GridSearchCV
from sklearn.cross_validation import cross_val_score
###Output
_____no_output_____
###Markdown
1. Find the most suitable C value with GridSearch
###Code
svc_param = {'C':np.logspace(-1, 1.5, 30)}
print(svc_param['C'])
gs_svc = GridSearchCV(LinearSVC(),svc_param,cv=4,n_jobs=-1, verbose=1)
gs_svc.fit(x_list, y_list)
print(gs_svc.best_params_, gs_svc.best_score_)
###Output
Fitting 4 folds for each of 30 candidates, totalling 120 fits
###Markdown
2. Check the cross-validation score
###Code
svc_clf = LinearSVC(C=gs_svc.best_params_['C'])
svc_score = cross_val_score(svc_clf, x_list, y_list, cv=5, n_jobs=-1).mean()
print("LinearSVC = {0:.6f}".format(svc_score))
###Output
_____no_output_____
###Markdown
3. Predict test data 3.1 Get test_list
###Code
test_list = vectorizer.transform(test_df['name_total'].tolist())
#add word2vec columns
word_matrix = sparse.csr_matrix(test_df[word_columns].values)
test_list = hstack([test_list, word_matrix])
#add image columns
img_matrix = sparse.csr_matrix(test_df[img_columns].values)
test_list = hstack([test_list, img_matrix])
print(test_list.shape)
###Output
_____no_output_____
###Markdown
3.2 Get prediction and save to file
###Code
svc_clf.fit(x_list, y_list)
pred = svc_clf.predict(test_list)
test_df['pred'] = pred
pred_df = pd.Series(test_df.pred.values,index=test_df.name)
pred_df.head(2)
with open('../submission/pred.df', 'wb') as handle:
pickle.dump(pred_df, handle)
###Output
_____no_output_____
###Markdown
Setup server, and check final score * http://somaeval.hoot.co.kr:8880/eval?url=http://52.41.52.48:8887 (check only two categories) * http://somaeval.hoot.co.kr:8880/eval?url=http://52.41.52.48:8887&mode=all&name=임규리 (full test) * http://somaeval.hoot.co.kr:8869/score (leaderboard)
###Code
d = pred_df.to_dict()
%%capture
from bottle import route, run, template,request,get, post
import re
import time
from threading import Condition
_CONDITION = Condition()
@route('/classify')
def classify():
img = request.query.img
name = request.query.name
pred = d[name]
return {'cate':pred}
run(host='0.0.0.0', port=8887)
###Output
_____no_output_____
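###Markdown
For reference, a hypothetical client-side check of the endpoint above (my own sketch; it assumes the bottle server is already running on port 8887 in another process and that the queried name exists in pred_df):
###Code
import requests

resp = requests.get('http://localhost:8887/classify',
                    params={'name': pred_df.index[0], 'img': ''})  # the eval server sends name and img query params
print(resp.json())  # {'cate': '<predicted cate1;cate2;cate3>'}
###Output
_____no_output_____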
###Markdown
###Code
import torchvision.models as models
import torch.nn as nn
from torchsummary import summary
from torch.autograd import Variable
import torch.optim as optim
import torchvision
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from numpy import loadtxt
import os
import random
import time
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
for data in trainloader:
inputs, label = data
print(inputs, inputs.shape)
print(label, label.shape)
break
# Build new Network
class MyNetwork(nn.Module):
def __init__(self):
super(MyNetwork, self).__init__()
self.l1= nn.Linear(14, 7)
self.l2= nn.Linear(7, 3)
self.sig = nn.Sigmoid()
self.tan = nn.Tanh()
        self.soft = nn.Softmax(dim=1)  # note: CrossEntropyLoss applies log-softmax internally, so this softmax is mostly useful at inference time
def forward(self, x):
#x = x.view(x.shape[0], -1)
x = self.sig(self.l1(x))
x = self.soft(self.l2(x))
return x
# working crossentropy
mynet = MyNetwork()
x = torch.rand(1,14)
out = mynet(x)
print(out.shape)
we = torch.tensor([1])
criterion = nn.CrossEntropyLoss()
loss = criterion(out, we)
def train(net, data_set):
    criterion = nn.CrossEntropyLoss() # cross entropy
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) # optimizer
    for data in data_set:
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        print(loss.item())
input = torch.rand(3, 14)                             # a batch of 3 samples with 14 features
target = torch.empty(3, dtype=torch.long).random_(3)  # a class index (0..2) for each sample
print(input.shape, target.shape)
output = criterion(mynet(input), target)
import csv
with open('employee_file.csv', mode='w') as employee_file:
employee_writer = csv.writer(employee_file, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
employee_writer.writerow(['John Smith', 'Accounting', 'November'])
employee_writer.writerow(['Erica Meyers', 'IT', 'March'])
###Output
_____no_output_____
###Markdown
Feature Extraction:
###Code
# get unigrams and bigrams:
import math
import re
import time
import nltk
# word_category_counter and W2vecextractor are project-specific helpers assumed to be defined/imported elsewhere
stopwords = nltk.corpus.stopwords.words("english")
liwc_cat_all = ['Total Function Words', 'Total Pronouns', 'Personal Pronouns',
'First Person Singular', 'First Person Plural', 'Second Person',
'Third Person Singular', 'Third Person Plural', ' Impersonal Pronouns',
'Articles', 'Common Verbs', 'Auxiliary Verbs', 'Past Tense', 'Present Tense',
'Future Tense', 'Adverbs', 'Prepositions', 'Conjunctions', 'Negations', 'Quantifiers',
'Number', 'Swear Words', 'Social Processes', 'Family', 'Friends', 'Humans',
'Affective Processes', 'Positive Emotion', 'Negative Emotion', 'Anxiety', 'Anger',
'Sadness', 'Cognitive Processes', 'Insight', 'Causation', 'Discrepancy', 'Tentative',
'Certainty', 'Inhibition', 'Inclusive', 'Exclusive', 'Perceptual Processes', 'See',
'Hear', 'Feel', 'Biological Processes', 'Body', 'Health', 'Sexual', 'Ingestion',
'Relativity', 'Motion', 'Space', 'Time', 'Work', 'Achievement', 'Leisure', 'Home',
'Money', 'Religion', 'Death', 'Assent', 'Nonfluencies', 'Fillers',
'Total first person', 'Total third person', 'Positive feelings', 'Optimism and energy',
'Communication', 'Other references to people', 'Up', 'Down', 'Occupation', 'School',
'Sports', 'TV', 'Music', 'Metaphysical issues', 'Physical states and functions',
'Sleeping', 'Grooming']
liwc_cat_subset = ['Cognitive Processes','Humans', 'Present Tense','Space','Auxiliary Verbs',
'Exclusive','Adverbs','Social Processes','Insight','Motion','Quantifiers',
'Achievement']
liwc_cat_binned_subset = ['Death','Anxiety','Social Processes']
def normalize(text):
tokenized_text = []
tags = []
for sent in nltk.sent_tokenize(text):
intermediate = [word for word in nltk.word_tokenize(sent)
if (word not in stopwords) and re.search(r"\w", word)]
for word, pos in nltk.pos_tag(intermediate):
tokenized_text.append(word.lower())
tags.append(pos)
return tokenized_text, tags
def bin_value(value, cutoff):
return math.ceil( min( math.floor(value), cutoff ) )
def get_ngrams(tokens):
unigrams = nltk.FreqDist(tokens)
bigrams = nltk.FreqDist(nltk.bigrams(tokens))
feature_vector = {}
for token, freq in unigrams.items():
feature_vector["UNI_%s" %(token)] = 1#float(freq)/unigrams.N()
for (token1, token2), freq in bigrams.items():
feature_vector["BI_(%s,%s)" %(token1,token2)] = bin_value(float(freq)/bigrams.N() *30, 5)
return feature_vector
#"%s ahhhhh! %s" %("sdflks", "sdff")
def get_pos(tags):
unigrams = nltk.FreqDist(tags)
bigrams = nltk.FreqDist(nltk.bigrams(tags))
feature_vector = {}
for token, freq in unigrams.items():
feature_vector["UNIPOS_%s" %(token)] = bin_value(float(freq)/unigrams.N() *10, 5)
for (token1, token2), freq in bigrams.items():
feature_vector["BIPOS_(%s,%s)" %(token1,token2)] = bin_value(float(freq)/bigrams.N() *30, 5)
return feature_vector
def get_liwc_features(tokens):
"""
Adds all possible LIWC derived feature
:param words:
:return:
"""
text = u" ".join(tokens)
liwc_cat = list(set(liwc_cat_binned_subset + liwc_cat_subset)) #liwc_cat_all
feature_vectors = {}
liwc_scores = word_category_counter.score_text(text)
for cat in liwc_cat:
if cat in liwc_scores:
label = cat.lower().replace(" ", "_")
feature_vectors["liwc_%s" %label] = bin_value(liwc_scores[cat], 10)
return feature_vectors
def get_word_embeddings(text):
feature_dict = W2vecextractor.get_doc2vec_feature_dict(text)
return feature_dict
start = time.time()
funny_feature_tuples = []
set_size= 4000 #int(min(len(funny_joke_list), len(not_funny_joke_list))*4/5)
division_size = int(set_size*4/5)
# all_tokens = []
# for joke in funny_joke_list[:division_size]+not_funny_joke_list[:division_size]:
# tokens, tags = normalize(joke["body"])
# all_tokens+=tokens
# freqDist = nltk.FreqDist(all_tokens)
# frequent_words = []
# for token, freq in freqDist.items():
# if freq >= 3:
# frequent_words.append(freq)
# print(len(frequent_words))
# print(len(freqDist.items()))
for joke in funny_joke_list[:set_size]:
    tokens, tags = normalize(joke["body"])
    # freq_tokens = [token for token in tokens if token in frequent_words]  # frequent_words is only computed in the commented block above and freq_tokens is unused
    funny_feature_tuples.append(({**get_ngrams(tokens), **get_pos(tags), **get_liwc_features(tokens)},"funny"))
unfunny_feature_tuples = []
for joke in not_funny_joke_list[:set_size]:
    tokens, tags = normalize(joke["body"])
    # freq_tokens = [token for token in tokens if token in frequent_words]  # see note above
    unfunny_feature_tuples.append(({**get_ngrams(tokens), **get_pos(tags), **get_liwc_features(tokens)},"unfunny"))
time.time() - start
funny_feature_tuples[200]
feature_name = "UNIPOS_NN"
tuples = [feature_tuple[0] for feature_tuple
in funny_feature_tuples + unfunny_feature_tuples
if feature_name in feature_tuple[0]]
values = [t[feature_name]*10 for t in tuples]
plt.hist(values);
###Output
_____no_output_____
###Markdown
Partitioning
###Code
division_size = int(set_size*4/5)
train = funny_feature_tuples[:division_size]+unfunny_feature_tuples[:division_size]
dev = funny_feature_tuples[division_size:set_size]+unfunny_feature_tuples[division_size:set_size]
###Output
_____no_output_____
###Markdown
Training:
###Code
# classifier = nltk.classify.NaiveBayesClassifier.train(train)
# classifier.most_informative_features(100)
classifier = nltk.classify.scikitlearn\
.SklearnClassifier(naive_bayes.BernoulliNB(binarize=False)).train(train)
#classifier = nltk.classify.scikitlearn.SklearnClassifier(svm.SVC()).train(train)
accuracy = nltk.classify.accuracy(classifier, dev)
accuracy
features_only = []
labels_only = []
for vector, label in dev:
features_only.append(vector)
labels_only.append(label)
predicted_labels = classifier.classify_many(features_only)
confusion_matrix = nltk.ConfusionMatrix(labels_only, predicted_labels)
print(confusion_matrix)
###Output
| u |
| n |
| f f |
| u u |
| n n |
| n n |
| y y |
--------+---------+
funny |<407>393 |
unfunny | 317<483>|
--------+---------+
(row = reference; col = test)
###Markdown
Author: Emine Darı About The Challenge- In this challenge, the goal is to classify audio recordings of professors of the Turkish Academy using Machine Learning methods. Each recording is approximately 5 seconds long. The dataset used to train and test the models can be found at this [link](https://www.kaggle.com/c/turkishacademyvoicechallenge/overview) to the in-class competition published on Kaggle. Dependencies
###Code
from scipy.io import wavfile
import numpy as np
import pandas as pd
import librosa
import csv
import os
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Step 1* In this step, we first create a function that initializes a CSV file to hold the features extracted while the train and test sets are processed. Saving the extracted features into a file instead of keeping them in variables saves time: once we settle on the features, we can try different models directly with these saved files.
###Code
def create_file(filename,header,train=True):
    file = open(filename, 'w', newline='')
    with file:
        writer = csv.writer(file)
        #append the label to the header only for the training stage
        #(use a copy so the caller's header list is not mutated between the train and test calls)
        if train:
            header = header + ["label"]
        writer.writerow(header)
###Output
_____no_output_____
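###Markdown
As a quick illustration (not part of the pipeline, with a made-up filename and header), calling the helper produces a CSV whose header row ends with "label" because `train` defaults to True:
###Code
# Illustrative usage of create_file with a hypothetical header.
create_file('example_features.csv', ['filename', 'mfcc_1', 'mfcc_2'])
with open('example_features.csv') as f:
    print(f.read())  # filename,mfcc_1,mfcc_2,label
###Output
_____no_output_____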
###Markdown
- Now we create a function that extracts features using the [Librosa](https://librosa.org/doc/latest/feature.htmlfeature-extraction) library. For each audio file, this function appends the extracted features to the end of the given file.
###Code
def extract_and_save_features(audio_name, audio_path, file_to_save, train=True,label=0):
#Load the audio file
y, sr = librosa.load(audio_path, mono=True)
#Extract features
rms = librosa.feature.rms(y=y)
chroma_stft = librosa.feature.chroma_stft(y=y, sr=sr)
spec_cent = librosa.feature.spectral_centroid(y=y, sr=sr)
spec_bw = librosa.feature.spectral_bandwidth(y=y, sr=sr)
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)
zcr = librosa.feature.zero_crossing_rate(y)
mfcc = librosa.feature.mfcc(y=y, sr=sr)
#Append features in a list
extracted_features = f'{audio_name} {np.mean(chroma_stft)} {np.mean(rms)} {np.mean(spec_cent)} {np.mean(spec_bw)} {np.mean(rolloff)} {np.mean(zcr)}'
for freq in mfcc:
extracted_features += f' {np.mean(freq)}'
#Append label of the audio, if in train mode
if train:
extracted_features += f' {label}'
#Save the features in a file, in append mode
file = open(file_to_save, 'a', newline='')
with file:
writer = csv.writer(file)
writer.writerow(extracted_features.split())
###Output
_____no_output_____
###Markdown
- The last step of preparation is to write a function that normalizes the extracted features in the dataset (min-max scaling)
###Code
def normalize(dataset):
data_normalized = ((dataset-dataset.min())/(dataset.max()-dataset.min()))
return data_normalized
###Output
_____no_output_____
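###Markdown
A tiny sanity check (toy DataFrame, not part of the pipeline) shows that min-max normalization maps every column onto [0, 1]:
###Code
# Toy example of the normalize helper defined above.
import pandas as pd
print(normalize(pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [10.0, 20.0, 40.0]})))
###Output
_____no_output_____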
###Markdown
Step 2- As the next step we process the train set, which is structured as: _**train/train/class_id/**sample.wav_- The dataset is assumed to be saved in the same directory as the notebook/code.- **Warning:** _This step might take a **long time.**_
###Code
class_count = 2
#Create a list with names of the features to be extracted
features = ["filename","rms", "chroma_stft","spec_cent", "spec_bw", "rolloff", "zcr"]
for i in range(20):
features.append("mfcc_" + str(i+1))
#Call the function to create the file to save the train set's features
create_file('train_processed.csv',features)
#Extract features for each audio in the train set
for i in range(class_count):
for file in os.listdir("train/train/" + str(i)):
file_path = "train/train/" + str(i) + "/" + file
extract_and_save_features(file, file_path,'train_processed.csv',True,i)
###Output
_____no_output_____
###Markdown
Step 3- Now we process the test set, which is structured as: _**test/test/**sample.wav_
###Code
#Call the function to create the file to save the test set's features
create_file('test_processed.csv',features,train=False)
#Extract features for each audio in the test set
for file in os.listdir("test/test"):
file_path = "test/test/" + file
extract_and_save_features(file, file_path,'test_processed.csv',train=False)
###Output
_____no_output_____
###Markdown
Step 4- In the final step we pick a model and train it with the features we extracted. Then we predict the labels for the test set and write them into a file in submission format.
###Code
#read the processed train data
train_set = pd.read_csv("train_processed.csv")
#split x and y data, normalize x data
x_train = normalize(train_set.drop(["filename","label"],axis=1))
y_train = train_set.label.values
#choose a model and train it with the labels
model = LogisticRegression()
model.fit(x_train,y_train)
#read the processed test data and drop the filename column, then normalize the dataset
test_set = pd.read_csv("test_processed.csv")
x_test_norm = normalize( test_set.drop(["filename"],axis=1))
#predict the test data with the model
predicted_data = model.predict(x_test_norm)
#FileName,Class
predicted_files = []
for filename in os.listdir("test/test"):
predicted_files.append(filename)
prediction = pd.DataFrame({"FileName" : predicted_files, "Class":predicted_data} )
prediction.to_csv("submission.csv", index = False, header=True)
print("Prediction complete. Please check submission.csv file")
###Output
Prediction complete. Please check submission.csv file
###Markdown
Data Preparation
###Code
file_path = Path("crypto_data.csv")
crypto_df = pd.read_csv(file_path)
crypto_df
#Only coins that are trading
crypto_df = crypto_df[crypto_df['IsTrading']==True]
#Drop IsTrading Column
crypto_df = crypto_df.drop(columns=['IsTrading'])
#Remove rows that have at least one null value
crypto_df = crypto_df.dropna(how='any',axis=0)
#TotalCoinsMined > 0
crypto_df = crypto_df[crypto_df['TotalCoinsMined']>0]
#Remove Coin Name
crypto_df = crypto_df.drop(columns=['CoinName'])
crypto_df = crypto_df.drop(columns=['Unnamed: 0'])
crypto_df
algorithms = {}
algorithmsList = crypto_df['Algorithm'].unique().tolist()
for i in range(len(algorithmsList)):
algorithms[algorithmsList[i]] = i
proofType = {}
proofTypeList = crypto_df['ProofType'].unique().tolist()
for i in range(len(proofTypeList)):
proofType[proofTypeList[i]] = i
crypto_df = crypto_df.replace(({'Algorithm':algorithms}))
crypto_df = crypto_df.replace(({'ProofType':proofType}))
crypto_df.dtypes
# Standarize data with StandarScaler
scaler = StandardScaler()
scaled_data = scaler.fit_transform(crypto_df[['TotalCoinsMined', 'TotalCoinSupply']])
new_df_crypto = pd.DataFrame(scaled_data, columns=crypto_df.columns[2:])
new_df_crypto['Algorithm']=crypto_df['Algorithm'].values
new_df_crypto['ProofType']=crypto_df['ProofType'].values
new_df_crypto
###Output
_____no_output_____
###Markdown
Dimensionality Reduction
###Code
#PCA
pca = PCA(n_components=.99)
crypto_pca = pca.fit_transform(new_df_crypto)
pca.explained_variance_ratio_.sum()
inertia = []
# Same as k = list(range(1, 11))
k = [1,2,3,4,5,6,7,8,9,10]
# Looking for the best k
for i in k:
km = KMeans(n_clusters=i, random_state=0)
km.fit(new_df_crypto)
inertia.append(km.inertia_)
# Define a DataFrame to plot the Elbow Curve using hvPlot
elbow_data = {"k": k, "inertia": inertia}
df_elbow = pd.DataFrame(elbow_data)
plt.plot(df_elbow['k'], df_elbow['inertia'])
plt.xticks(range(1,11))
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()
df_crypto_pca = pd.DataFrame(
data=crypto_pca,
columns=["principal component 1", "principal component 2"],
)
df_crypto_pca.head()
# Initialize the K-Means model
model = KMeans(n_clusters=2, random_state=0)
# Fit the model
model.fit(df_crypto_pca)
# Predict clusters
predictions = model.predict(df_crypto_pca)
# Add the predicted class columns
df_crypto_pca["class"] = model.labels_
df_crypto_pca.head()
import matplotlib.pyplot as plt
# only two principal components were kept by PCA, so plot them in 2D
# (the original 3D scatter referenced an undefined third component z)
fig, ax = plt.subplots()
x = df_crypto_pca['principal component 1']
y = df_crypto_pca['principal component 2']
p = ax.scatter(x, y, c=df_crypto_pca['class'], cmap='viridis')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
cbar = fig.colorbar(p, ticks=[0, 1], pad=0.2)
cbar.set_label('class')
new_df_crypto
model.fit(new_df_crypto)
# Predict clusters
predictions = model.predict(new_df_crypto)
df = new_df_crypto
# Add the predicted class columns
df["class"] = model.labels_
df2 = df.drop(['class'], axis=1)
labels = df['class']
df2
###Output
_____no_output_____
###Markdown
TSNE
###Code
# Initialize t-SNE model
tsne = TSNE(learning_rate=35)
# Reduce dimensions
tsne_features = tsne.fit_transform(df2)
# The dataset has 2 columns
tsne_features.shape
# Prepare to plot the dataset
# The first column of transformed features
df2['x'] = tsne_features[:,0]
# The second column of transformed features
df2['y'] = tsne_features[:,1]
# Visualize the clusters
plt.scatter(df2['x'], df2['y'])
plt.show()
labels.value_counts()
# Visualize the clusters with color
plt.scatter(df2['x'], df2['y'], c=labels)
plt.show()
inertia = []
# Same as k = list(range(1, 11))
k = [1,2,3,4,5,6,7,8,9,10]
# Looking for the best k
for i in k:
km = KMeans(n_clusters=i, random_state=0)
km.fit(df2)
inertia.append(km.inertia_)
# Define a DataFrame to plot the Elbow Curve using hvPlot
elbow_data = {"k": k, "inertia": inertia}
df_elbow = pd.DataFrame(elbow_data)
plt.plot(df_elbow['k'], df_elbow['inertia'])
plt.xticks(range(1,11))
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.show()
###Output
_____no_output_____
###Markdown
Forensics, project F3 Experiments on high entropy files _Author:_ RAFFLIN Corentin 5) Classifier construction The results of the experiments are saved in a file named `results.csv`; this notebook focuses on processing the data and building a classifier. 1. Loading and treating the data
###Code
#Diverses libraries
%matplotlib inline
import random
from time import time
import pickle
# Data and plotting imports
import pandas as pd
import numpy as np
#Neural network libraries
from sklearn import metrics
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import ParameterGrid
#statistical libraries
from sklearn.preprocessing import LabelEncoder, RobustScaler
from sklearn.model_selection import train_test_split# GridSearchCV, KFold
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
Loading the data
###Code
#Path to the CSV file
resultsPath = 'results.csv'
#Header to associate to the CSV file
tests = ['File_type','File_bytes','Entropy','Chi_square','Mean','Monte_Carlo_Pi','Serial_Correlation']
cols = tests + [str(i) for i in range(0,256)]
#Loading data
data = pd.read_csv(resultsPath, sep=',', header=None, names=cols)
print('There are {} files analyzed'.format(len(data)))
###Output
There are 6220 files analyzed
###Markdown
Removing outliers and balancing the data
###Code
countBefore = data['File_type'].value_counts().to_frame().rename(index=str, columns={'File_type':'Count_before'})
#Removing outliers by keeping only files with high entropy
data = data[data.Entropy>7.6]
countAfter = data['File_type'].value_counts().to_frame().rename(index=str, columns={'File_type':'Count_After'})
count = pd.concat([countBefore, countAfter], axis=1, sort=False)
display(count)
#List of each file type
file_types = data['File_type'].sort_values().unique()
#List of dataframe for each file type
files = [ data[data.File_type==file_type] for file_type in file_types]
#Colors to associate to the file types
colors = ['r', 'b', 'g', 'y', 'm']
# In case more colors are needed for addition of other file type
'''
colors = list(pltcolors._colors_full_map.values())
random.seed(2)
random.shuffle(colors)
'''
print("File types :", file_types)
#Removing some data (lower entropy) to have the same count for each file type
minCount = data['File_type'].value_counts().iloc[-1]
for i in range(len(files)):
f = files[i]
f = f.sort_values(by="Entropy")
files[i] = f[len(f)-minCount:]
#Updating the full dataframe
data = pd.concat(files)
print('There are {} files analyzed'.format(len(data)))
###Output
There are 5145 files analyzed
###Markdown
Checking for missing values (possible errors)
###Code
def getMissing(dataframe):
''' Printing the missing data in the dataframe with the total of missing and the corresponding percentage '''
total = dataframe.isnull().sum().sort_values(ascending=False)
percent = (dataframe.isnull().sum()/dataframe.isnull().count()).sort_values(ascending=False)
missing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
return missing_data[missing_data['Total']>0]
#Checking for missing in the tests or bytes distribution
display(getMissing(data))
###Output
_____no_output_____
###Markdown
No missing data in the tests, which is great. 2. Data Pre-processing Now we will prepare the data for the classifier. Dropping and separating input-output
###Code
#Dropping not useful information
data = data.drop('File_bytes', axis=1)
#Separating the output
y = data['File_type']
data = data.drop('File_type', axis=1)
###Output
_____no_output_____
###Markdown
Splitting into training and testing sets
###Code
#Splitting into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(data, y, test_size = 0.1, random_state=7)
###Output
_____no_output_____
###Markdown
Standardization The standardization of a dataset is a common requirement for many machine learning estimators. We use the RobustScaler more robust to outliers as it is possible that we have many outliers in this data set. The centering and scaling statistics of this scaler are based on percentiles and are therefore not influenced by a few number of very large marginal outliers. The outliers themselves are still present in the transformed data. > Typically this is done by removing the mean and scaling to unit variance. However, outliers can often influence the sample mean / variance in a negative way. In such cases, the median and the interquartile range often give better results.
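A small illustration of the difference (toy data, not part of the pipeline): RobustScaler centers on the median and scales by the interquartile range, so a single extreme outlier barely moves the other transformed values, whereas mean/std standardization is pulled strongly by it.
###Code
# Toy comparison of RobustScaler and StandardScaler on data with one outlier.
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

toy = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])  # one large outlier
print(RobustScaler().fit_transform(toy).ravel())
print(StandardScaler().fit_transform(toy).ravel())
###Output
_____no_output_____
###Markdown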
###Code
#Scaling features using statistics that are robust to outliers.
scaler = RobustScaler()
#Fitting on the training set, then transforming both training and testing sets
X_train = scaler.fit(X_train).transform(X_train)
X_test = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
There is no need for dimensionality reduction nor for decorrelating the data using PCA. Encoding the output
###Code
lbencoder = LabelEncoder()
lbencoder.fit(Y_train)
Y_train = lbencoder.transform(Y_train)
Y_test = lbencoder.transform(Y_test)
#Printing the shapes
print("Shape x_train", X_train.shape)
print("Shape y_train", Y_train.shape)
print("Shape x_test", X_test.shape)
print("Shape y_test", Y_test.shape)
###Output
Shape x_train (4630, 261)
Shape y_train (4630,)
Shape x_test (515, 261)
Shape y_test (515,)
###Markdown
3. Model Selection Several classifiers could be used for this problem. I decided to focus on SVM which is good for limited data and a Neural Network (Multi Layer Perceptron Classifier) which is fast for prediction and therefore could be better to implement in the `ent` program. >In short:* Boosting - often effective when a large amount of training data is available.* Random trees - often very effective and can also perform regression.* K-nearest neighbors - simplest thing you can do, often effective but slow and requires lots of memory.* Neural networks - Slow to train but very fast to run, still optimal performer for letter recognition.* SVM - Among the best with limited data, but losing against boosting or random trees only when large data sets are available.https://stackoverflow.com/questions/2595176/which-machine-learning-classifier-to-choose-in-general In this part I will not focus on the optimization of the parameters.I used the f1_score as a metric to give weights to all classes and see the accuracy for each class. https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html a. MultiLayer Perceptron (MLP) > MLPs are suitable for classification prediction problems where inputs are assigned a class or label. They are very flexible and can be used generally to learn a mapping from inputs to outputs.
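Before comparing models, here is a toy illustration of the metric used below (toy labels, not from this dataset): `f1_score` with `average=None` returns one score per class, while `average='macro'` is their unweighted mean.
###Code
# Toy example of the two f1_score usages appearing in the evaluation cells below.
from sklearn.metrics import f1_score
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(f1_score(y_true, y_pred, average=None))     # per-class F1 scores
print(f1_score(y_true, y_pred, average='macro'))  # unweighted mean over classes
###Output
_____no_output_____
###Markdown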
###Code
mlp = MLPClassifier(solver='adam',hidden_layer_sizes=(50), random_state=1, max_iter=50000, activation='relu',
learning_rate_init=0.00001, verbose=False)
mlp.fit(X_train, Y_train)
#Results
print("Training set score: %f" % mlp.score(X_train, Y_train))
print("Testing set score: %f" % mlp.score(X_test, Y_test))
Y_pred = mlp.predict(X_test)
print("F1 test set score:", metrics.f1_score(Y_test, Y_pred , average=None))
print("Corresponding classes:", lbencoder.classes_)
print("F1 mean test set score:", metrics.f1_score(Y_test, Y_pred , average='macro'))
###Output
Training set score: 0.982721
Testing set score: 0.941748
F1 test set score: [0.95789474 1. 0.98039216 0.87700535 0.87628866]
Corresponding classes: ['jpg' 'mp3' 'pdf' 'png' 'zip']
F1 mean test set score: 0.9383161802184494
###Markdown
b. SVC > SVC and NuSVC implement the “one-against-one” approach (Knerr et al., 1990) for multi- class classification. If n_class is the number of classes, then n_class * (n_class - 1) / 2 classifiers are constructed and each one trains data from two classes. https://scikit-learn.org/stable/modules/svm.htmlA one against one approach may be better to differentiate two close classes like zip and png.
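As a quick check of that formula (illustrative only), with the 5 file types considered here the "ovo" strategy trains 10 pairwise classifiers internally:
###Code
# Number of pairwise classifiers built by the one-against-one strategy.
n_class = 5
print(n_class * (n_class - 1) // 2)  # 10
###Output
_____no_output_____
###Markdown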
###Code
svc = SVC(gamma='scale', decision_function_shape='ovo')
svc.fit(X_train, Y_train)
#Results
print("Training set score: %f" % svc.score(X_train, Y_train))
print("Testing set score: %f" % svc.score(X_test, Y_test))
Y_pred = svc.predict(X_test)
print("F1 test set score:", metrics.f1_score(Y_test, Y_pred , average=None))
print("Corresponding classes:", lbencoder.classes_)
print("F1 mean test set score:", metrics.f1_score(Y_test, Y_pred , average='macro'))
###Output
Training set score: 0.964579
Testing set score: 0.953398
F1 test set score: [0.97916667 1. 0.98007968 0.89617486 0.9 ]
Corresponding classes: ['jpg' 'mp3' 'pdf' 'png' 'zip']
F1 mean test set score: 0.951084242265909
###Markdown
4. Parameter Optimisation a. MultiLayer Perceptron (MLP) I kept the activation function `relu` which from experience and theory gives good results.
###Code
# Define the hyperparameters
hyperparameters = {
'solver': ['adam', 'lbfgs'],
'hidden_layer_sizes' : [(20), (50), (100), (10,10)],
'lr' : [0.00005, 0.0001]
}
# Compute all combinations
parameter_grid = list(ParameterGrid(hyperparameters))
# Just a table to save the results
resultsDF = pd.DataFrame(columns=['solver', 'hidden_layer_sizes', 'lr', 'test_score', 'train_score', 'f1_mean'])
for p in parameter_grid:
mlp = MLPClassifier(solver=p['solver'],hidden_layer_sizes=p['hidden_layer_sizes'], random_state=1, max_iter=50000,
activation='relu', learning_rate_init=p['lr'], early_stopping=True, tol=1e-7 )
mlp.fit(X_train, Y_train)
test_score = mlp.score(X_test, Y_test)
p['test_score'] = test_score
train_score = mlp.score(X_train, Y_train)
p['train_score'] = train_score
Y_pred = mlp.predict(X_test)
f1_mean = metrics.f1_score(Y_test, Y_pred , average='macro')
p['f1_mean']=f1_mean
resultsDF = resultsDF.append(p, ignore_index=True)
display(resultsDF.sort_values('test_score', ascending=False).head())
###Output
_____no_output_____
###Markdown
b. SVC
###Code
# Define the hyperparameters
hyperparameters = {
'C':[0.5, 1, 5],
'kernel':['rbf', 'linear', 'poly']
}
# Compute all combinations
parameter_grid = list(ParameterGrid(hyperparameters))
# Just a table to save the results
resultsDF = pd.DataFrame(columns=['C', 'kernel', 'test_score', 'train_score', 'f1_mean'])
for p in parameter_grid:
svc = SVC(gamma='scale', C=p['C'], kernel=p['kernel'], decision_function_shape='ovo')
svc.fit(X_train, Y_train)
test_score = svc.score(X_test, Y_test)
p['test_score'] = test_score
train_score = svc.score(X_train, Y_train)
p['train_score'] = train_score
Y_pred = svc.predict(X_test)
f1_mean = metrics.f1_score(Y_test, Y_pred , average='macro')
p['f1_mean']=f1_mean
resultsDF = resultsDF.append(p, ignore_index=True)
display(resultsDF.sort_values('test_score', ascending=False).head())
###Output
_____no_output_____
###Markdown
5. Final Model Though there is no big difference in accuracy between these two models, the MLP classifier has a slightly better score.
###Code
clf = MLPClassifier(solver='lbfgs',hidden_layer_sizes=(50), random_state=1, max_iter=60000, activation='relu',
learning_rate_init=0.000005, early_stopping=True, tol=1e-7)
clf.fit(X_train, Y_train)
#Results
print("Training set score: %f" % clf.score(X_train, Y_train))
print("Testing set score: %f" % clf.score(X_test, Y_test))
Y_pred = clf.predict(X_test)
f1_score = metrics.f1_score(Y_test, Y_pred, average=None)
Y_pred = clf.predict(X_test)
print("F1 test set score:", metrics.f1_score(Y_test, Y_pred , average=None))
print("Corresponding classes:", lbencoder.classes_)
print("F1 mean test set score:", metrics.f1_score(Y_test, Y_pred , average='macro'))
###Output
Training set score: 1.000000
Testing set score: 0.957282
F1 test set score: [0.95789474 1. 0.99224806 0.91397849 0.90625 ]
Corresponding classes: ['jpg' 'mp3' 'pdf' 'png' 'zip']
F1 mean test set score: 0.954074258696253
###Markdown
We notice that the mp3 class is always predicted correctly, and pdf almost always. This was to be expected from the byte distributions, since most of their distributions were distinct from the others. Similarly, png and zip have the lowest accuracy, which is certainly because they are hard to distinguish, so the classifier may confuse these two classes. There is a bit of overfitting, as the accuracy on the testing set does not reach the perfect accuracy of the training set. We would need more samples per class to improve the accuracy on the testing set. If we added more classes, the global accuracy would likely drop because of similarities between some file types, as with png and zip. Saving
###Code
filename = "scaler_lb_clf.sav"
modlist = [scaler, lbencoder, clf]
s = pickle.dump(modlist, open(filename, 'wb'))
###Output
_____no_output_____
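###Markdown
A hypothetical reload of the saved file (illustrative only), unpacking the three objects in the order they were pickled: scaler, label encoder, classifier.
###Code
# Reload the persisted pipeline pieces for later reuse.
with open("scaler_lb_clf.sav", 'rb') as f:
    scaler_loaded, lbencoder_loaded, clf_loaded = pickle.load(f)
print(type(clf_loaded).__name__)
###Output
_____no_output_____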
###Markdown
- the training set should include as many non-hand examples as possible, all kinds of junk images that users will upload- the hand images should be cleaned so that only palms are kept
###Code
### 10 images are needed for the 2 classes
# imports assumed for this cell (Keras with the classic preprocessing API used below)
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
import numpy as np
import glob
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('classifier_data/train',
target_size = (64, 64),
batch_size = 16,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('classifier_data/test',
target_size = (64, 64),
batch_size = 16,
class_mode = 'binary')
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.fit_generator(training_set,
steps_per_epoch = 2,
epochs = 2,
validation_data = test_set,
validation_steps = 0)
classifier.save('models/classifier')
# classifier.evaluate_generator(test_set,steps=2)
def get_prediction(url):
raw_test_image = image.load_img(url, target_size = (64, 64))
test_image = image.img_to_array(raw_test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict(test_image)
if result[0][0] == 1:
print('no')
else:
print('hand')
display(raw_test_image)
files_1 = glob.glob('images/*')
files_2 = glob.glob('data/*')
files_3 = glob.glob('classifier_data/test_cd/dog/*')
files = files_1 + files_2 + files_3
files
for file in files:
get_prediction(file)
###Output
no
###Markdown
Ευριπίδης Παντελαίος - 1115201600124
###Code
import pandas as pd
import numpy as np
import scipy
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer,CountVectorizer
from sklearn import svm, datasets
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.metrics import accuracy_score, f1_score
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
pd.options.display.max_colwidth = None
###Output
_____no_output_____
###Markdown
Some useful functions 1) Cleaning 2) Lemmatization 3) Remove stop words 4) Part-of-Speech Tag
###Code
#clean data and remove symbols, urls, unnecessary words
def cleanData(comments):
StoredComments = []
for line in comments:
line = line.lower()
#replace some words, symbols and letters that appear frequently and are useless
line = line.replace('-', '')
line = line.replace('_', '')
line = line.replace('0', '')
line = line.replace("\n", '')
line = line.replace("\\", '')
line = line.replace('XD', '')
line = line.replace('..', '')
line = line.replace(' ', ' ')
line = line.replace('https', '')
line = line.replace('http', '')
removeList = ['@', r'\x', '\\', 'corrup', '^', '#', '$', '%', '&']
#for line in comments:
words = ' '.join([word for word in line.split() if not any([phrase in word for phrase in removeList]) ])
StoredComments.append(words)
return StoredComments
#lemmatize the comments
def lemmatizer (comments):
lemma = WordNetLemmatizer()
StoredComments = []
for line in comments:
line = ' '.join([lemma.lemmatize(w) for w in nltk.word_tokenize(line)])
StoredComments.append(line)
return StoredComments
#remove stop words
def removeStopWords (comments):
StoredComments=[]
for line in comments:
line = ' '.join([w for w in nltk.word_tokenize(line) if w not in stop_words])
StoredComments.append(line)
return StoredComments
#calculate Pos tags and the frequency of them
def posTag(comments):
adjectiveFrequency=[]
adverbFrequency=[]
nounFrequency=[]
verbFrequency=[]
for comment in comments:
adjectiveCounter=0
adverbCounter=0
nounCounter=0
verbCounter=0
#Pos tagging the words
words = nltk.word_tokenize(comment)
words = nltk.pos_tag(words)
cnt = len(words)
for word in words:
            if(word[1][:2] == 'NN'):      # nouns: NN, NNS, NNP, NNPS
                nounCounter = nounCounter+1
            elif(word[1][:2] == 'VB'):    # verbs: VB, VBD, VBG, VBN, VBP, VBZ
                verbCounter = verbCounter+1
            elif(word[1][:2] == 'RB'):    # adverbs: RB, RBR, RBS
                adverbCounter = adverbCounter+1
            elif(word[1][:2] == 'JJ'):    # adjectives: JJ, JJR, JJS
                adjectiveCounter = adjectiveCounter+1
#not divide with zero
if(cnt!=0): #calculate the frequency of each tag
nounFrequency.append(nounCounter/cnt)
verbFrequency.append(verbCounter/cnt)
adverbFrequency.append(adverbCounter/cnt)
adjectiveFrequency.append(adjectiveCounter/cnt)
else:
nounFrequency.append(0)
verbFrequency.append(0)
adverbFrequency.append(0)
adjectiveFrequency.append(0)
return nounFrequency, verbFrequency, adverbFrequency, adjectiveFrequency
###Output
_____no_output_____
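###Markdown
A small illustration of the Penn Treebank tags that `posTag` inspects (assumes the nltk tokenizer and tagger models are downloaded): noun, verb, adverb and adjective tags start with NN, VB, RB and JJ respectively.
###Code
# Example nltk.pos_tag output; only the first two characters of each tag are checked above.
print(nltk.pos_tag(nltk.word_tokenize("The quick brown fox quickly jumps")))
###Output
_____no_output_____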
###Markdown
Read csv files for train and test set and cleaning the data
###Code
trainSet = pd.read_csv("data/train.csv")
testSet = pd.read_csv("data/impermium_verification_labels.csv") #I don't use the file 'impermium_verification_set.csv' at all,
#because the file 'impermium_verification_labels.csv'
#completely covers the requirements of the exercise.
#Cleaning the data and test set
trainSet['Comment'] = cleanData(trainSet['Comment'])
testSet['Comment'] = cleanData(testSet['Comment'])
###Output
_____no_output_____
###Markdown
Train the train data with Bag of Words
###Code
countVectorizer = CountVectorizer()
BagOfWordsTrain = countVectorizer.fit_transform(trainSet['Comment'].values)
BagOfWordsTrainArray = BagOfWordsTrain.toarray()
###Output
_____no_output_____
###Markdown
Train the test data with Bag of Words
###Code
BagOfWordsTest = countVectorizer.transform(testSet['Comment'].values)
BagOfWordsTestArray = BagOfWordsTest.toarray()
###Output
_____no_output_____
###Markdown
Gaussian Naive Bayes classifier
###Code
classifierNB = GaussianNB()
classifierNB.fit(BagOfWordsTrainArray, trainSet['Insult'])
BoWprediction = classifierNB.predict(BagOfWordsTestArray)
y_test = testSet['Insult']
###Output
_____no_output_____
###Markdown
Gaussian Naive Bayes Scores
###Code
print ('Accuracy Score:', accuracy_score(y_test, BoWprediction))
print('F1 Score:', f1_score(y_test, BoWprediction))
###Output
Accuracy Score: 0.5266219239373602
F1 Score: 0.5208333333333333
###Markdown
Now I am doing 4 optimizations for Naive Bayes (Lemmatization, Remove stop words, Bigrams, Laplace Smoothing) 1) Lemmatization
###Code
trainSet['commentLemmatization'] = lemmatizer(trainSet['Comment'])
testSet['commentLemmatization'] = lemmatizer(testSet['Comment'])
lemmazationTrain = countVectorizer.fit_transform(trainSet['commentLemmatization'])
lemmazationTrainArray = lemmazationTrain.toarray()
lemmazationTest = countVectorizer.transform(testSet['commentLemmatization'])
lemmazationTestArray = lemmazationTest.toarray()
classifierNB.fit(lemmazationTrainArray,trainSet['Insult'])
lemmatizationPredict = classifierNB.predict(lemmazationTestArray)
print('Accuracy Score:', accuracy_score(y_test, lemmatizationPredict))
print('F1 Score:', f1_score(y_test, lemmatizationPredict))
###Output
Accuracy Score: 0.5257270693512305
F1 Score: 0.5276292335115864
###Markdown
2) Remove stop words
###Code
trainSet['commentStopWords'] = removeStopWords(trainSet['Comment'])
testSet['commentStopWords'] = removeStopWords(testSet['Comment'])
stopWordsTrain = countVectorizer.fit_transform(trainSet['commentStopWords'])
stopWordsTrainArray = stopWordsTrain.toarray()
stopWordsTest = countVectorizer.transform(testSet['commentStopWords'])
stopWordsTestArray = stopWordsTest.toarray()
classifierNB.fit(stopWordsTrainArray,trainSet['Insult'])
stopWordPredict = classifierNB.predict(stopWordsTestArray)
print ('Accuracy Score:', accuracy_score(y_test, stopWordPredict))
print('F1 Score:', f1_score(y_test, stopWordPredict))
###Output
Accuracy Score: 0.5243847874720358
F1 Score: 0.5174761688606445
###Markdown
3) Bigrams
###Code
bigramVectorizer = CountVectorizer(ngram_range=(2,2))
bigramTrain = bigramVectorizer.fit_transform(trainSet['Comment'])
bigramTrainArray = bigramTrain.toarray()
bigramTest= bigramVectorizer.transform(testSet['Comment'])
bigramTestArray = bigramTest.toarray()
classifierNB.fit(bigramTrainArray,trainSet['Insult'])
bigramPredict = classifierNB.predict(bigramTestArray)
print ('Accuracy Score:', accuracy_score(y_test, bigramPredict))
print('F1 Score:', f1_score(y_test, bigramPredict))
###Output
Accuracy Score: 0.556599552572707
F1 Score: 0.5292161520190024
###Markdown
4) Laplace Smoothing
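As a reminder (toy numbers, not from this dataset), `alpha` is the additive-smoothing parameter of `MultinomialNB`: with Laplace (add-one) smoothing, a word never seen in a class still gets a small non-zero probability instead of zero.
###Code
# Add-one smoothing: P(w|c) = (count(w,c) + alpha) / (total_count_c + alpha * |V|)
count_w_c, total_c, vocab_size, alpha = 0, 1000, 5000, 1.0
print((count_w_c + alpha) / (total_c + alpha * vocab_size))  # small but non-zero
###Output
_____no_output_____
###Markdown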
###Code
classifierMultinomialNB = MultinomialNB(alpha=1.0)
classifierMultinomialNB.fit(BagOfWordsTrainArray,trainSet['Insult'])
laplacePredict = classifierMultinomialNB.predict(BagOfWordsTestArray)
print ('Accuracy Score:', accuracy_score(y_test, laplacePredict))
print('F1 Score:', f1_score(y_test, laplacePredict))
###Output
Accuracy Score: 0.6769574944071588
F1 Score: 0.6143162393162394
###Markdown
Tf-idf Vectorizer
###Code
TfIdf = TfidfVectorizer()
TfIdfTrain = TfIdf.fit_transform(trainSet['Comment'])
TfIdfTest = TfIdf.transform(testSet['Comment'])
###Output
_____no_output_____
###Markdown
Part-of-Speech features for Train set
###Code
AdjectiveTrain, AdverbTrain, NounTrain, VerbTrain = posTag(trainSet['Comment'])
###Output
_____no_output_____
###Markdown
Append tf-idf and Part-of-Speech features for train set
###Code
posTrainVectorizer = scipy.sparse.hstack((TfIdfTrain, scipy.sparse.csr_matrix(NounTrain).T))
posTrainVectorizer = scipy.sparse.hstack((posTrainVectorizer, scipy.sparse.csr_matrix(AdjectiveTrain).T))
posTrainVectorizer = scipy.sparse.hstack((posTrainVectorizer, scipy.sparse.csr_matrix(AdverbTrain).T))
posTrainVectorizer = scipy.sparse.hstack((posTrainVectorizer, scipy.sparse.csr_matrix(VerbTrain).T))
###Output
_____no_output_____
###Markdown
Part-of-Speech features for Test set
###Code
AdjectiveTest, AdverbTest, NounTest, VerbTest = posTag(testSet['Comment'])
###Output
_____no_output_____
###Markdown
Append tf-idf and Part-of-Speech features for test set
###Code
posTestVectorizer = scipy.sparse.hstack((TfIdfTest, scipy.sparse.csr_matrix(NounTest).T))
posTestVectorizer = scipy.sparse.hstack((posTestVectorizer, scipy.sparse.csr_matrix(AdjectiveTest).T))
posTestVectorizer = scipy.sparse.hstack((posTestVectorizer, scipy.sparse.csr_matrix(AdverbTest).T))
posTestVectorizer = scipy.sparse.hstack((posTestVectorizer, scipy.sparse.csr_matrix(VerbTest).T))
###Output
_____no_output_____
###Markdown
Test score for Tf-idf PoS model
###Code
classifierMultinomialNB.fit(posTrainVectorizer, trainSet['Insult'])
posVectorizerPredict = classifierMultinomialNB.predict(posTestVectorizer)
print('Accuracy Score:', accuracy_score(y_test, posVectorizerPredict))
print('F1 Score:', f1_score(y_test, posVectorizerPredict))
###Output
Accuracy Score: 0.545413870246085
F1 Score: 0.11343804537521815
###Markdown
SVM
###Code
svc = svm.SVC(kernel='linear', C=1.0, gamma=0.9)
svc.fit(posTrainVectorizer,trainSet['Insult'])
posVectorizerSVM = svc.predict(posTestVectorizer)
print ('Accuracy Score:', accuracy_score(y_test, posVectorizerSVM))
print ('Test F1:', f1_score(y_test, posVectorizerSVM))
###Output
Accuracy Score: 0.6926174496644295
Test F1: 0.6094371802160318
###Markdown
Random Decision Forest
###Code
randomDecisionForest = RandomForestClassifier(n_estimators = 150)
randomDecisionForest.fit(posTrainVectorizer, trainSet['Insult'])
posVectorizerRandomForest = randomDecisionForest.predict(posTestVectorizer)
print ('Accuracy Score:', accuracy_score(y_test, posVectorizerRandomForest))
print ('Test F1:', f1_score(y_test, posVectorizerRandomForest))
###Output
Accuracy Score: 0.6259507829977629
Test F1: 0.42024965325936203
###Markdown
Beat the benchmark with proper data processing: lemmatization, stop-word removal, Tf-idf and SVM
###Code
#I couldn't improve the scores much,
#as the comments contain a lot of slang and obfuscation that make it hard to tell,
#even with modern algorithms, whether the words are offensive or not.
#If the dataset values were labeled correctly, I could produce better results.
TfIdf = TfidfVectorizer(ngram_range=(1, 2))
trainSet['commentLemmatization'] = removeStopWords(trainSet['commentLemmatization'])
testSet['commentLemmatization'] = removeStopWords(testSet['commentLemmatization'])
TfIdfTrain = TfIdf.fit_transform(trainSet['commentLemmatization'])
TfIdfTest = TfIdf.transform(testSet['commentLemmatization'])
svc.fit(TfIdfTrain,trainSet['Insult'])
TfIdfPredict = svc.predict(TfIdfTest)
print ('Accuracy Score:', accuracy_score(y_test, TfIdfPredict))
print ('F1 Score:', f1_score(y_test, TfIdfPredict))
###Output
Accuracy Score: 0.6917225950782998
F1 Score: 0.6005797101449276
###Markdown
Imports
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pickle
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, f1_score
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Processing Patients CSV preprocessing
###Code
dtypes = {"patient_id": np.uint16, "marital_status": bool, "height_in": np.uint8,
"num_cohabitants": np.uint8, "dist_to_doctor_mi": np.float16, "annual_household_income": np.uint32}
df_patients = pd.read_csv('data/patients_v2.csv', index_col = 0, dtype = dtypes, parse_dates = ['dob'])
df_patients = pd.get_dummies(df_patients, columns=['sex'], prefix = 'gender', dtype = bool)
bin_labels = ['low_income', 'middle_income', 'high_income']
df_patients['income_bracket'] = pd.qcut(df_patients.annual_household_income,
q = [0, .25, .75, 1], labels = bin_labels)
df_patients = pd.get_dummies(df_patients, columns=['income_bracket'], prefix = 'bracket', dtype = bool)
df_patients = df_patients.rename(columns={"dob": "patient_dob", "gender_F": "gender_female", "gender_M": "gender_male"})
###Output
_____no_output_____
###Markdown
Daily Measurements preprocessing
###Code
dtypes = {"patient_id": np.uint16, "steps": np.int16, "diastolic": np.float16,
"systolic": np.float16, "weight_lb": np.float16}
df_daily_measurements = pd.read_csv('data/daily_measurements_v2.csv', index_col = 0,
dtype = dtypes, parse_dates = ['date'])
df_daily_measurements.steps = df_daily_measurements.steps.apply(lambda x: x * -1 if x < 0 else x)
bin_labels = ['low', 'medium', 'high']
df_daily_measurements['diastolic_range'] = pd.qcut(df_daily_measurements.diastolic,
q = [0, .05, .95, 1], labels = bin_labels)
df_daily_measurements = pd.get_dummies(df_daily_measurements, columns=['diastolic_range'], dtype = bool)
df_daily_measurements['systolic_range'] = pd.qcut(df_daily_measurements.systolic,
q = [0, .05, .95, 1], labels = bin_labels)
df_daily_measurements = pd.get_dummies(df_daily_measurements, columns=['systolic_range'], dtype = bool)
bin_labels = ['very_low', 'low', 'medium', 'high', 'very_high']
df_daily_measurements['steps_bracket'] = pd.qcut(df_daily_measurements.steps,  # bracket the steps column (was mistakenly using systolic)
                                                    q = [0, .2, .4, .6, .8, 1], labels = bin_labels)
df_daily_measurements = pd.get_dummies(df_daily_measurements, columns=['steps_bracket'], dtype = bool)
df_daily_measurements = df_daily_measurements.rename( columns = {"date": "measurement_date"})
###Output
_____no_output_____
###Markdown
Appointments preprocessing
###Code
dtypes = {"patient_id": np.uint16, "attended": bool}
df_appointments = pd.read_csv('data/appointments_v2.csv', index_col = 0, dtype = dtypes, parse_dates = ['date'])
df_appointments = pd.get_dummies(df_appointments, columns=['weather'], dtype = bool)
df_appointments = df_appointments.rename(columns={"date": "appointment_date"})
###Output
_____no_output_____
###Markdown
DataFrame Merging & Prep for Model
###Code
# filter measurements to day of appointment only (time-series task here; could use WMA/EWMA, days till appointment, etc.)
day_of_appointment = df_daily_measurements[(df_daily_measurements.measurement_date.isin(np.unique(df_appointments.appointment_date)))]
# now join into appointments for same day measurements, and drop the measurement date (redundant with appointment_date)
df_appointments = pd.merge(df_appointments.reset_index(), day_of_appointment.reset_index(), how='inner',
left_on = ['patient_id', 'appointment_date'], right_on = ['patient_id','measurement_date'])
df_appointments.drop(columns=['measurement_date'], inplace=True)
df_appointments = df_appointments.set_index('patient_id')
df_appointments = df_appointments.merge(df_patients, how = 'inner', left_index = True, right_index = True)
# finally, calculate the age in years of the patient (at time of appointment) and drop the dates
def calculateAgeAtAppointment(appointment_date, birth_date):
return int((appointment_date - birth_date).days / 365.2425)
df_appointments['age_year_at_time_of_appointment'] = df_appointments.apply(lambda x: calculateAgeAtAppointment(x.appointment_date, x.patient_dob), axis = 1)
df_appointments.age_year_at_time_of_appointment = df_appointments.age_year_at_time_of_appointment.astype(np.uint8)
df_appointments.drop(columns=['appointment_date', 'patient_dob'], inplace=True)
# create our x,y dataframes for model fitting and drop all prior (unnecessary) dataframes
x = df_appointments.copy().reset_index()
x = x.drop(columns=['patient_id'])
y = x.attended.values
x = x.drop(columns=['attended'])
x = x.values
del day_of_appointment, df_daily_measurements, df_patients, dtypes, bin_labels
###Output
_____no_output_____
###Markdown
Model Fitting & KFold Evaluation
###Code
# kfold with 10 splits
kf = KFold(n_splits = 10)
# set basic parameters for the LightGBM classifier and instantiate the model
params = {'n_jobs':-1, 'random_state': 42, 'n_estimators': 500, 'learning_rate': 0.01}
model = lgb.LGBMClassifier(**params)
# prep dict object for saving performance of each k run
results, idx = {"k":[],"acc":[],"f1":[]}, -1
# train and measure performance of the model k-times
for train_index, test_index in kf.split(x):
idx += 1
model.fit(x[train_index], y[train_index], verbose = False)
pred = model.predict(x[test_index])
results["k"].append(idx)
results["acc"].append(accuracy_score(y[test_index], pred))
results["f1"].append(f1_score(y[test_index], pred))
# store final results into pandas dataframe
results = pd.DataFrame(results)
# show average accuracy and f1-score
print(f"mu acc: {np.mean(results.acc)}, mu f1: {np.mean(results.f1)}")
###Output
mu acc: 0.9420000000000002, mu f1: 0.9657482248910492
###Markdown
Plot Performance of Model over K iterations
###Code
mu_acc, mu_f1 = round(np.mean(results.acc) * 100, 2), round(np.mean(results.f1) * 100, 2)
###########################################################################################################
## Plotting Palette
###########################################################################################################
# Create a dict object containing U.C. Berkeley official school colors for plot palette
# reference : https://alumni.berkeley.edu/brand/color-palette
berkeley_palette = {'berkeley_blue' : '#003262',
'california_gold' : '#FDB515',
'metallic_gold' : '#BC9B6A',
'founders_rock' : '#2D637F',
'medalist' : '#E09E19',
'bay_fog' : '#C2B9A7',
'lawrence' : '#00B0DA',
'sather_gate' : '#B9D3B6',
'pacific' : '#53626F',
'soybean' : '#9DAD33',
'california_purple' : '#5C3160',
'south_hall' : '#6C3302'}
plt.rc('text', usetex = True)
plt.rc('font', family = 'sans-serif')
f = plt.figure(figsize=(15,10), dpi = 100)
ax = f.add_subplot(111)
ax.plot(results.k, results.acc, color = berkeley_palette['founders_rock'], label = 'Accuracy')
ax.scatter(results.k, results.acc, color = berkeley_palette['berkeley_blue'], marker = 'D', s = 50)
ax.plot(results.k, results.f1, color = berkeley_palette['california_gold'], label = 'F1 Score')
ax.scatter(results.k, results.f1, color = berkeley_palette['medalist'], marker = 'D', s = 50)
# set the title and axis labels
ax.set_title(r"\textbf{LightGBM Classifier Performance - Accuracy and F1-Score (K=10)}",
color = berkeley_palette['berkeley_blue'], fontsize = 25, fontweight = 'bold')
ax.set_xlabel("Kth iteration", fontsize = 20, horizontalalignment='right', x = 1.0, color = berkeley_palette['berkeley_blue'])
ax.set_ylabel("Performance", fontsize = 20, horizontalalignment='right', y = 1.0, color = berkeley_palette['berkeley_blue'])
# pretty up the plot axis labels and borders
ax.set_xticks(range(len(results)))
ax.set_yticks(np.arange(0.8, 1.01, .05))
ax.set_xticklabels(results.k + 1)
ax.set_yticklabels([str(int(round(i,2) * 100)) + "\%" for i in np.arange(0.8, 1.01, .05)])
ax.spines["top"].set_alpha(.0)
ax.spines["bottom"].set_alpha(.3)
ax.spines["right"].set_alpha(.0)
ax.spines["left"].set_alpha(.3)
# add average annotations
ax.annotate(r"$\mu\,\,Accuracy: " + str(mu_acc) + "\%$", xy=(.105, .5), xycoords='axes fraction',
horizontalalignment='left', verticalalignment='top',
fontsize = 25, color = berkeley_palette['founders_rock'])
ax.annotate(r"$\mu\,\,f1\,score: " + str(mu_f1) + "\%$", xy=(.105, .43), xycoords='axes fraction',
horizontalalignment='left', verticalalignment='top',
fontsize = 25, color = berkeley_palette['medalist'])
# plot the legend
ax.legend(loc='lower right', fontsize = 'xx-large', fancybox = True, edgecolor = berkeley_palette['pacific'])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Plot the Feature Importance of the LGB Classifier
###Code
# train the model over the entire data-set
x = df_appointments.copy().reset_index()
x = x.drop(columns=['patient_id'])
y = x.attended.values
x = x.drop(columns=['attended'])
model.fit(x, y, verbose = False)
# plot the importance
plt.rc('text', usetex = False)
plt.style.use('seaborn-white')
f = plt.figure(figsize=(15,10), dpi = 100)
ax = f.add_subplot(111)
lgb.plot_importance(model, ax = ax, height = .7, dpi = 100)
plt.show()
###Output
_____no_output_____
###Markdown
Pickle the LightGBM Classifier
###Code
with open('model/bby.pkl', 'wb') as f:
pickle.dump(model, f)
###Output
_____no_output_____
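###Markdown
A hypothetical round-trip check (illustrative only): reload the pickled classifier and reuse it on a few rows of the feature matrix prepared above.
###Code
# Reload the persisted LightGBM model and predict on the first rows of x.
with open('model/bby.pkl', 'rb') as f:
    loaded_model = pickle.load(f)
print(loaded_model.predict(x[:5]))
###Output
_____no_output_____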
###Markdown
**Global Variables**
###Code
ID = {}
Firstname = {}
Lastname = {}
Degree = {}
###Output
_____no_output_____
###Markdown
Import Libraries
###Code
import cv2
import numpy as np
import pickle
import os
import glob
from tqdm import tqdm
import math
###Output
_____no_output_____
###Markdown
Read Forms & Extract Cells
###Code
cd drive/My\ Drive/Cv_Final_Project
def empty_cell(cell):
x1 = int(cell.shape[0] * 0.2)
x2 = int(cell.shape[0] * 0.8)
y1 = int(cell.shape[1] * 0.2)
y2 = int(cell.shape[1] * 0.8)
cell = cell[x1:x2, y1:y2]
gray = cv2.cvtColor(cell, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 57, 7)
total_white = cv2.countNonZero(thresh)
ratio = total_white / float((x2-x1) * (y2-y1))
if ratio > 0.98:
return True
return False
def detect_degree(options):
min_ratio = math.inf
min_index = 0
for index, opt in enumerate(options):
x1 = int(opt.shape[0] * 0.2)
x2 = int(opt.shape[0] * 0.8)
y1 = int(opt.shape[1] * 0.2)
y2 = int(opt.shape[1] * 0.8)
opt = opt[x1:x2, y1:y2]
gray = cv2.cvtColor(opt, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 57, 5)
total_white = cv2.countNonZero(thresh)
ratio = total_white / float(thresh.shape[0] * thresh.shape[1])
if (ratio < min_ratio):
min_ratio = ratio
min_index = index
return min_index
def extracted_form_test(path):
I = cv2.imread(path)
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
parameters = cv2.aruco.DetectorParameters_create()
markerCorners, markerIds, rejectedCandidates = cv2.aruco.detectMarkers(I, dictionary, parameters=parameters)
aruco_list = {}
for k in range(len(markerCorners)):
temp_1 = markerCorners[k]
temp_1 = temp_1[0]
temp_2 = markerIds[k]
temp_2 = temp_2[0]
aruco_list[temp_2] = temp_1
p1 = aruco_list[34][3]
p2 = aruco_list[35][2]
p3 = aruco_list[33][0]
p4 = aruco_list[36][1]
width = 500
height = 550
points2 = np.array([(0, 0),
(width, 0),
(0, height),
(width, height)]).astype(np.float32)
points1 = np.array([p1, p2, p3, p4], dtype=np.float32)
output_size = (width, height)
H = cv2.getPerspectiveTransform(points1, points2)
J = cv2.warpPerspective(I, H, output_size)
gray = cv2.cvtColor(J, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 57, 7)
kernelOpen = np.ones((2, 2), np.uint8)
open = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernelOpen)
kernel = np.ones((2, 2), np.uint8)
close = cv2.morphologyEx(open, cv2.MORPH_CLOSE, kernel)
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
min_x = 5
max_x = 50
dir = path.split(os.path.sep)[-1]
folder_name = dir[:-4]
image_number = 1
count = 0
index = 0
degree = ["دکتری", "کارشناسی ارشد", "کارشناسی"]
info = ['ID','FN', 'LN']
degree_option = []
sorted_Y = sorted(cnts, key=lambda ctr: cv2.boundingRect(ctr)[1])
sorted_X = sorted(cnts, key=lambda ctr: cv2.boundingRect(ctr)[0])
for c in sorted_Y:
x, y, w, h = cv2.boundingRect(c)
if (x > min_x and x < max_x and y < 280 and w>10):
square_w = w // 9 + 5
h -= 5
for i in range(0, 8):
x_start = x + square_w * i
cell = J[y:y + h, x_start:x_start + square_w]
if not os.path.exists('extracted_form_test/' + str(folder_name)):
os.makedirs('extracted_form_test/' + str(folder_name))
if not empty_cell(cell):
cv2.imwrite('extracted_form_test/{}/{}.png'.format(str(folder_name) , info[index] + str(image_number)), cell)
image_number += 1
index += 1
image_number = 1
for c in sorted_X:
x, y, w, h = cv2.boundingRect(c)
if (w > 16 and w < 22 and y > 300 and h / w > 0.95):
cell = J[y:y + h, x:x + w]
if not os.path.exists('extracted_form_test/' + str(folder_name)):
os.makedirs('extracted_form_test/' + str(folder_name))
cv2.imwrite('extracted_form_test/{}/{}.png'.format(str(folder_name), degree[count]), cell)
degree_option.append(cell)
count += 1
return degree[detect_degree(degree_option)]
form_test_dir = glob.glob("form_test/*")
for test_dir in tqdm(form_test_dir):
# Degree.append(extracted_form_test(test_dir))
dir = test_dir.split(os.path.sep)[-1]
folder_name = dir[:-4]
Degree[folder_name] = extracted_form_test(test_dir)
###Output
100%|██████████| 20/20 [05:27<00:00, 16.40s/it]
###Markdown
Prepare Dataset Create Dataset Folders
###Code
codes = ['0', '1', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26',
'2', '3',
'4', '5', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '6',
'7', '8', '9'
]
folder_counter = np.zeros(42)
def extract_cells(f,file):
I = f
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
parameters = cv2.aruco.DetectorParameters_create()
markerCorners, markerIds, rejectedCandidates = cv2.aruco.detectMarkers(I, dictionary, parameters=parameters)
aruco_list = {}
for k in range(len(markerCorners)):
temp_1 = markerCorners[k]
temp_1 = temp_1[0]
temp_2 = markerIds[k]
temp_2 = temp_2[0]
aruco_list[temp_2] = temp_1
p1 = aruco_list[30][0]
p2 = aruco_list[31][1]
p3 = aruco_list[32][3]
p4 = aruco_list[33][2]
points2 = np.array([(0, 0),
(392, 0),
(0, 588),
(392, 588)]).astype(np.float32)
points1 = np.array([p1, p2, p3, p4], dtype=np.float32)
m = 588
n = 392
output_size = (n, m)
H = cv2.getPerspectiveTransform(points1, points2)
J = cv2.warpPerspective(I, H, output_size)
col_number = n // 28
row_number = m // 28
form_num = file[8]
for i in range(row_number):
for j in range(col_number):
ROI = J[i * 28:(i * 28) + 28, j * 28:(j * 28) + 28]
index = i if form_num == '1' else i + 21
if not os.path.exists('dataset/'+codes[index]):
os.makedirs('dataset/'+ codes[index])
if not (i < 2 and (j < 2 or j > 11)) and not (i > 18 and (j < 2 or j > 11)):
cv2.imwrite('dataset/{}/{}.png'.format(codes[index], str(folder_counter[int(codes[index])])), ROI)
folder_counter[int(codes[index])] += 1
###Output
_____no_output_____
###Markdown
Watch Out
###Code
raw_data = './dataset/'
images_list = os.listdir(raw_data)
for file in images_list:
f = cv2.imread(raw_data + file)
extract_cells(f,file)
###Output
_____no_output_____
###Markdown
Labeling
###Code
from keras.preprocessing.image import load_img, img_to_array  # needed for the image loading below

train_dirs_digits = glob.glob("dataset/digits/*")
train_dirs_digits.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
data_digit = []
labels_digit = []
for train_dir in tqdm(train_dirs_digits):
imgPaths = glob.glob(train_dir + "/*.png")
imgPaths.sort()
for imgPath in tqdm(imgPaths):
image = load_img(imgPath, target_size=(28, 28), grayscale=True)
image = img_to_array(image)
data_digit.append(image)
label = imgPath.split(os.path.sep)[-2]
label = int(label)
labels_digit.append(label)
with open('data_digit.pkl', 'wb') as f:
pickle.dump(data_digit, f)
with open('labels_digit.pkl', 'wb') as fi:
pickle.dump(labels_digit, fi)
train_dirs = glob.glob("dataset/letter/*")
train_dirs.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
data_letter = []
labels_letter = []
for train_dir in tqdm(train_dirs):
imgPaths = glob.glob(train_dir + "/*.png")
imgPaths.sort()
for imgPath in tqdm(imgPaths):
image = load_img(imgPath, target_size=(28, 28), grayscale=True)
image = img_to_array(image)
data_letter.append(image)
label = imgPath.split(os.path.sep)[-2]
label = int(label)
labels_letter.append(label)
with open('data_letter.pkl', 'wb') as f:
pickle.dump(data_letter, f)
with open('labels_letter.pkl', 'wb') as fi:
pickle.dump(labels_letter, fi)
###Output
_____no_output_____
###Markdown
Reading Data & Label Files
###Code
with open('data_digit.pkl', 'rb') as f:
data_digit = pickle.load(f)
with open('labels_digit.pkl', 'rb') as fi:
labels_digit = pickle.load(fi)
with open('data_letter.pkl', 'rb') as f:
data_letter = pickle.load(f)
with open('labels_letter.pkl', 'rb') as fi:
labels_letter = pickle.load(fi)
print("digits data size:")
print(len(data_digit),len(labels_digit))
print("letters data size:")
print(len(data_letter),len(labels_letter))
###Output
digits data size:
15848 15848
letters data size:
65884 65884
###Markdown
Digit Neural Network
###Code
num_classes_digit = 10
EPOCHS_digit = 10
BS_digit = 32
data_digit = np.array(data_digit, dtype=np.float) / 255.
labels_digit = np.array(labels_digit)
from sklearn.model_selection import train_test_split
train_input_digit, test_input_digit, train_target_digit, test_target_digit = train_test_split(data_digit,
labels_digit,
test_size=0.05,
random_state=123)
train_input_digit, valid_input_digit, train_target_digit, valid_target_digit = train_test_split(train_input_digit,
train_target_digit,
test_size=0.25,
random_state=123)
from keras.utils import to_categorical
train_target_digit = to_categorical(train_target_digit, num_classes=num_classes_digit)
valid_target_digit = to_categorical(valid_target_digit, num_classes=num_classes_digit)
from keras.preprocessing.image import ImageDataGenerator
aug = ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1)
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from keras.models import Model
def build_model(inputs):
x = inputs
x = Conv2D(filters=20, kernel_size=(5, 5), padding="same", activation="relu")(x)
x = MaxPool2D(pool_size=(2, 2), strides=(2, 2))(x)
x = Conv2D(filters=50, kernel_size=(5, 5), padding="same", activation="relu")(x)
x = MaxPool2D(pool_size=(2, 2), strides=(2, 2))(x)
x = Flatten()(x)
x = Dense(500, activation="relu")(x)
outputs = Dense(num_classes_digit, activation="softmax")(x)
model = Model(inputs, outputs, name="LeNet")
model.summary()
return model
from keras.layers import Input
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
input = Input((28, 28, 1))
model_digit = build_model(input)
opt = Adam()
model_digit.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["acc"])
checkpoint = ModelCheckpoint(filepath="model_digit.h5",
monitor="val_acc",
verbose=1,
save_best_only=True)
batches = aug.flow(train_input_digit, train_target_digit, batch_size=BS_digit)
training_log = model_digit.fit_generator(batches,
samples_per_epoch=batches.n,
steps_per_epoch=len(train_input_digit) // BS_digit,
validation_data=[valid_input_digit, valid_target_digit],
epochs=EPOCHS_digit,
callbacks=[checkpoint])
from matplotlib import pyplot as plt
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(EPOCHS_digit), training_log.history["loss"], label="train_loss")
plt.plot(np.arange(EPOCHS_digit), training_log.history["acc"], label="train_acc")
plt.plot(np.arange(EPOCHS_digit), training_log.history["val_loss"], label="val_loss")
plt.plot(np.arange(EPOCHS_digit), training_log.history["val_acc"], label="val_acc")
plt.xlabel("Epochs")
plt.ylabel("loss/accuracy")
plt.title("training plot digit")
plt.legend(loc="lower left")
plt.savefig("training_plot_digit.png")
correct = 0
for idx,test in enumerate(test_input_digit):
test = np.expand_dims(test, 0)
model_digit.load_weights("model_digit.h5")
predictions = model_digit.predict(test)[0]
label = np.argmax(predictions)
if (label == test_target_digit[idx]):
correct += 1
test_accuracy = correct/len(test_target_digit)
print("test_accuracy_digit:" , test_accuracy)
###Output
test_accuracy_digit: 0.9773013871374527
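###Markdown
The loop above reloads the saved checkpoint for every single test image; an equivalent way to get the same test accuracy, sketched here, is to load the weights once and score the whole held-out set with a single predict call.
###Code
# Load the best checkpoint once, then evaluate the full digit test set in one batch.
model_digit.load_weights("model_digit.h5")
pred = np.argmax(model_digit.predict(test_input_digit), axis=1)
print("test_accuracy_digit:", np.mean(pred == test_target_digit))
###Output
_____no_output_____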
###Markdown
Letter Neural Network
###Code
num_classes_letter = 32
EPOCHS_letter = 20
BS_letter = 32
data_letter = np.array(data_letter, dtype=np.float) / 255.
labels_letter = np.array(labels_letter)
from sklearn.model_selection import train_test_split
train_input_letter, test_input_letter, train_target_letter, test_target_letter = train_test_split(data_letter,
labels_letter,
test_size=0.05,
random_state=123)
train_input_letter, valid_input_letter, train_target_letter, valid_target_letter = train_test_split(train_input_letter,
train_target_letter,
test_size=0.25,
random_state=123)
from keras.utils import to_categorical
train_target_letter = to_categorical(train_target_letter, num_classes=num_classes_letter)
valid_target_letter = to_categorical(valid_target_letter, num_classes=num_classes_letter)
from keras.preprocessing.image import ImageDataGenerator
aug = ImageDataGenerator(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1)
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from keras.models import Model
def build_model_letter(inputs):
x = inputs
x = Conv2D(filters=20, kernel_size=(5, 5), padding="same", activation="relu")(x)
x = MaxPool2D(pool_size=(2, 2), strides=(2, 2))(x)
x = Conv2D(filters=50, kernel_size=(5, 5), padding="same", activation="relu")(x)
x = MaxPool2D(pool_size=(2, 2), strides=(2, 2))(x)
x = Flatten()(x)
x = Dense(500, activation="relu")(x)
outputs = Dense(num_classes_letter, activation="softmax")(x)
model = Model(inputs, outputs, name="LeNet")
model.summary()
return model
from keras.layers import Input
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
input = Input((28, 28, 1))
model_letter = build_model_letter(input)
opt = Adam()
model_letter.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["acc"])
checkpoint = ModelCheckpoint(filepath="model_letter.h5",
monitor="val_acc",
verbose=1,
save_best_only=True)
batches = aug.flow(train_input_letter, train_target_letter, batch_size=BS_letter)
training_log = model_letter.fit_generator(batches,
samples_per_epoch=batches.n,
steps_per_epoch=len(train_input_letter) // BS_letter,
validation_data=[valid_input_letter, valid_target_letter],
epochs=EPOCHS_letter,
callbacks=[checkpoint])
from matplotlib import pyplot as plt
plt.style.use("ggplot")
plt.figure()
plt.plot(np.arange(EPOCHS_letter), training_log.history["loss"], label="train_loss")
plt.plot(np.arange(EPOCHS_letter), training_log.history["acc"], label="train_acc")
plt.plot(np.arange(EPOCHS_letter), training_log.history["val_loss"], label="val_loss")
plt.plot(np.arange(EPOCHS_letter), training_log.history["val_acc"], label="val_acc")
plt.xlabel("Epochs")
plt.ylabel("loss/accuracy")
plt.title("training plot letter")
plt.legend(loc="lower left")
plt.savefig("training_plot_letter.png")
correct = 0
for idx,test in enumerate(test_input_letter):
test = np.expand_dims(test, 0)
model_letter.load_weights("model_letter.h5")
predictions = model_letter.predict(test)[0]
label = np.argmax(predictions)
if (label == test_target_letter[idx]):
correct += 1
test_accuracy = correct/len(test_target_letter)
print("test_accuracy_letter:" , test_accuracy)
###Output
test_accuracy_letter: 0.9138088012139606
###Markdown
Results
###Code
def predict(model,model_type, imgPath):
image = load_img(imgPath, target_size=(28, 28), grayscale=True)
image = img_to_array(image) / 255.
orig_img = image.copy()
image = np.expand_dims(image, 0)
model.load_weights(model_type)
predictions = model.predict(image)[0]
label = np.argmax(predictions)
return label
def decode(code):
codes={ 0:'ا',1:'ب',2:'پ',3:'ت',4:'ث',5:'ج',6:'چ',7:'ح',8:'خ',9:'د',10:'ذ',11:'ر',
12:'ز',13:'ژ',14:'س',15:'ش',16:'ص',17:'ض',18:'ط',19:'ظ',20:'ع',21:'غ',22:'ف',23:'ق',
24:'ک',25:'گ',26:'ل',27:'م',28:'ن',29:'و',30:'ه',31:'ی'}
plain = codes[code]
return plain
from keras.layers import Input
input = Input((28, 28, 1))
model_digit = build_model(input)
model_letter = build_model_letter(input)
from keras.preprocessing.image import load_img, img_to_array
model_type = None
detected_ID = ''
detected_FN = ''
detected_LN = ''
test_forms = glob.glob("extracted_form_test/*")
test_forms.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
for test_dir in test_forms:
form_id = test_dir.split(os.path.sep)[-1]
imgPaths = glob.glob(test_dir + "/*.png")
imgPaths.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
for imgPath in tqdm(imgPaths):
image = load_img(imgPath, target_size=(28, 28), grayscale=True)
image = img_to_array(image) / 255.
orig_img = image.copy()
image = np.expand_dims(image, 0)
if(imgPath.find('ID') != -1):
model_type = 'model_digit.h5'
detected_ID += str(predict(model_digit,model_type,imgPath))
if(imgPath.find('FN') != -1):
model_type = 'model_letter.h5'
detected_FN += decode(predict(model_letter,model_type,imgPath))
if(imgPath.find('LN')!=-1):
model_type = 'model_letter.h5'
detected_LN += decode(predict(model_letter,model_type,imgPath))
ID[form_id] = detected_ID
detected_ID = ''
Firstname[form_id] = detected_FN[::-1]
detected_FN = ''
Lastname[form_id] = detected_LN[::-1]
detected_LN = ''
for key,value in Lastname.items():
print("-------------------------------------")
print("Image:", key)
print("ID",ID[key])
print("Firstname",Firstname[key])
print("Lastname",Lastname[key])
print("Degree", Degree[key])
import timeit
start = timeit.default_timer()
form_test_dir = glob.glob("form_test/*")
for test_dir in tqdm(form_test_dir):
dir = test_dir.split(os.path.sep)[-1]
folder_name = dir[:-4]
Degree[folder_name] = extracted_form_test(test_dir)
input = Input((28, 28, 1))
model_digit = build_model(input)
model_letter = build_model_letter(input)
from keras.preprocessing.image import load_img, img_to_array
model_type = None
detected_ID = ''
detected_FN = ''
detected_LN = ''
test_forms = glob.glob("extracted_form_test/*")
test_forms.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
for test_dir in test_forms:
form_id = test_dir.split(os.path.sep)[-1]
imgPaths = glob.glob(test_dir + "/*.png")
imgPaths.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
for imgPath in tqdm(imgPaths):
image = load_img(imgPath, target_size=(28, 28), grayscale=True)
image = img_to_array(image) / 255.
orig_img = image.copy()
image = np.expand_dims(image, 0)
if(imgPath.find('ID') != -1):
model_type = 'model_digit.h5'
detected_ID += str(predict(model_digit,model_type,imgPath))
if(imgPath.find('FN') != -1):
model_type = 'model_letter.h5'
detected_FN += decode(predict(model_letter,model_type,imgPath))
if(imgPath.find('LN')!=-1):
model_type = 'model_letter.h5'
detected_LN += decode(predict(model_letter,model_type,imgPath))
ID[form_id] = detected_ID
detected_ID = ''
Firstname[form_id] = detected_FN[::-1]
detected_FN = ''
Lastname[form_id] = detected_LN[::-1]
detected_LN = ''
for key,value in Lastname.items():
print("-------------------------------------")
print("Form:", key)
print("ID",ID[key])
print("Firstname",Firstname[key])
print("Lastname",Lastname[key])
print("degree:",Degree[key])
stop = timeit.default_timer()
print('Time: ', stop - start)
###Output
100%|██████████| 20/20 [00:05<00:00, 3.79it/s]
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/utils.py:107: UserWarning: grayscale is deprecated. Please use color_mode = "grayscale"
warnings.warn('grayscale is deprecated. Please use '
|
Data Science Resources/Jose portila - ML/04-Matplotlib/.ipynb_checkpoints/00-Matplotlib-Basics-checkpoint.ipynb | ###Markdown
___ ___ MATPLOTLIB---- Matplotlib Basics Introduction Matplotlib is the "grandfather" library of data visualization with Python. It was created by John Hunter. He created it to try to replicate MatLab's (another programming language) plotting capabilities in Python. So if you happen to be familiar with matlab, matplotlib will feel natural to you.It is an excellent 2D and 3D graphics library for generating scientific figures. Some of the major Pros of Matplotlib are:* Generally easy to get started for simple plots* Support for custom labels and texts* Great control of every element in a figure* High-quality output in many formats* Very customizable in generalMatplotlib allows you to create reproducible figures programmatically. Let's learn how to use it! Before continuing this lecture, I encourage you just to explore the official Matplotlib web page: http://matplotlib.org/ Installation If you are using our environment, its already installed for you. If you are not using our environment (not recommended), you'll need to install matplotlib first with either: conda install matplotlibor pip install matplotlib Importing Import the `matplotlib.pyplot` module under the name `plt` (the tidy way):
###Code
# COMMON MISTAKE!
# DON'T FORGET THE .PYPLOT part
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
**NOTE: If you are using an older version of jupyter, you need to run a "magic" command to see the plots inline with the notebook. Users of jupyter notebook 1.0 and above, don't need to run the cell below:**
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
**NOTE: For users running .py scripts in an IDE like PyCharm or Sublime Text Editor. You will not see the plots in a notebook, instead if you are using another editor, you'll use: *plt.show()* at the end of all your plotting commands to have the figure pop up in another window.** Basic ExampleLet's walk through a very simple example using two numpy arrays: Basic Array PlotLet's walk through a very simple example using two numpy arrays. You can also use lists, but most likely you'll be passing numpy arrays or pandas columns (which essentially also behave like arrays).**The data we want to plot:**
###Code
import numpy as np
x = np.arange(0,10)
y = 2*x
x
y
###Output
_____no_output_____
###Markdown
Using Matplotlib with plt.plot() function calls Basic Matplotlib CommandsWe can create a very simple line plot using the following ( I encourage you to pause and use Shift+Tab along the way to check out the document strings for the functions we are using).
###Code
plt.plot(x, y)
plt.xlabel('X Axis Title Here')
plt.ylabel('Y Axis Title Here')
plt.title('String Title Here')
plt.show() # Required for non-jupyter users, but also removes Out[] info
###Output
_____no_output_____
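###Markdown
For the .py / IDE workflow mentioned earlier, the same example only changes in that plt.show() is required to open the figure window; a minimal standalone-script version might look like this.
###Code
# basic_plot.py -- standalone-script version of the plot above
import numpy as np
import matplotlib.pyplot as plt
x = np.arange(0,10)
y = 2*x
plt.plot(x, y)
plt.xlabel('X Axis Title Here')
plt.ylabel('Y Axis Title Here')
plt.title('String Title Here')
plt.show() # opens the figure in a separate window when run as a script
###Output
_____no_output_____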
###Markdown
Editing more figure parameters
###Code
plt.plot(x, y)
plt.xlabel('X Axis Title Here')
plt.ylabel('Y Axis Title Here')
plt.title('String Title Here')
plt.xlim(0,6) # Lower Limit, Upper Limit
plt.ylim(0,12) # Lower Limit, Upper Limit
plt.show() # Required for non-jupyter users, but also removes Out[] info
###Output
_____no_output_____
###Markdown
Exporting a plot
###Code
help(plt.savefig)
plt.plot(x,y)
plt.savefig('example.png')
###Output
_____no_output_____ |
docs/07a-M2-Parameters.ipynb | ###Markdown
M² and Beam Quality Parameters**Scott Prahl****July 2020**In this notebook, the basic definitions of the beam waist, beam divergence, beam product, and M² are introduced.As Ross points out in his book, *Laser Beam Quality Metrics*, describing a laser beam by a few numbers is an approximation that discards quite a lot of information.> Any attempt to reduce the behavior of a seven-dimensional object to a single number inevitably results in loss of information.where the seven dimensions consist of three-amplitude, three-phase, and time. Nevertheless, M² is a simple, widely-used metric for characterizing laser beams.
###Code
import numpy as np
import matplotlib.pyplot as plt
import laserbeamsize as lbs
###Output
_____no_output_____
###Markdown
Minimum Beam RadiusThe minimum beam radius $w_0$ (and its location $z_0$) tell us a lot about a laser beam. We know that a laser beam must have a minimum beam radius somewhere. If we assume that the beam obeys Gaussian beam propagation rules then we can make a few observations.> It would seem that $w$ should stand for *width*, but it doesn't. This means that $w$ is not the diameter but the radius. Go figure.A laser cavity with a flat mirror will have its minimum beam radius at that mirror. For diode lasers, the beam exits through a cleaved flat surface. Since the gain medium in a diode laser usually has a rectangular cross section, there are two different minimum beam radii associated with the exit aperture. These are often assumed to correspond to the dimensions of the gain medium. In general, though, the beam waist happens somewhere inside the laser and both its location and radius are unknown. To determine the beam waist, an aberration-free focusing lens is used to create a new beam waist external to the cavity that can be measured. Gaussian Beam RadiusThe parameter $w(z)$ represents the beam radius at an axial location $z$. When $z = z_0$, the beam reaches its minimum radius $w_0$,$$w^2(z)=w_0^2\left[1+\left(\frac{z-z_0}{z_R}\right)^2\right]$$where $z_R=\pi w_0^2/\lambda M^2$.Therefore, for a simple Gaussian beam (M²=1), the minimum radius $w_0$ and its location $z_0$ determine the beam size everywhere (assuming, of course, that the wavelength is known). As can be seen in the plot below, the beam reaches a minimum and then expands symmetrically about the axial location of the minimum.
###Code
w0=0.1e-3 # radius of beam waist [m]
lambda0=632.8e-9 # wavelength again in m
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance in m
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
plt.vlines(0,0,w0,color='black',lw=1)
plt.text(0,w0/2,' $w_0$', va='center')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
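###Markdown
As a quick numerical check of the expression above (reusing the w0, lambda0, and zR just defined): at one Rayleigh distance from the waist the radius should equal $\sqrt{2}\,w_0$.
###Code
# Evaluate w(z) at z = zR and compare with the closed-form value sqrt(2)*w0.
w_at_zR = lbs.beam_radius(w0, lambda0, zR)
print(w_at_zR, np.sqrt(2)*w0) # the two numbers should agree
###Output
_____no_output_____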
###Markdown
Beam Divergence $\Theta$All beams diverge or spread out as the beam propagates along the $z$ direction. The far-field divergence is defined as the half-angle$$\theta=\lim_{z\rightarrow\infty}\frac{w(z)}{z} = \frac{w_0}{z_R}$$where $w(z)$ is the beam radius at a distance $z$. The full angle $\Theta$ is$$\Theta=\lim_{z\rightarrow\infty}\frac{d(z)}{z}= \frac{2 w_0}{z_R}$$
###Code
w0=0.1e-3 # radius of beam waist [m]
lambda0=632.8e-9 # wavelength again in m
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance in m
theta = w0/zR
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.plot(z,theta*z,'--b')
plt.plot(z,-theta*z,'--b')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Beam Divergence")
plt.text(120e-3, 0e-3, r'$\Theta=2\theta$', fontsize=14, va='center',color='blue')
plt.annotate('',xy=(100e-3,0.2e-3),xytext=(100e-3,-0.2e-3),arrowprops=dict(connectionstyle="arc3,rad=0.2", arrowstyle="<->",color='blue'))
#plt.xticks([])
#plt.yticks([])
plt.show()
###Output
_____no_output_____
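###Markdown
A small numerical check of the divergence relations for the beam plotted above (w0 = 0.1 mm, lambda = 632.8 nm): the geometric half-angle $w_0/z_R$ should match the Gaussian-limit value $\lambda/(\pi w_0)$, roughly 2 mrad here.
###Code
# Far-field half-angle computed two ways for the same beam.
theta_geometric = w0/zR
theta_gaussian = lambda0/(np.pi*w0)
print(theta_geometric, theta_gaussian) # both ~2.0e-3 rad
###Output
_____no_output_____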
###Markdown
For a perfect Gaussian beam, the beam divergence is completely determined by its minimum beam radius $w_{00}$$$\Theta_{00} = \frac{\lambda}{\pi w_{00}}$$where the 00 subscript indicates that these only apply to the TEM$_{00}$ or fundamental gaussian mode. Beam Parameter ProductLaser beam quality can be described by combining the previous two metrics into a single beam parameter product (BPP) or$$\mathrm{BPP} = w \cdot \Theta$$where $w$ is the radius of the beam (at its waist/narrowest point) and $\Theta$ is the half-angle measure of the beam divergence in the far-field.This is not unlike the throughput parameter (area $\times$ solid angle) from radiometry which captures both the angular expansion of light and focusing into a single variable. The BPP represents, for instance, the amount of light that can be coupled into a fiber. For practical use of the BPP, seeWang, [Fiber coupled diode laser beam parameter product calculation and rules for optimized design](https://www.researchgate.net/publication/253527159_Fiber_Coupled_Diode_Laser_Beam_Parameter_Product_Calculation_and_Rules_for_Optimized_Design), *Proc. SPIE*, **7918**, 9 (2011) M² or the beam propagation factorIt turns out that real beams differ from perfect Gaussian beams. Specifically, they diverge more quickly or don't focus to the same size spot.The beam propagation factor M² is a measure of how close a beam is to Gaussian (TEM$_{00}$ mode).Johnston and Sasnett write in their chapter "Characterization of Laser Beams: The M² Model" in the *Handbook of Optical and Laser Scanning*, Marcel Dekker, (2004)::> Unlike the fundamental mode beam where the 1/e$^2$-diameter definition is universally understood and applied, for mixed modes a number of different diameter definitions have been employed. The different definitions have in common that they all reduce to the 1/e$^2$-diameter when applied to an $M^2=1$ fundamental mode beam, but when applied to a mixed mode with higher order mode content, they in general give different numerical values. As M² always depends on a product of two measured diameters, its numerical value changes also as the square of that for diameters. It is all the same beam, but different methods provide results in different currencies; one has to specify what currency is in use and know the exchange rate.M² is defined as the ratio of the beam parameter product (BPP)$$M^2 = \frac{\mathrm{BPP}}{\mathrm{BPP}_{00}} = \frac{\Theta \cdot w_0}{\Theta_{00}\cdot w_{00}}$$where $\Theta$ is the far-field beam divergence and $w_0$ is the minimum beam radius of the real beam. The beam divergence of a perfect Gaussian is$$\Theta_{00} = \frac{\lambda}{\pi w_{00}}$$and therefore the beam quality factor becomes$$M^2 = \frac{\Theta \cdot w_0}{\lambda\cdot \pi}$$where radius $w_0$ is the minimum radius for the real beam. A Gaussian beam has M²=1, while all other beams will have M²>1. Moreover,* for a given *beam radius*, the Gaussian beam has the smallest possible beam divergence* for a given *beam divergence*, the Gaussian beam has the smallest possible beam radius. A multimode beam has a beam waist which is M² times larger than a fundamental Gaussian beam with the same beam divergence, or a beam divergence which is M² times larger than that of a fundamental Gaussian beam with the same beam waist. Astigmatic or Elliptical BeamsA simple stigmatic beam has rotational symmetry — any cross section will display a circular profile. However, a simple astigmatic beam will have elliptical cross-sections. 
It is *simple* because the major and minor axes of the ellipse remain in the same plane (a general astigmatic beam will have elliptical cross-sections that rotate with propagation distance). For an elliptical beam, the beam waist radius, beam waist location, and Rayleigh distance will differ on the semi-major and semi-minor axes. Unsurprisingly, the M² values may differ as well$$w_x^2(z) = w_{0x}^2\left[1 + \left(\frac{z-z_0}{z_{Rx}} \right)^2\right]$$and$$w_y^2(z) = w_{0y}^2\left[1 + \left(\frac{z-z_0}{z_{Ry}} \right)^2\right]$$Two different M² values for the major and minor axes of the elliptical beam shape arise from the two Rayleigh distances$$z_{Rx} = \frac{\pi w_{0x}^2}{\lambda M_x^2} \qquad\mbox{and}\qquad z_{Ry} = \frac{\pi w_{0y}^2}{\lambda M_y^2}$$ Rayleigh Distance $z_R$The Rayleigh distance $z_R$ is the distance from the beam waist to the point where the beam area has doubled. This means that the irradiance (power/area) has dropped 50% or that beam radius has increased by a factor of $\sqrt{2}$.Interestingly, the radius of curvature of the beam is largest at one Rayleigh distance from the beam waist.The Rayleigh distance for a real beam defined as$$z_R=\frac{\pi w_0^2}{\lambda M^2}$$where $w_0$ is the minimum beam radius of the beam.
###Code
w0=0.1 # radius of beam waist [mm]
lambda0=0.6328/1000 # again in mm
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance
theta = w0/zR
z = np.linspace(-3*zR,3*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
#plt.axvline(z0,color='black',lw=1)
plt.axvline(zR,color='blue', linestyle=':')
plt.axvline(-zR,color='blue', linestyle=':')
plt.text(zR, -3*w0, ' $z_R$')
plt.text(-zR, -3*w0, '$-z_R$ ', ha='right')
plt.text(0,w0/2,' $w_0$', va='center')
plt.text(zR,w0/2,' $\sqrt{2}w_0$', va='center')
plt.vlines(0,0,w0,color='black',lw=2)
plt.vlines(zR,0,np.sqrt(2)*w0,color='black',lw=2)
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Rayleigh Distance")
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
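###Markdown
To connect the definitions above to numbers (the waist radius and divergence below are made-up measurements, not data from a real beam): since the fundamental-mode product is $\Theta_{00}\cdot w_{00} = \lambda/\pi$, dividing a measured $\Theta\cdot w_0$ by $\lambda/\pi$ gives M².
###Code
# Hypothetical measured beam: waist radius and far-field divergence (example values).
lambda0 = 632.8e-9 # wavelength [m]
w0_measured = 0.25e-3 # measured waist radius [m] (example value)
theta_measured = 1.5e-3 # measured far-field divergence half-angle [rad] (example value)
bpp = w0_measured*theta_measured # beam parameter product [m*rad]
bpp_00 = lambda0/np.pi # ideal Gaussian BPP, lambda/pi
M2 = bpp/bpp_00 # equivalently pi*w0*theta/lambda
print("BPP =", bpp, "m*rad")
print("M squared =", M2)
###Output
_____no_output_____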
###Markdown
M² and Beam Quality Parameters**Scott Prahl****Mar 2021**In this notebook, the basic definitions of the beam waist, beam divergence, beam product, and M² are introduced.As Ross points out in his book, *Laser Beam Quality Metrics*, describing a laser beam by a few numbers is an approximation that discards quite a lot of information.> Any attempt to reduce the behavior of a seven-dimensional object to a single number inevitably results in loss of information.where the seven dimensions consist of three-amplitude, three-phase, and time. Nevertheless, M² is a simple, widely-used metric for characterizing laser beams.---*If* `` laserbeamsize `` *is not installed, uncomment the following cell (i.e., delete the initial ) and execute it with* `` shift-enter ``. *Afterwards, you may need to restart the kernel/runtime before the module will import successfully.*
###Code
#!pip install --user laserbeamsize
import numpy as np
import matplotlib.pyplot as plt
try:
import laserbeamsize as lbs
except ModuleNotFoundError:
print('laserbeamsize is not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
###Output
_____no_output_____
###Markdown
Minimum Beam RadiusThe minimum beam radius $w_0$ (and its location $z_0$) tell us a lot about a laser beam. We know that a laser beam must have a minimum beam radius somewhere. If we assume that the beam obeys Gaussian beam propagation rules then we can make a few observations.> It would seem that $w$ should stand for *width*, but it doesn't. This means that $w$ is not the diameter but the radius. Go figure.A laser cavity with a flat mirror will have its minimum beam radius at that mirror. For diode lasers, the beam exits through a cleaved flat surface. Since the gain medium in a diode laser usually has a rectangular cross section, there are two different minimum beam radii associated with the exit aperture. These are often assumed to correspond to the dimensions of the gain medium. In general, though, the beam waist happens somewhere inside the laser and both its location and radius are unknown. To determine the beam waist, an aberration-free focusing lens is used to create a new beam waist external to the cavity that can be measured. Gaussian Beam RadiusThe parameter $w(z)$ represents the beam radius at an axial location $z$. When $z = z_0$, the beam reaches its minimum radius $w_0$,$$w^2(z)=w_0^2\left[1+\left(\frac{z-z_0}{z_R}\right)^2\right]$$where $z_R=\pi w_0^2/\lambda M^2$.Therefore, for a simple Gaussian beam (M²=1), the minimum radius $w_0$ and its location $z_0$ determine the beam size everywhere (assuming, of course, that the wavelength is known). As can be seen in the plot below, the beam reaches a minimum and then expands symmetrically about the axial location of the minimum.
###Code
w0=0.1e-3 # radius of beam waist [m]
lambda0=632.8e-9 # wavelength again in m
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance in m
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
plt.vlines(0,0,w0,color='black',lw=1)
plt.text(0,w0/2,' $w_0$', va='center')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Beam Divergence $\Theta$All beams diverge or spread out as the beam propagates along the $z$ direction. The far-field divergence is defined as the half-angle$$\theta=\lim_{z\rightarrow\infty}\frac{w(z)}{z} = \frac{w_0}{z_R}$$where $w(z)$ is the beam radius at a distance $z$. The full angle $\Theta$ is$$\Theta=\lim_{z\rightarrow\infty}\frac{d(z)}{z}= \frac{2 w_0}{z_R}$$
###Code
w0=0.1e-3 # radius of beam waist [m]
lambda0=632.8e-9 # wavelength again in m
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance in m
theta = w0/zR
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.plot(z,theta*z,'--b')
plt.plot(z,-theta*z,'--b')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Beam Divergence")
plt.text(120e-3, 0e-3, r'$\Theta=2\theta$', fontsize=14, va='center',color='blue')
plt.annotate('',xy=(100e-3,0.2e-3),xytext=(100e-3,-0.2e-3),arrowprops=dict(connectionstyle="arc3,rad=0.2", arrowstyle="<->",color='blue'))
#plt.xticks([])
#plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
For a perfect Gaussian beam, the beam divergence is completely determined by its minimum beam radius $w_{00}$$$\Theta_{00} = \frac{\lambda}{\pi w_{00}}$$where the 00 subscript indicates that these only apply to the TEM$_{00}$ or fundamental gaussian mode. Beam Parameter ProductLaser beam quality can be described by combining the previous two metrics into a single beam parameter product (BPP) or$$\mathrm{BPP} = w \cdot \Theta$$where $w$ is the radius of the beam (at its waist/narrowest point) and $\Theta$ is the half-angle measure of the beam divergence in the far-field.This is not unlike the throughput parameter (area $\times$ solid angle) from radiometry which captures both the angular expansion of light and focusing into a single variable. The BPP represents, for instance, the amount of light that can be coupled into a fiber. For practical use of the BPP, seeWang, [Fiber coupled diode laser beam parameter product calculation and rules for optimized design](https://www.researchgate.net/publication/253527159_Fiber_Coupled_Diode_Laser_Beam_Parameter_Product_Calculation_and_Rules_for_Optimized_Design), *Proc. SPIE*, **7918**, 9 (2011) M² or the beam propagation factorIt turns out that real beams differ from perfect Gaussian beams. Specifically, they diverge more quickly or don't focus to the same size spot.The beam propagation factor M² is a measure of how close a beam is to Gaussian (TEM$_{00}$ mode).Johnston and Sasnett write in their chapter "Characterization of Laser Beams: The M² Model" in the *Handbook of Optical and Laser Scanning*, Marcel Dekker, (2004)::> Unlike the fundamental mode beam where the 1/e$^2$-diameter definition is universally understood and applied, for mixed modes a number of different diameter definitions have been employed. The different definitions have in common that they all reduce to the 1/e$^2$-diameter when applied to an $M^2=1$ fundamental mode beam, but when applied to a mixed mode with higher order mode content, they in general give different numerical values. As M² always depends on a product of two measured diameters, its numerical value changes also as the square of that for diameters. It is all the same beam, but different methods provide results in different currencies; one has to specify what currency is in use and know the exchange rate.M² is defined as the ratio of the beam parameter product (BPP)$$M^2 = \frac{\mathrm{BPP}}{\mathrm{BPP}_{00}} = \frac{\Theta \cdot w_0}{\Theta_{00}\cdot w_{00}}$$where $\Theta$ is the far-field beam divergence and $w_0$ is the minimum beam radius of the real beam. The beam divergence of a perfect Gaussian is$$\Theta_{00} = \frac{\lambda}{\pi w_{00}}$$and therefore the beam quality factor becomes$$M^2 = \frac{\Theta \cdot w_0}{\lambda\cdot \pi}$$where radius $w_0$ is the minimum radius for the real beam. A Gaussian beam has M²=1, while all other beams will have M²>1. Moreover,* for a given *beam radius*, the Gaussian beam has the smallest possible beam divergence* for a given *beam divergence*, the Gaussian beam has the smallest possible beam radius. A multimode beam has a beam waist which is M² times larger than a fundamental Gaussian beam with the same beam divergence, or a beam divergence which is M² times larger than that of a fundamental Gaussian beam with the same beam waist. Astigmatic or Elliptical BeamsA simple stigmatic beam has rotational symmetry — any cross section will display a circular profile. However, a simple astigmatic beam will have elliptical cross-sections. 
It is *simple* because the major and minor axes of the ellipse remain in the same plane (a general astigmatic beam will have elliptical cross-sections that rotate with propagation distance). For an elliptical beam, the beam waist radius, beam waist location, and Rayleigh distance will differ on the semi-major and semi-minor axes. Unsurprisingly, the M² values may differ as well$$w_x^2(z) = w_{0x}^2\left[1 + \left(\frac{z-z_0}{z_{Rx}} \right)^2\right]$$and$$w_y^2(z) = w_{0y}^2\left[1 + \left(\frac{z-z_0}{z_{Ry}} \right)^2\right]$$Two different M² values for the major and minor axes of the elliptical beam shape arise from the two Rayleigh distances$$z_{Rx} = \frac{\pi w_{0x}^2}{\lambda M_x^2} \qquad\mbox{and}\qquad z_{Ry} = \frac{\pi w_{0y}^2}{\lambda M_y^2}$$ Rayleigh Distance $z_R$The Rayleigh distance $z_R$ is the distance from the beam waist to the point where the beam area has doubled. This means that the irradiance (power/area) has dropped 50% or that beam radius has increased by a factor of $\sqrt{2}$.Interestingly, the radius of curvature of the beam is largest at one Rayleigh distance from the beam waist.The Rayleigh distance for a real beam defined as$$z_R=\frac{\pi w_0^2}{\lambda M^2}$$where $w_0$ is the minimum beam radius of the beam.
###Code
w0=0.1 # radius of beam waist [mm]
lambda0=0.6328/1000 # again in mm
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance
theta = w0/zR
z = np.linspace(-3*zR,3*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
#plt.axvline(z0,color='black',lw=1)
plt.axvline(zR,color='blue', linestyle=':')
plt.axvline(-zR,color='blue', linestyle=':')
plt.text(zR, -3*w0, ' $z_R$')
plt.text(-zR, -3*w0, '$-z_R$ ', ha='right')
plt.text(0,w0/2,' $w_0$', va='center')
plt.text(zR,w0/2,' $\sqrt{2}w_0$', va='center')
plt.vlines(0,0,w0,color='black',lw=2)
plt.vlines(zR,0,np.sqrt(2)*w0,color='black',lw=2)
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Rayleigh Distance")
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
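###Markdown
For the simple astigmatic case described above, the two Rayleigh distances follow directly from $z_R = \pi w_0^2/(\lambda M^2)$ applied per axis; the waist radii and M² values below are made-up example numbers, not measurements.
###Code
# Hypothetical semi-axis waists and beam-quality factors for a simple astigmatic beam.
lambda0 = 632.8e-9 # wavelength [m]
w0x, w0y = 0.20e-3, 0.12e-3 # waist radii along x and y [m] (example values)
M2x, M2y = 1.3, 2.1 # per-axis beam quality factors (example values)
zRx = np.pi*w0x**2/(lambda0*M2x)
zRy = np.pi*w0y**2/(lambda0*M2y)
print("zRx =", zRx, "m")
print("zRy =", zRy, "m")
###Output
_____no_output_____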
###Markdown
Beam Quality Parameters**Scott Prahl****June 2020, version 3**In this notebook, the basic definitions of the beam waist, beam divergence, beam product, and M² are introduced.
###Code
import numpy as np
import matplotlib.pyplot as plt
import laserbeamsize as lbs
###Output
_____no_output_____
###Markdown
Minimum Beam RadiusThe minimum beam radius $w_0$ (and its location $z_0$) tell us a lot about a laser beam. We know that a laser beam must have a minimum beam radius somewhere. If we assume that the beam obeys Gaussian beam propagation rules then we can make a few observations.> It would seem that $w$ should stand for *width*, but it doesn't. This means that $w$ is not the diameter but the radius. Go figure.A laser cavity with a flat mirror will have its minimum beam radius at that mirror. For diode lasers, the beam exits through a cleaved flat surface. Since the gain medium in a diode laser usually has a rectangular cross section, there are two different minimum beam radii associated with the exit aperture. These are often assumed to correspond to the dimensions of the gain medium. In general, though, the beam waist happens somewhere inside the laser and both its location and radius are unknown. To determine the beam waist, an aberration-free focusing lens is used to create a new beam waist external to the cavity that can be measured. Gaussian Beam RadiusThe parameter $w(z)$ represents the beam radius at an axial location $z$. When $z = z_0$, the beam reaches its minimum radius $w_0$,$$w^2(z)=w_0^2\left(1+\frac{(z-z_0)^2}{z_R^2}\right)$$where $z_R=\pi w_0^2/\lambda$.Therefore, for a simple Gaussian beam, the minimum radius $w_0$ and its location $z_0$ determine the beam size everywhere (assuming, of course, that the wavelength is known).As can be seen in the graph below, the beam reaches a minimum and then expands symmetrically about the axial location of the minimum.
###Code
w0=0.1 # radius of beam waist [mm]
lambda0=0.6328/1000 # again in mm
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance
theta = w0/zR
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
plt.vlines(0,0,w0,color='black',lw=1)
plt.text(0,w0/2,' $w_0$', va='center')
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Minimum Beam Radius")
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
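###Markdown
As a concrete number for the beam used above (w0 = 0.1 mm, lambda = 632.8 nm), the Rayleigh distance $\pi w_0^2/\lambda$ comes out to roughly 50 mm and should agree with lbs.z_rayleigh.
###Code
# Rayleigh distance from the closed-form expression vs. the library helper (units: mm).
zR_formula = np.pi*w0**2/lambda0
print(zR_formula, lbs.z_rayleigh(w0, lambda0)) # ~49.7 mm for this beam
###Output
_____no_output_____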
###Markdown
Beam Divergence $\Theta$All beams diverge or spread out as the beam propagates along the $z$ direction. The far-field divergence is defined as the half-angle$$\theta=\lim_{z\rightarrow\infty}\frac{w(z)}{z}$$where $w(z)$ is the beam radius at a distance $z$. The graph below illustrates this.
###Code
w0=0.1 # radius of beam waist [mm]
lambda0=0.6328/1000 # again in mm
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance
theta = w0/zR
z = np.linspace(-5*zR,5*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.plot(z,theta*z,'--b')
plt.plot(z,-theta*z,'--b')
plt.axhline(0,color='black',lw=1)
plt.axvline(0,color='black',lw=1)
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Beam Divergence")
plt.text(80, 0.08, r'$\Theta$', fontsize=14, va='center',color='blue')
plt.annotate('',xy=(100,0.2),xytext=(115,0),arrowprops=dict(connectionstyle="arc3,rad=0.2", arrowstyle="<->",color='blue'))
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
For a perfect Gaussian beam, the beam divergence is completely determined by its minimum beam radius $w_{00}$$$\Theta_{00} = \frac{\lambda}{\pi w_{00}}$$where the 00 subscript indicates that these only apply to the TEM$_{00}$ or fundamental gaussian mode. Beam Parameter ProductLaser beam quality can be described by combining the previous two metrics into a single beam parameter product (BPP) or$$\mathrm{BPP} = w \Theta$$where $w$ is the radius of the beam (at its waist/narrowest point) and $\Theta$ is the half-angle measure of the beam divergence in the far-field.This is not unlike the throughput parameter (area $\times$ solid angle) from radiometry which captures both the angular expansion of light and focusing into a single variable. The BPP represents, for instance, the amount of light that can be coupled into a fiber. For practical use of the BPP, seeWang, [Fiber coupled diode laser beam parameter product calculation and rules for optimized design](https://www.researchgate.net/publication/253527159_Fiber_Coupled_Diode_Laser_Beam_Parameter_Product_Calculation_and_Rules_for_Optimized_Design), *Proc. SPIE*, **7918**, 9 (2011) M² or the beam propagation factorIt turns out that real beams differ from perfect Gaussian beams. Specifically, they diverge more quickly or don't focus to the same size spot.The beam propagation factor M² is a measure of how close a beam is to Gaussian (TEM$_{00}$ mode).Johnston and Sasnett write in their chapter "Characterization of Laser Beams: The M² Model" in the *Handbook of Optical and Laser Scanning*, Marcel Dekker, (2004)::> Unlike the fundamental mode beam where the 1/e$^2$-diameter definition is universally understood and applied, for mixed modes a number of different diameter definitions have been employed. The different definitions have in common that they all reduce to the 1/e$^2$-diameter when applied to an $M^2=1$ fundamental mode beam, but when applied to a mixed mode with higher order mode content, they in general give different numerical values. As M² always depends on a product of two measured diameters, its numerical value changes also as the square of that for diameters. It is all the same beam, but different methods provide results in different currencies; one has to specify what currency is in use and know the exchange rate.M² is defined as the ratio of the beam parameter product (BPP)$$M^2 = \frac{\mathrm{BPP}}{\mathrm{BPP}_{00}} = \frac{\Theta \cdot w}{\Theta_{00}\cdot w_{00}}$$where $\Theta$ is the far-field beam divergence and $w$ is the minimum beam radius. The beam divergence of a perfect Gaussian is$$\Theta_{00} = \frac{\lambda}{\pi w_{00}}$$and therefore the beam quality factor becomes$$M^2 = \frac{\Theta \cdot w}{\lambda\cdot \pi}$$where radius $w$ is the minimum radius for the beam of interest. A Gaussian beam has $M^2=1$, while all other beams will have $M^2>1$. Moreover,* for a given *beam radius*, the Gaussian beam has the smallest possible beam divergence* for a given *beam divergence*, the Gaussian beam has the smallest possible beam radius. We find that the multimode beam has a beam waist which is M² times larger than a fundamental Gaussian beam with the same beam divergence, or a beam divergence which is M² times larger than that of a fundamental Gaussian beam with the same beam waist. Rayleigh Distance $z_R$The Rayleigh distance $z_R$ is the distance from the beam waist to the point where the beam area has doubled. 
This means that the irradiance (power/area) has dropped 50% or that the beam radius has increased by a factor of $\sqrt{2}$.The Rayleigh distance for a Gaussian beam is $$z_R=\frac{\pi w_0^2}{\lambda}$$
###Code
w0=0.1 # radius of beam waist [mm]
lambda0=0.6328/1000 # again in mm
zR=lbs.z_rayleigh(w0,lambda0) # Rayleigh Distance
theta = w0/zR
z = np.linspace(-3*zR,3*zR,100)
r = lbs.beam_radius(w0,lambda0,z)
plt.fill_between(z,-r,r,color='pink')
plt.axhline(0,color='black',lw=1)
#plt.axvline(z0,color='black',lw=1)
plt.axvline(zR,color='blue', linestyle=':')
plt.axvline(-zR,color='blue', linestyle=':')
plt.text(zR, -3*w0, ' $z_R$')
plt.text(-zR, -3*w0, '$-z_R$ ', ha='right')
plt.text(0,w0/2,' $w_0$', va='center')
plt.text(zR,w0/2,' $\sqrt{2}w_0$', va='center')
plt.vlines(0,0,w0,color='black',lw=2)
plt.vlines(zR,0,np.sqrt(2)*w0,color='black',lw=2)
plt.xlabel("Position on optical axis, $z$")
plt.ylabel("Beam radius, $w(z)$")
plt.title("Rayleigh Distance")
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____ |