path (string, length 7-265) | concatenated_notebook (string, length 46-17M) |
---|---|
Ch3/08_Training_Dov2Vec_using_Gensim.ipynb | ###Markdown
In this notebook we demonstrate how to train a doc2vec model on your custom corpus.
###Code
import warnings
warnings.filterwarnings('ignore')
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
from pprint import pprint
import nltk
nltk.download('punkt')
data = ["dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"]
tagged_data = [TaggedDocument(words=word_tokenize(word.lower()), tags=[str(i)]) for i, word in enumerate(data)]
tagged_data
#dbow
model_dbow = Doc2Vec(tagged_data,vector_size=20, min_count=1, epochs=2,dm=0)
print(model_dbow.infer_vector(['man','eats','food']))#feature vector of man eats food
model_dbow.wv.most_similar("man",topn=5)#top 5 most similar words.
model_dbow.wv.n_similarity(["dog"],["man"])
#dm
model_dm = Doc2Vec(tagged_data, min_count=1, vector_size=20, epochs=2,dm=1)
print("Inference Vector of man eats food\n ",model_dm.infer_vector(['man','eats','food']))
print("Most similar words to man in our corpus\n",model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ",model_dm.wv.n_similarity(["dog"],["man"]))
###Output
Inference Vector of man eats food
[-1.6232852e-02 7.2173858e-03 -1.8149856e-02 1.9396329e-02
-1.0752306e-02 2.1854490e-02 -1.0387184e-02 5.0630077e-04
-1.0485582e-02 -2.3733964e-02 -2.1500139e-02 1.1494617e-02
-7.5761047e-05 -9.6794488e-03 -1.1162374e-02 2.3743976e-02
5.5664619e-03 -2.3691194e-02 1.7469568e-02 -8.0082249e-03]
Most similar words to man in our corpus
[('dog', 0.1856406182050705), ('meat', 0.12032049894332886), ('bites', 0.037392228841781616), ('food', -0.027777723968029022), ('eats', -0.29439008235931396)]
Similarity between man and dog: 0.1856406
###Markdown
What happens when we compare words that are not in the vocabulary?
###Code
model_dm.wv.n_similarity(['covid'],['man'])
###Output
_____no_output_____
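###Markdown
A minimal sketch (not part of the original notebook) of guarding against out-of-vocabulary words before calling `n_similarity`. It assumes gensim 3.x, where the learned vocabulary is exposed as the dict `model_dm.wv.vocab`; comparing an unseen word such as 'covid' directly typically raises a KeyError. The helper name is illustrative, not a gensim function.
###Code
# Hedged sketch: check vocabulary membership before computing similarity.
def safe_n_similarity(model, words_1, words_2):
    vocab = set(model.wv.vocab)  # gensim 3.x vocabulary lookup
    if set(words_1) <= vocab and set(words_2) <= vocab:
        return model.wv.n_similarity(words_1, words_2)
    return None  # at least one word was never seen during training

print(safe_n_similarity(model_dm, ['covid'], ['man']))  # None: 'covid' is OOV
print(safe_n_similarity(model_dm, ['dog'], ['man']))    # a similarity score
###Output
_____no_output_____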
###Markdown
Doc2Vec
In this notebook we demonstrate how to train a doc2vec model on a custom corpus.
###Code
# To install only the requirements of this notebook, uncomment the lines below and run this cell
# ===========================
!pip install gensim==3.6.0
!pip install spacy==2.2.4
!pip install nltk==3.2.5
# ===========================
# To install the requirements for the entire chapter, uncomment the lines below and run this cell
# ===========================
# try :
# import google.colab
# !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch3/ch3-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
# !pip install -r "ch3-requirements.txt"
# ===========================
import warnings
warnings.filterwarnings('ignore')
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
from pprint import pprint
import nltk
nltk.download('punkt')
data = ["dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"]
tagged_data = [TaggedDocument(words=word_tokenize(word.lower()), tags=[str(i)]) for i, word in enumerate(data)]
tagged_data
#dbow
model_dbow = Doc2Vec(tagged_data,vector_size=20, min_count=1, epochs=2,dm=0)
print(model_dbow.infer_vector(['man','eats','food']))#feature vector of man eats food
model_dbow.wv.most_similar("man",topn=5)#top 5 most similar words.
model_dbow.wv.n_similarity(["dog"],["man"])
#dm
model_dm = Doc2Vec(tagged_data, min_count=1, vector_size=20, epochs=2,dm=1)
print("Inference Vector of man eats food\n ",model_dm.infer_vector(['man','eats','food']))
print("Most similar words to man in our corpus\n",model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ",model_dm.wv.n_similarity(["dog"],["man"]))
###Output
Inference Vector of man eats food
[-1.01456400e-02 -5.49062993e-03 -2.11605523e-02 -1.16518466e-02
3.54836439e-03 -7.06422143e-03 -9.27604642e-03 -2.83227302e-03
2.35041156e-02 -9.20040839e-05 2.26525515e-02 -8.97767674e-03
1.19706187e-02 -1.19358245e-02 1.34595484e-02 -2.25058738e-02
1.89621784e-02 -1.09350523e-02 1.78532843e-02 -1.49779590e-02]
Most similar words to man in our corpus
[('dog', 0.2630311846733093), ('eats', 0.23952406644821167), ('food', -0.11896046996116638), ('meat', -0.2617309093475342), ('bites', -0.306953489780426)]
Similarity between man and dog: 0.26303118
###Markdown
What happens when we compare words that are not in the vocabulary?
###Code
model_dm.wv.n_similarity(['covid'],['man'])
###Output
_____no_output_____
###Markdown
Doc2Vec
In this notebook we demonstrate how to train a doc2vec model on a custom corpus.
###Code
import warnings
warnings.filterwarnings('ignore')
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
from pprint import pprint
import nltk
nltk.download('punkt')
data = ["dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"]
tagged_data = [TaggedDocument(words=word_tokenize(word.lower()), tags=[str(i)]) for i, word in enumerate(data)]
tagged_data
#dbow
model_dbow = Doc2Vec(tagged_data,vector_size=20, min_count=1, epochs=2,dm=0)
print(model_dbow.infer_vector(['man','eats','food']))#feature vector of man eats food
model_dbow.wv.most_similar("man",topn=5)#top 5 most similar words.
model_dbow.wv.n_similarity(["dog"],["man"])
#dm
model_dm = Doc2Vec(tagged_data, min_count=1, vector_size=20, epochs=2,dm=1)
print("Inference Vector of man eats food\n ",model_dm.infer_vector(['man','eats','food']))
print("Most similar words to man in our corpus\n",model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ",model_dm.wv.n_similarity(["dog"],["man"]))
###Output
Inference Vector of man eats food
[-1.6232852e-02 7.2173858e-03 -1.8149856e-02 1.9396329e-02
-1.0752306e-02 2.1854490e-02 -1.0387184e-02 5.0630077e-04
-1.0485582e-02 -2.3733964e-02 -2.1500139e-02 1.1494617e-02
-7.5761047e-05 -9.6794488e-03 -1.1162374e-02 2.3743976e-02
5.5664619e-03 -2.3691194e-02 1.7469568e-02 -8.0082249e-03]
Most similar words to man in our corpus
[('dog', 0.1856406182050705), ('meat', 0.12032049894332886), ('bites', 0.037392228841781616), ('food', -0.027777723968029022), ('eats', -0.29439008235931396)]
Similarity between man and dog: 0.1856406
###Markdown
What happens when we compare words that are not in the vocabulary?
###Code
model_dm.wv.n_similarity(['covid'],['man'])
###Output
_____no_output_____
###Markdown
Doc2Vec
In this notebook we demonstrate how to train a doc2vec model on a custom corpus.
###Code
# To install only the requirements of this notebook, uncomment the lines below and run this cell
# ===========================
# !pip install gensim==3.6.0
# !pip install spacy==2.2.4
# !pip install nltk==3.2.5
# ===========================
# To install the requirements for the entire chapter, uncomment the lines below and run this cell
# ===========================
# try :
# import google.colab
# !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch3/ch3-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
# !pip install -r "ch3-requirements.txt"
# ===========================
import warnings
warnings.filterwarnings('ignore')
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
from pprint import pprint
import nltk
nltk.download('punkt')
data = ["dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"]
tagged_data = [TaggedDocument(words=word_tokenize(word.lower()), tags=[str(i)]) for i, word in enumerate(data)]
tagged_data
#dbow
model_dbow = Doc2Vec(tagged_data,vector_size=20, min_count=1, epochs=2,dm=0)
print(model_dbow.infer_vector(['man','eats','food']))#feature vector of man eats food
model_dbow.wv.most_similar("man",topn=5)#top 5 most similar words.
model_dbow.wv.n_similarity(["dog"],["man"])
#dm
model_dm = Doc2Vec(tagged_data, min_count=1, vector_size=20, epochs=2,dm=1)
print("Inference Vector of man eats food\n ",model_dm.infer_vector(['man','eats','food']))
print("Most similar words to man in our corpus\n",model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ",model_dm.wv.n_similarity(["dog"],["man"]))
###Output
Inference Vector of man eats food
[-0.01203259 0.01399781 0.00436171 -0.00180043 0.01481868 0.00915196
-0.00378094 -0.00889238 0.00451853 0.02051536 0.02342224 0.01624064
-0.00929315 -0.01506988 -0.02199879 0.01465174 0.02258903 -0.02092638
0.00850757 -0.01780711]
Most similar words to man in our corpus
[('meat', 0.39641645550727844), ('bites', 0.05595850199460983), ('dog', 0.050179000943899155), ('food', -0.06502582132816315), ('eats', -0.2928891181945801)]
Similarity between man and dog: 0.050179023
###Markdown
What happens when we compare words that are not in the vocabulary?
###Code
model_dm.wv.n_similarity(['covid'],['man'])
###Output
_____no_output_____
###Markdown
Doc2Vec
In this notebook we demonstrate how to train a doc2vec model on a custom corpus.
###Code
# To install only the requirements of this notebook, uncomment the lines below and run this cell
# ===========================
!pip install gensim==3.6.0
!pip install spacy==2.2.4
!pip install nltk==3.2.5
# ===========================
# To install the requirements for the entire chapter, uncomment the lines below and run this cell
# ===========================
# try :
# import google.colab
# !curl https://raw.githubusercontent.com/practical-nlp/practical-nlp/master/Ch3/ch3-requirements.txt | xargs -n 1 -L 1 pip install
# except ModuleNotFoundError :
# !pip install -r "ch3-requirements.txt"
# ===========================
import warnings
warnings.filterwarnings('ignore')
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize
from pprint import pprint
import nltk
nltk.download('punkt')
data = ["dog bites man",
"man bites dog",
"dog eats meat",
"man eats food"]
tagged_data = [TaggedDocument(words=word_tokenize(word.lower()), tags=[str(i)]) for i, word in enumerate(data)]
tagged_data
#dbow
model_dbow = Doc2Vec(tagged_data,vector_size=20, min_count=1, epochs=2,dm=0)
print(model_dbow.infer_vector(['man','eats','food']))#feature vector of man eats food
model_dbow.wv.most_similar("man",topn=5)#top 5 most similar words.
model_dbow.wv.n_similarity(["dog"],["man"])
#dm
model_dm = Doc2Vec(tagged_data, min_count=1, vector_size=20, epochs=2,dm=1)
print("Inference Vector of man eats food\n ",model_dm.infer_vector(['man','eats','food']))
print("Most similar words to man in our corpus\n",model_dm.wv.most_similar("man",topn=5))
print("Similarity between man and dog: ",model_dm.wv.n_similarity(["dog"],["man"]))
###Output
Inference Vector of man eats food
[-1.01456400e-02 -5.49062993e-03 -2.11605523e-02 -1.16518466e-02
3.54836439e-03 -7.06422143e-03 -9.27604642e-03 -2.83227302e-03
2.35041156e-02 -9.20040839e-05 2.26525515e-02 -8.97767674e-03
1.19706187e-02 -1.19358245e-02 1.34595484e-02 -2.25058738e-02
1.89621784e-02 -1.09350523e-02 1.78532843e-02 -1.49779590e-02]
Most similar words to man in our corpus
[('dog', 0.2630311846733093), ('eats', 0.23952406644821167), ('food', -0.11896046996116638), ('meat', -0.2617309093475342), ('bites', -0.306953489780426)]
Similarity between man and dog: 0.26303118
###Markdown
What happens when we compare words that are not in the vocabulary?
###Code
model_dm.wv.n_similarity(['covid'],['man'])
###Output
_____no_output_____ |
examples/validation.ipynb | ###Markdown
Corpus Validation
Clean and valid data is essential for successful machine learning. For this purpose the `validation` module provides different methods for validating a corpus against specific properties.
###Code
import audiomate
from audiomate.corpus import assets
from audiomate.corpus import io
from audiomate.corpus import validation
# clear the data if already existing
import shutil
shutil.rmtree('output/fsd', ignore_errors=True)
###Output
_____no_output_____
###Markdown
Data
First we download the Free-spoken-digit corpus and load it.
###Code
corpus_path = 'output/fsd'
io.FreeSpokenDigitDownloader().download(corpus_path)
corpus = audiomate.Corpus.load(corpus_path, reader='free-spoken-digits')
###Output
_____no_output_____
###Markdown
Perform validation and print the result
We can either perform a single validation task ...
###Code
val = validation.UtteranceTranscriptionRatioValidator(max_characters_per_second=6,
label_list_idx=assets.LL_WORD_TRANSCRIPT)
result = val.validate(corpus)
print(result.get_report())
###Output
Utterance-Transcription-Ratio (word-transcript)
===============================================
--> Label-List ID: word-transcript
--> Threshold max. characters per second: 6
Result: Failed
Invalid utterances:
* 2_theo_34 (6.211180124223603)
* 6_nicolas_23 (6.172839506172839)
* 6_nicolas_35 (6.177606177606178)
* 6_nicolas_7 (6.962576153176675)
* 6_nicolas_9 (6.354249404289119)
###Markdown
Or we can combine multiple validation tasks to run in one go.
###Code
val = validation.CombinedValidator(validators=[
validation.UtteranceTranscriptionRatioValidator(max_characters_per_second=6,
label_list_idx=assets.LL_WORD_TRANSCRIPT),
validation.LabelCountValidator(min_number_of_labels=1,
label_list_idx=assets.LL_WORD_TRANSCRIPT)
])
result = val.validate(corpus)
print(result.get_report())
###Output
Label-Count (word-transcript) --> Passed
Utterance-Transcription-Ratio (word-transcript) --> Failed
Label-Count (word-transcript)
=============================
--> Label-List ID: word-transcript
--> Min. number of labels: 1
Result: Passed
Utterance-Transcription-Ratio (word-transcript)
===============================================
--> Label-List ID: word-transcript
--> Threshold max. characters per second: 6
Result: Failed
Invalid utterances:
* 2_theo_34 (6.211180124223603)
* 6_nicolas_23 (6.172839506172839)
* 6_nicolas_35 (6.177606177606178)
* 6_nicolas_7 (6.962576153176675)
* 6_nicolas_9 (6.354249404289119)
###Markdown
Corpus Validation
Clean and valid data is essential for successful machine learning. For this purpose the `validation` module provides different methods for validating a corpus against specific properties.
###Code
import audiomate
from audiomate.corpus import io
from audiomate.corpus import validation
# clear the data if already existing
import shutil
shutil.rmtree('output/fsd', ignore_errors=True)
###Output
_____no_output_____
###Markdown
Data
First we download the Free-spoken-digit corpus and load it.
###Code
corpus_path = 'output/fsd'
io.FreeSpokenDigitDownloader().download(corpus_path)
corpus = audiomate.Corpus.load(corpus_path, reader='free-spoken-digits')
###Output
_____no_output_____
###Markdown
Perform validation and print the result
We can either perform a single validation task ...
###Code
val = validation.UtteranceTranscriptionRatioValidator(max_characters_per_second=6,
label_list_idx=audiomate.corpus.LL_WORD_TRANSCRIPT)
result = val.validate(corpus)
print(result.get_report())
###Output
Utterance-Transcription-Ratio (word-transcript)
===============================================
--> Label-List ID: word-transcript
--> Threshold max. characters per second: 6
Result: Failed
Invalid Utterances:
* 2_theo_34 (6.211180124223603)
* 6_nicolas_23 (6.172839506172839)
* 6_nicolas_35 (6.177606177606178)
* 6_nicolas_7 (6.962576153176675)
* 6_nicolas_9 (6.354249404289119)
* 6_yweweler_1 (6.39488409272582)
* 6_yweweler_10 (6.1443932411674345)
* 6_yweweler_17 (6.182380216383307)
* 6_yweweler_3 (6.968641114982579)
###Markdown
Or we can combine multiple validation tasks to run in one go.
###Code
val = validation.CombinedValidator(validators=[
validation.UtteranceTranscriptionRatioValidator(
max_characters_per_second=6,
label_list_idx=audiomate.corpus.LL_WORD_TRANSCRIPT
),
validation.LabelCountValidator(
min_number_of_labels=1,
label_list_idx=audiomate.corpus.LL_WORD_TRANSCRIPT
)
])
result = val.validate(corpus)
print(result.get_report())
###Output
Label-Count (word-transcript) --> Passed
Utterance-Transcription-Ratio (word-transcript) --> Failed
Label-Count (word-transcript)
=============================
--> Label-List ID: word-transcript
--> Min. number of labels: 1
Result: Passed
Utterance-Transcription-Ratio (word-transcript)
===============================================
--> Label-List ID: word-transcript
--> Threshold max. characters per second: 6
Result: Failed
Invalid Utterances:
* 2_theo_34 (6.211180124223603)
* 6_nicolas_23 (6.172839506172839)
* 6_nicolas_35 (6.177606177606178)
* 6_nicolas_7 (6.962576153176675)
* 6_nicolas_9 (6.354249404289119)
* 6_yweweler_1 (6.39488409272582)
* 6_yweweler_10 (6.1443932411674345)
* 6_yweweler_17 (6.182380216383307)
* 6_yweweler_3 (6.968641114982579)
|
breast-cancer-prediction.ipynb | ###Markdown
Clean and prepare data
###Code
df.drop('id',axis=1,inplace=True)
df.drop('Unnamed: 32',axis=1,inplace=True)
len(df)
df.diagnosis.unique()
# Convert the diagnosis labels to numeric values (M=1, B=0)
df['diagnosis'] = df['diagnosis'].map({'M':1,'B':0})
df.head()
# Explore data
df.describe()
df.describe()
plt.hist(df['diagnosis'])
plt.title('Diagnosis (M=1 , B=0)')
plt.show()
###Output
_____no_output_____
###Markdown
nucleus features vs diagnosis
###Code
features_mean=list(df.columns[1:11])
# split dataframe into two based on diagnosis
dfM=df[df['diagnosis'] ==1]
dfB=df[df['diagnosis'] ==0]
plt.rcParams.update({'font.size': 8})
fig, axes = plt.subplots(nrows=5, ncols=2, figsize=(8,10))
axes = axes.ravel()
for idx,ax in enumerate(axes):
ax.figure
binwidth= (max(df[features_mean[idx]]) - min(df[features_mean[idx]]))/50
ax.hist([dfM[features_mean[idx]],dfB[features_mean[idx]]], alpha=0.5,stacked=True, label=['M','B'],color=['r','g'],bins=np.arange(min(df[features_mean[idx]]), max(df[features_mean[idx]]) + binwidth, binwidth) , density = True,)
ax.legend(loc='upper right')
ax.set_title(features_mean[idx])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Observations
1. Mean values of cell radius, perimeter, area, compactness, concavity and concave points can be used in classification of the cancer. Larger values of these parameters tend to show a correlation with malignant tumors.
2. Mean values of texture, smoothness, symmetry or fractal dimension do not show a particular preference for one diagnosis over the other.
None of the histograms show noticeable large outliers that warrant further cleanup.
Creating a test set and a training set
Since this data set is not ordered, I am going to do a simple 70:30 split to create a training data set and a test data set.
###Code
traindf, testdf = train_test_split(df, test_size = 0.3)
###Output
_____no_output_____
###Markdown
Model Building
Here we are going to build a classification model and evaluate its performance using the training set.
Naive Bayes model
###Code
from sklearn.naive_bayes import GaussianNB
model=GaussianNB()
predictor_var = ['radius_mean','perimeter_mean','area_mean','compactness_mean','concave points_mean']
outcome_var='diagnosis'
model.fit(traindf[predictor_var],traindf[outcome_var])
predictions = model.predict(traindf[predictor_var])
accuracy = metrics.accuracy_score(predictions,traindf[outcome_var])
print("Accuracy : %s" % "{0:.3%}".format(accuracy))
import seaborn as sns
sns.heatmap(metrics.confusion_matrix(predictions,traindf[outcome_var]),annot=True)
from sklearn.model_selection import cross_val_score
from statistics import mean
print(mean(cross_val_score(model, traindf[predictor_var],traindf[outcome_var], cv=5))*100)
###Output
90.94936708860759
###Markdown
KNN Model
###Code
from sklearn.neighbors import KNeighborsClassifier
model=KNeighborsClassifier(n_neighbors=4)
predictor_var = ['radius_mean','perimeter_mean','area_mean','compactness_mean','concave points_mean']
outcome_var='diagnosis'
model.fit(traindf[predictor_var],traindf[outcome_var])
predictions = model.predict(traindf[predictor_var])
accuracy = metrics.accuracy_score(predictions,traindf[outcome_var])
print("Accuracy : %s" % "{0:.3%}".format(accuracy))
from sklearn.model_selection import cross_val_score
from statistics import mean
print(mean(cross_val_score(model, traindf[predictor_var],traindf[outcome_var], cv=5))*100)
import numpy as np
x_train=traindf[predictor_var]
y_train=traindf[outcome_var]
x_test=testdf[predictor_var]
y_test=testdf[outcome_var]
trainAccuracy=[]
testAccuracy=[]
errorRate=[]
for k in range(1,40):
model=KNeighborsClassifier(n_neighbors=k)
model.fit(x_train,y_train)
pred_i = model.predict(x_test)
errorRate.append(np.mean(pred_i != y_test))
trainAccuracy.append(model.score(x_train,y_train))
testAccuracy.append(model.score(x_test,y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,40),errorRate,color='blue', linestyle='dashed',
marker='o',markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
print("Minimum error:-",min(errorRate),"at K =",errorRate.index(min(errorRate))+1)
from matplotlib import pyplot as plt,style
plt.figure(figsize=(12,6))
plt.plot(range(1,40),trainAccuracy,label="Train Score",marker="o",markerfacecolor="teal",color="blue",linestyle="dashed")
plt.plot(range(1,40),testAccuracy,label="Test Score",marker="o",markerfacecolor="red",color="black",linestyle="dashed")
plt.legend()
plt.xlabel("Number of Neighbors")
plt.ylabel("Score")
plt.title("Nbd Vs Score")
plt.show()
###Output
_____no_output_____
###Markdown
Testing with new K Value= 30
###Code
from sklearn.neighbors import KNeighborsClassifier
model=KNeighborsClassifier(n_neighbors=31)
predictor_var = ['radius_mean','perimeter_mean','area_mean','compactness_mean','concave points_mean']
outcome_var='diagnosis'
model.fit(traindf[predictor_var],traindf[outcome_var])
predictions = model.predict(traindf[predictor_var])
accuracy = metrics.accuracy_score(predictions,traindf[outcome_var])
print("Accuracy : %s" % "{0:.3%}".format(accuracy))
from sklearn.model_selection import cross_val_score
from statistics import mean
print(mean(cross_val_score(model, traindf[predictor_var],traindf[outcome_var], cv=5))*100)
###Output
88.43987341772151
###Markdown
Using the Wisconsin breast cancer diagnostic data set for predictive analysis
Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32) Ten real-valued features are computed for each cell nucleus:
  a) radius (mean of distances from center to points on the perimeter)
  b) texture (standard deviation of gray-scale values)
  c) perimeter
  d) area
  e) smoothness (local variation in radius lengths)
  f) compactness (perimeter^2 / area - 1.0)
  g) concavity (severity of concave portions of the contour)
  h) concave points (number of concave portions of the contour)
  i) symmetry
  j) fractal dimension ("coastline approximation" - 1)
The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.
For this analysis, as a guide to predictive analysis, I followed the instructions and discussion on "A Complete Tutorial on Tree Based Modeling from Scratch (in R & Python)" at Analytics Vidhya.
Load Libraries
###Code
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import mpld3 as mpl
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn import metrics
###Output
_____no_output_____
###Markdown
Load the data
###Code
df = pd.read_csv("../input/data.csv",header = 0)
df.head()
###Output
_____no_output_____ |
CartPole/.ipynb_checkpoints/Q-learning-checkpoint.ipynb | ###Markdown
Hill Climb Test
###Code
import gym
import numpy as np
import matplotlib.pyplot as plt
from gym import wrappers
def run_episode(env, parameters):
observation = env.reset()
totalreward = 0
counter = 0
for _ in range(200):
env.render()
action = 0 if np.matmul(parameters, observation) < 0 else 1
observation, reward, done, info = env.step(action)
totalreward += reward
counter+= 1
if done:
break
return totalreward
def train(submit):
env = gym.make('CartPole-v0')
if submit:
env = wrappers.Monitor(env, '/tmp/CartPole-v0-hill-climbing', None, True)
episodes_per_update = 5
noise_scaling = 0.1
parameters = np.random.rand(4) * 2 - 1 # random weights between [-1, 1]
bestreward = 0
counter = 0
for episode in range(2000):
counter += 1
newparams = parameters + (np.random.rand(4) * 2 - 1) * noise_scaling
print(episode)
reward = run_episode(env, newparams)
if reward > bestreward:
bestreward = reward
parameters = newparams
if reward == 200:
print('Yay')
break
return counter
train(True)
###Output
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
###Markdown
Because it's hill climbing, I'm not surprised it struggles: your parameters are set.
Q-Learning
I follow this: https://dev.to/n1try/cartpole-with-q-learning---first-experiences-with-openai-gym
Q-learning makes a Q-table over discrete state-action pairs. Since the observation_space is a 4-tuple of floats, we will need to discretize it. But how many states should we discretize it into?
Goal: stay alive for 200 time steps.
Well, we take out x and x' because the cart probably won't leave the screen in 200 time steps. Now we are only left with theta (angle) and theta' (angular velocity) to worry about. Theta is in [-0.42, 0.42] while theta' is in [-3.4e38, 3.4e38].
Q-learning uses one function to fetch the best action from the Q-table and another function to update the Q-table based on the last action. Rewards are 1 for every time step alive.
The hyperparameters alpha (learning rate), epsilon (exploration rate) and gamma (discount factor) are interesting to choose.
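###Markdown
For reference, the tabular update implemented in the code below is the standard Q-learning rule $Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$, where $\alpha$ is the learning rate, $\gamma$ the discount factor and $r$ the reward received on the transition from state $s$ to $s'$.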
###Code
import gym
import numpy as np
import matplotlib.pyplot as plt
from gym import wrappers
from gym import ObservationWrapper
from gym import spaces
import math
###Output
_____no_output_____
###Markdown
Helper code to discretize the observation space. Copied from: https://github.com/ngc92/space-wrappers/blob/master/space_wrappers/observation_wrappers.py
###Code
from space_wrappers import observation_wrappers as ow
###Output
_____no_output_____
###Markdown
Q-learning algorithm following pseudocode from: https://towardsdatascience.com/introduction-to-various-reinforcement-learning-algorithms-i-q-learning-sarsa-dqn-ddpg-72a5e0cb6287
and mainly this dude's: https://dev.to/n1try/cartpole-with-q-learning---first-experiences-with-openai-gym
Here's his github: https://gist.github.com/n1try/af0b8476ae4106ec098fea1dfe57f578
Here's the reasoning he followed: https://medium.com/@tuzzer/cart-pole-balancing-with-q-learning-b54c6068d947
###Code
def Qlearning():
discount = 1.0 # You don't want to discount since your goal is to survive as long as possible
num_episodes = 1000
buckets=(1, 1, 6, 12,)
def discretize(obs):
upper_bounds = [env.observation_space.high[0], 0.5, env.observation_space.high[2], math.radians(50)]
lower_bounds = [env.observation_space.low[0], -0.5, env.observation_space.low[2], -math.radians(50)]
ratios = [(obs[i] + abs(lower_bounds[i])) / (upper_bounds[i] - lower_bounds[i]) for i in range(len(obs))]
new_obs = [int(round((buckets[i] - 1) * ratios[i])) for i in range(len(obs))]
new_obs = [min(buckets[i] - 1, max(0, new_obs[i])) for i in range(len(obs))]
return tuple(new_obs)
env = gym.make('CartPole-v0')
# Initialize a Q-table
num_actions = 2
qtable = np.zeros(buckets + (num_actions,))
# Loop for every episode
for ep in range(num_episodes):
# Optimized epsilon
epsilon = max(0.1, min(1, 1.0 - math.log10((ep + 1) / 25)))
alpha = max(0.1, min(1.0, 1.0 - math.log10((ep + 1) / 25)))
state = discretize(env.reset())
done = False
score = 0
# Loop for each step of episode
while not done:
if ep % 100 == 0:
env.render()
            # Select action using epsilon-greedy policy: either follow the policy or pick a random action
action = np.random.choice([np.argmax(qtable[state]), env.action_space.sample()], 1, p=[1-epsilon, epsilon])[0]
# Do the new action
observation, reward, done, info = env.step(action)
new_state = discretize(observation)
# Update Q Table
qtable[state][action] = qtable[state][action] + alpha * (reward + discount * np.max(qtable[new_state]) - qtable[state][action])
score += reward
state = new_state
print("Episode {}, Score: {}".format(ep, score))
env.close()
Qlearning()
# don't forget to do plots of the logistics and such
###Output
_____no_output_____ |
toronto_neighborhood_geographical_coordinates.ipynb | ###Markdown
Problem 2
Now that we have built a dataframe of the postal code of each neighborhood along with the borough name and neighborhood name, we need to get the latitude and longitude coordinates of each neighborhood in order to utilize the Foursquare location data.
###Code
# Import libraries
import pandas as pd
from bs4 import BeautifulSoup
import requests
# Website url
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
# Scrapping data from thw website
website_script = requests.get(url) # Website script (download the HTML content)
website_content = website_script.content # Website content (HTML content)
# Make HTML look Beautiful
website_soup = BeautifulSoup(website_content, 'html.parser')
# Get Toronto neighborhood dataframe
def get_toronto_neighborhood_df(soup, table_class):
# Table data
table = soup.find_all('table', class_=table_class)
# Table dataframe
df = pd.read_html(str(table))[0]
# Remove rows where Borough is 'Not assigned'
df = df[df['Borough'] != 'Not assigned']
# Sort ascending values
df.sort_values(by=['Postal Code'], ascending=True, inplace=True)
# Return dataframe
return df
# Get result dataframe
def get_result_df(neighborhood_df):
# Latitude and Longitude dataframe
lat_lng_coords_df = pd.read_csv('https://cocl.us/Geospatial_data')
# The result of both dataframe
result_df = pd.merge(neighborhood_df, lat_lng_coords_df, on="Postal Code")
# Rename Postal Code and Neighborhood column
result_df.rename(columns={"Neighbourhood": "Neighborhood", "Postal Code": "PostalCode"}, inplace=True)
# Reset index
result_df.reset_index(drop=True, inplace=True)
# Return result dataframe
return result_df
# Dataframe
toronto_neighborhood_df = get_toronto_neighborhood_df(website_soup, 'wikitable sortable') # Toronto neighborhood Dataframe
result_df = get_result_df(toronto_neighborhood_df) # Result dataframe
# Dataframe output
print(result_df.head(12))
###Output
PostalCode Borough Neighborhood Latitude Longitude
0 M1B Scarborough Malvern, Rouge 43.806686 -79.194353
1 M1C Scarborough Rouge Hill, Port Union, Highland Creek 43.784535 -79.160497
2 M1E Scarborough Guildwood, Morningside, West Hill 43.763573 -79.188711
3 M1G Scarborough Woburn 43.770992 -79.216917
4 M1H Scarborough Cedarbrae 43.773136 -79.239476
5 M1J Scarborough Scarborough Village 43.744734 -79.239476
6 M1K Scarborough Kennedy Park, Ionview, East Birchmount Park 43.727929 -79.262029
7 M1L Scarborough Golden Mile, Clairlea, Oakridge 43.711112 -79.284577
8 M1M Scarborough Cliffside, Cliffcrest, Scarborough Village West 43.716316 -79.239476
9 M1N Scarborough Birch Cliff, Cliffside West 43.692657 -79.264848
10 M1P Scarborough Dorset Park, Wexford Heights, Scarborough Town... 43.757410 -79.273304
11 M1R Scarborough Wexford, Maryvale 43.750072 -79.295849
|
doc/nb/Fiddling_about.ipynb | ###Markdown
Fiddling about a bit
###Code
# imports
from pkg_resources import resource_filename
from astropy.table import Table
from astropy.coordinates import SkyCoord
from astropy import units as u
###Output
_____no_output_____
###Markdown
Load up
###Code
DM_file = resource_filename('pulsars', 'data/atnf_cat/DM_cat_v1.56.dat')
DMs = Table.read(DM_file, format='ascii')
DMs
###Output
_____no_output_____
###Markdown
Coords
###Code
coords = SkyCoord(ra=DMs['RAJ'], dec=DMs['DECJ'], unit=(u.hourangle, u.deg))
###Output
_____no_output_____
###Markdown
Clouds Manchester+06
###Code
mfl = DMs['Pref'] == 'mfl+06'
DMs[mfl]
###Output
_____no_output_____
###Markdown
LMC coords
###Code
lmc_distance = 50 * u.kpc
lmc_coord = SkyCoord('J052334.6-694522', unit=(u.hourangle, u.deg),
distance=lmc_distance)
lmc_coord.separation(coords[mfl]).to('deg').value
###Output
_____no_output_____
###Markdown
Others
###Code
close_to_lmc = lmc_coord.separation(coords) < 3*u.deg
DMs[close_to_lmc]
###Output
_____no_output_____ |
examples/reference/widgets/RadioBoxGroup.ipynb | ###Markdown
The ``RadioBoxGroup`` widget allows selecting from a list or dictionary of values using a set of checkboxes. It falls into the broad category of single-value, option-selection widgets that provide a compatible API and include the [``RadioButtonGroup``](RadioButtonGroup.ipynb), [``Select``](Select.ipynb) and [``DiscreteSlider``](DiscreteSlider.ipynb) widgets.
For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Param.ipynb).
Parameters:
For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
Core
* **``options``** (list or dict): A list or dictionary of options to select from
* **``value``** (object): The current value; must be one of the option values
Display
* **``disabled``** (boolean): Whether the widget is editable
* **``inline``** (boolean): Whether to arrange the items vertically in a column (``False``) or horizontally in a line (``True``)
* **``name``** (str): The title of the widget
___
###Code
radio_group = pn.widgets.RadioBoxGroup(name='RadioBoxGroup', options=['Biology', 'Chemistry', 'Physics'], inline=True)
radio_group
###Output
_____no_output_____
###Markdown
Like most other widgets, ``RadioBoxGroup`` has a value parameter that can be accessed or set:
###Code
radio_group.value
###Output
_____no_output_____
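###Markdown
A minimal sketch (not part of the original docs) of reacting to value changes from Python. It assumes the standard `param.watch`/`param.unwatch` API that Panel widgets inherit; the callback name is illustrative.
###Code
# Hedged example: print the new selection whenever the widget value changes.
def _on_change(event):  # illustrative callback name
    print('selected subject:', event.new)

watcher = radio_group.param.watch(_on_change, 'value')
radio_group.value = 'Chemistry'      # triggers the callback
radio_group.param.unwatch(watcher)   # detach when no longer needed
###Output
_____no_output_____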
###Markdown
Controls
The `RadioBoxGroup` widget exposes a number of options which can be changed from both Python and Javascript. Try out the effect of these parameters interactively:
###Code
pn.Row(radio_group.controls(jslink=True), radio_group)
###Output
_____no_output_____
###Markdown
The ``RadioBoxGroup`` widget allows selecting from a list or dictionary of values using a set of checkboxes. It falls into the broad category of single-value, option-selection widgets that provide a compatible API and include the [``RadioButtonGroup``](RadioButtonGroup.ipynb), [``Select``](Select.ipynb) and [``DiscreteSlider``](DiscreteSlider.ipynb) widgets.
For more information about listening to widget events and laying out widgets refer to the [widgets user guide](../../user_guide/Widgets.ipynb). Alternatively you can learn how to build GUIs by declaring parameters independently of any specific widgets in the [param user guide](../../user_guide/Param.ipynb). To express interactivity entirely using Javascript without the need for a Python server take a look at the [links user guide](../../user_guide/Param.ipynb).
Parameters:
For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).
Core
* **``options``** (list or dict): A list or dictionary of options to select from
* **``value``** (object): The current value; must be one of the option values
Display
* **``disabled``** (boolean): Whether the widget is editable
* **``inline``** (boolean): Whether to arrange the items vertically in a column (``False``) or horizontally in a line (``True``)
* **``name``** (str): The title of the widget
___
###Code
radio_group = pn.widgets.RadioBoxGroup(name='RadioBoxGroup', options=['Biology', 'Chemistry', 'Physics'], inline=True)
radio_group
###Output
_____no_output_____
###Markdown
Like most other widgets, ``RadioBoxGroup`` has a value parameter that can be accessed or set:
###Code
radio_group.value
###Output
_____no_output_____ |
examples/heat_conduction_1d_uniform_bar.ipynb | ###Markdown
In this code we will solve the heat equation using a PINN implemented with the DeepXDE library. The equation is as follows: $\frac{\partial u}{\partial t} = \alpha \nabla^2 u\;$, where $\nabla^2$ is the Laplacian differential operator, $\alpha$ is the thermal diffusivity constant and $u$ is the function (temperature) we want to approximate. In a unidimensional case we have: $\frac{\partial u(x, t)}{\partial t}$ = $\alpha \frac{\partial^2u(x,t)}{{\partial x}^2}\;$, $\;\;\;\; x \in [0, 1]\;$, $\;\;\;\; t \in [0, 1]\;$, with Dirichlet boundary conditions $u(0, t) = u(1, t) = 0\;$ and periodic (sinusoidal) initial conditions: $u(x, 0) = sin(n\pi x/L)\;$, $\;\;\;\; 0 < x < L\;$, $\;\;\;\; n = 1, 2, ...\;.$ This setup is a common problem in many differential equations textbooks and can be physically interpreted as the variation of temperature in a uniform and unidimensional bar over time. Here, the constant $\alpha$ is the thermal diffusivity (a property of the material the bar is made of) and $L$ is the length of the bar.
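###Markdown
For reference (not from the original notebook, but a standard textbook result): with these boundary and initial conditions the problem has the closed-form solution $u(x,t) = \sin(n\pi x/L)\, e^{-\alpha (n\pi/L)^{2} t}$, which is presumably what `gen_exact_solution()` samples so that the PINN output can be compared against it.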
###Code
if __name__ == "__main__":
# Problem parameters:
a = 0.4 # Thermal diffusivity
L = 1 # Length of the bar
n = 1 # Frequency of the sinusoidal initial conditions
    # Generate a dataset with the exact solution (if you don't have one):
gen_exact_solution()
# Solve the equation:
main()
###Output
c:\users\saransh\saransh_softwares\python_3.9\lib\site-packages\skopt\sampler\sobol.py:246: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+2542=2542.
warnings.warn("The balance properties of Sobol' points require "
c:\users\saransh\saransh_softwares\python_3.9\lib\site-packages\skopt\sampler\sobol.py:246: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+82=82.
warnings.warn("The balance properties of Sobol' points require "
c:\users\saransh\saransh_softwares\python_3.9\lib\site-packages\skopt\sampler\sobol.py:246: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+162=162.
warnings.warn("The balance properties of Sobol' points require "
###Markdown
In this code we will solve the heat equation using a PINN implemented with the DeepXDE library. The equation is as follows: $\frac{\partial u}{\partial t} = \alpha \nabla^2 u\;$, where $\nabla^2$ is the Laplacian differential operator, $\alpha$ is the thermal diffusivity constant and $u$ is the function (temperature) we want to approximate. In a unidimensional case we have: $\frac{\partial u(x, t)}{\partial t}$ = $\alpha \frac{\partial^2u(x,t)}{{\partial x}^2}\;$, $\;\;\;\; x \in [0, 1]\;$, $\;\;\;\; t \in [0, 1]\;$, with Dirichlet boundary conditions $u(0, t) = u(1, t) = 0\;$ and periodic (sinusoidal) initial conditions: $u(x, 0) = sin(n\pi x/L)\;$, $\;\;\;\; 0 < x < L\;$, $\;\;\;\; n = 1, 2, ...\;.$ This setup is a common problem in many differential equations textbooks and can be physically interpreted as the variation of temperature in a uniform and unidimensional bar over time. Here, the constant $\alpha$ is the thermal diffusivity (a property of the material the bar is made of) and $L$ is the length of the bar.
###Code
if __name__ == "__main__":
# Problem parameters:
a = 0.4 # Thermal diffusivity
    L = 1 # Length of the bar
n = 1 # Frequency of the sinusoidal initial conditions
    # Generate a dataset with the exact solution (if you don't have one):
gen_exact_solution()
# Solve the equation:
main()
###Output
Compiling model...
Building feed-forward neural network...
'build' took 0.051901 s
###Markdown
In this code we will solve the heat equation using a PINN implemented with the DeepXDE library. The equation is as follows: $\frac{\partial u}{\partial t} = \alpha \nabla^2 u\;$, where $\nabla^2$ is the Laplacian differential operator, $\alpha$ is the thermal diffusivity constant and $u$ is the function (temperature) we want to approximate. In a unidimensional case we have: $\frac{\partial u(x, t)}{\partial t}$ = $\alpha \frac{\partial^2u(x,t)}{{\partial x}^2}\;$, $\;\;\;\; x \in [0, 1]\;$, $\;\;\;\; t \in [0, 1]\;$, with Dirichlet boundary conditions $u(0, t) = u(1, t) = 0\;$ and periodic (sinusoidal) initial conditions: $u(x, 0) = sin(n\pi x/L)\;$, $\;\;\;\; 0 < x < L\;$, $\;\;\;\; n = 1, 2, ...\;.$ This setup is a common problem in many differential equations textbooks and can be physically interpreted as the variation of temperature in a uniform and unidimensional bar over time. Here, the constant $\alpha$ is the thermal diffusivity (a property of the material the bar is made of) and $L$ is the length of the bar.
###Code
if __name__ == "__main__":
# Problem parameters:
a = 0.4 # Thermal diffusivity
L = 1 # Length of the bar
n = 1 # Frequency of the sinusoidal initial conditions
    # Generate a dataset with the exact solution (if you don't have one):
gen_exact_solution()
# Solve the equation:
main()
###Output
c:\users\saransh\saransh_softwares\python_3.9\lib\site-packages\skopt\sampler\sobol.py:246: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+2542=2542.
warnings.warn("The balance properties of Sobol' points require "
c:\users\saransh\saransh_softwares\python_3.9\lib\site-packages\skopt\sampler\sobol.py:246: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+82=82.
warnings.warn("The balance properties of Sobol' points require "
c:\users\saransh\saransh_softwares\python_3.9\lib\site-packages\skopt\sampler\sobol.py:246: UserWarning: The balance properties of Sobol' points require n to be a power of 2. 0 points have been previously generated, then: n=0+162=162.
warnings.warn("The balance properties of Sobol' points require "
###Markdown
In this code we will solve the heat equation using a PINN implemented with the DeepXDE library. The equation is as follows: $\frac{\partial u}{\partial t} = \alpha \nabla^2 u\;$, where $\nabla^2$ is the Laplacian differential operator, $\alpha$ is the thermal diffusivity constant and $u$ is the function (temperature) we want to approximate. In a unidimensional case we have: $\frac{\partial u(x, t)}{\partial t}$ = $\alpha \frac{\partial^2u(x,t)}{{\partial x}^2}\;$, $\;\;\;\; x \in [0, 1]\;$, $\;\;\;\; t \in [0, 1]\;$, with Dirichlet boundary conditions $u(0, t) = u(1, t) = 0\;$ and periodic (sinusoidal) initial conditions: $u(x, 0) = sin(n\pi x/L)\;$, $\;\;\;\; 0 < x < L\;$, $\;\;\;\; n = 1, 2, ...\;.$ This setup is a common problem in many differential equations textbooks and can be physically interpreted as the variation of temperature in a uniform and unidimensional bar over time. Here, the constant $\alpha$ is the thermal diffusivity (a property of the material the bar is made of) and $L$ is the length of the bar.
###Code
if __name__ == "__main__":
# Problem parameters:
a = 0.4 # Thermal diffusivity
    L = 1 # Length of the bar
n = 1 # Frequency of the sinusoidal initial conditions
    # Generate a dataset with the exact solution (if you don't have one):
gen_exact_solution()
# Solve the equation:
main()
###Output
Compiling model...
Building feed-forward neural network...
'build' took 0.051901 s
|
archive/2018/demo5.ipynb | ###Markdown
Decision trees (example from sklearn)
###Code
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = model_selection.train_test_split(iris.data, iris.target, test_size=0.33, random_state=3)
clf = tree.DecisionTreeClassifier(max_depth=2)
clf = clf.fit(X_train, y_train)
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
predictions = clf.predict(X_train)
print ('Accuracy: %d ' % ((np.sum(y_train == predictions))/float(y_train.size)*100))
###Output
Accuracy: 97
###Markdown
Increasing the depth...
###Code
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X_train, y_train)
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
predictions = clf.predict(X_train)
print ('Accuracy: %d ' % ((np.sum(y_train == predictions))/float(y_train.size)*100))
###Output
Accuracy: 100
###Markdown
And what if we look at the accuracy over the test data?
###Code
predictions = clf.predict(X_test)
print ('Accuracy: %d ' % ((np.sum(y_test == predictions))/float(y_test.size)*100))
###Output
Accuracy: 96
|
Tutorials/Pandas/Method Chaining.ipynb | ###Markdown
Introduction
Congratulations! In this section we will put all of the things that we learned together to do some truly interesting things with some datasets. The exercises in this section are therefore more difficult! While working through the exercises, try using method chaining syntax (use the resource below if you don't know what method chaining means). Also, take advantage of the hints we provide.
Relevant Resource
* [Method chaining resource](https://www.kaggle.com/residentmario/method-chaining-reference)
Set Up
**First, fork this notebook using the "Fork Notebook" button towards the top of the screen.**
Run the code cell below to load data and the libraries you'll use.
###Code
import pandas as pd
pd.set_option('max_rows', 5)
import sys
sys.path.append('../input/advanced-pandas-exercises/')
from method_chaining import *
chess_games = pd.read_csv("../input/chess/games.csv")
###Output
_____no_output_____
###Markdown
Checking Answers
Check your answers in each of the exercises that follow using the `check_qN` function provided in the code cell above (replacing `N` with the number of the exercise). For example, here's how you would check an incorrect answer to exercise 1:
###Code
check_q1(pd.DataFrame())
###Output
_____no_output_____
###Markdown
For the first set of questions, if you use `check_qN` on your answer and your answer is right, a simple `True` value will be returned. For the second set of questions, using this function to check a correct answer will present you with an informative graph! If you get stuck, you may also use the companion `answer_qN` function to print the answer outright.
Preview Data
Run the cell below to preview the data.
###Code
chess_games.head()
###Output
_____no_output_____
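###Markdown
Before the exercises, here is a tiny illustration (not part of the original workbook, and not using the chess data) of what the method chaining style referenced above looks like: each operation returns a new DataFrame, so the steps can be written as one readable pipeline.
###Code
# Hedged sketch of method chaining on a toy DataFrame.
demo = pd.DataFrame({'points': [81, 95, 88, 100]})
(demo
    .assign(stars=lambda d: (d['points'] - 80) / 4)  # derive a column
    .query('stars >= 2')                             # filter rows
    .sort_values('stars', ascending=False))          # sort, all in one chain
###Output
_____no_output_____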
###Markdown
Exercises
**Exercise 1**: It's well-known that in the game of chess, white has a slight first-mover advantage against black. Can you measure this effect in this dataset? Use the `winner` column to create a `pandas` `Series` showing how often white wins, how often black wins, and how often the result is a tie, as a ratio of total games played. In other words, a `Series` that looks something like this:
    white    0.48
    black    0.44
    draw     0.08
    Name: winner, dtype: float64
Hint: use `len` to get the length of the initial `DataFrame`, e.g. the count of all games played.
###Code
temp = chess_games.winner.value_counts()/len(chess_games)
print (check_q1(temp), '\n\n', temp)
###Output
True
white 0.498604
black 0.454033
draw 0.047363
Name: winner, dtype: float64
###Markdown
**Exercise 2**: The `opening_name` field of the `chess_games` dataset provides interesting data on what the most commonly used chess openings are. However, it gives a bit _too_ much detail, including information on the variation used for the most common opening types. For example, rather than giving `Queen's Pawn Game`, the dataset often includes `Queen's Pawn Game: Zukertort Variation`. This makes it a bit difficult to use for categorical purposes. Here's a function that can be used to separate out the "opening archetype":
```python
lambda n: n.split(":")[0].split("|")[0].split("#")[0].strip()
```
Use this function to parse the `opening_name` field and generate a `pandas` `Series` counting how many times each of the "opening archetypes" gets used. Hint: use a map.
###Code
temp = chess_games.opening_name.map(lambda n: n.split(":")[0].split("|")[0].split("#")[0].strip()).value_counts()
print (check_q2(temp), '\n\n', temp)
###Output
True
Sicilian Defense 2632
French Defense 1412
...
Valencia Opening 1
Pterodactyl Defense 1
Name: opening_name, Length: 143, dtype: int64
###Markdown
**Exercise 3**: In this dataset, various players play varying numbers of games. Group the games by `{white_id, victory_status}` and count how many times each white player ended the game in `mate`, `draw`, `resign`, etcetera. The name of the column counting how many times each outcome occurred should be `n` (hint: `rename` or `assign` may help).
###Code
temp = chess_games.assign(n=0).groupby(['white_id', 'victory_status']).n.apply(len).reset_index()
temp
###Output
_____no_output_____
###Markdown
**Exercise 4**: There are a lot of players in the dataset who have only played one or a small handful of games. Create a `DataFrame` like the one in the previous exercise, but only include users who are in the top 20 users by number of games played. See if you can do this using method chaining alone! Hint: reuse the code from the previous example. Then, use `pipe`.
###Code
#chess_games.white_id.value_counts().sort_index()
#temp['white_id'].value_counts().iloc[:20]
temp = temp.pipe(lambda x: x.loc[x.white_id.isin(chess_games.white_id.value_counts().head(20).index)])
print (check_q4(temp), '\n\n', temp)
chess_games.white_id.value_counts().head(20).index
###Output
_____no_output_____
###Markdown
Next, let's do some visual exercises. The next exercise uses the following dataset:
###Code
kepler = pd.read_csv("../input/kepler-exoplanet-search-results/cumulative.csv")
kepler
###Output
_____no_output_____
###Markdown
**Exercise 5**: The Kepler space observatory is in the business of finding potential exoplanets (planets orbiting stars other than our sun) and, after collecting the evidence, deciding whether to confirm, decline to confirm, or deny that a given space body is, in fact, an exoplanet. In the dataset above, the "before" status of the body is `koi_pdisposition`, and the "after" status is `koi_disposition`. Using the dataset above, generate a `Series` counting all of the possible transitions between pre-disposition and post-disposition. In other words, generate a `Series` whose index is a `MultiIndex` based on the `{koi_pdisposition, koi_disposition}` fields, and whose values count how many times each possible combination occurred.
###Code
kepler.koi_disposition.unique()
check_q5(kepler.groupby(['koi_pdisposition', 'koi_disposition']).rowid.count())
###Output
_____no_output_____
###Markdown
The next few exercises use the following datasets:
###Code
wine_reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
wine_reviews.head()
ramen_reviews = pd.read_csv("../input/ramen-ratings/ramen-ratings.csv", index_col=0)
ramen_reviews.head()
###Output
_____no_output_____
###Markdown
**Exercise 6**: As we demonstrated in previous workbooks, the `points` column in the `wine_reviews` dataset is measured on a 20-point scale between 80 and 100. Create a `Series` which normalizes the ratings so that they fit on a 1-to-5 scale instead (e.g. a score of 80 translates to 1 star, while a score of 100 is five stars). Set the `Series` name to "Wine Ratings", and sort by index value (ascending).
###Code
temp2 = wine_reviews.points.map(lambda x: (x-80)/4).value_counts().sort_index().rename_axis("Wine Ratings")
print (check_q6(temp2))
#check_q6(pd.Series(temp2, name='Wine Ratings'))
#wine_reviews.points.sort_values().plot.hist()
###Output
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
**Exercise 7**: The `Stars` column in the `ramen_reviews` dataset is the ramen equivalent to the similar data points in `wine_reviews`. Luckily it is already on a 0-to-5 scale, but it has some different problems... Create a `Series` counting how many ramens earned each of the possible scores in the dataset. Convert the `Series` to the `float64` dtype and drop ramens whose rating is `"Unrated"`. Set the name of the `Series` to "Ramen Ratings". Sort by index value (ascending).
###Code
check_q7(ramen_reviews.Stars.replace('Unrated', None).dropna().astype('float64').value_counts().sort_index().rename_axis("Ramen Ratings"))
#answer_q7()
###Output
_____no_output_____
###Markdown
**Exercise 8**: We can see from the result of the previous exercise that whilst the wine reviewers stick to a strict 20-point scale, ramen reviews occasionally deviate into fractional numbers. Modify your answer to the previous exercise by rounding review scores to the nearest half-point (so 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, or 5).
###Code
round(3.7, 0)
check_q8(ramen_reviews.Stars.replace('Unrated', None).dropna().astype('float64').map(lambda x: int(x) if x - int(x) < 0.5 else int(x) + 0.5).value_counts().sort_index().rename_axis("Ramen Reviews"))
###Output
_____no_output_____ |
Netflix Stock Price Prediction/Netflix Stock Price Prediction using Pytorch and RNN.ipynb | ###Markdown
Netflix Stock Price Prediction
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import warnings
warnings.simplefilter("ignore")
df = pd.read_csv('NFLX_data.csv')
df.sort_values('Date',inplace=True)
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3418 entries, 0 to 3417
Data columns (total 7 columns):
Date 3418 non-null object
Open 3418 non-null float64
High 3418 non-null float64
Low 3418 non-null float64
Close 3418 non-null float64
Adj Close 3418 non-null float64
Volume 3418 non-null int64
dtypes: float64(5), int64(1), object(1)
memory usage: 213.6+ KB
###Markdown
No missing values found.
###Code
df.plot(x='Date',y='Close',figsize=(16,8))
close = df[['Close']]
from sklearn.preprocessing import MinMaxScaler
mm = MinMaxScaler(feature_range=(-1, 1))
close['Close'] = mm.fit_transform(close['Close'].values.reshape(-1,1))
close.head(3)
raw = close.as_matrix()
print('Shape: ',raw.shape)
print('')
print(raw[:5])
lookback = 30
data = []
for index in range(len(raw) - lookback):
data.append(raw[index: index + lookback])
data = np.array(data)
print(data.shape)
test_size = int(np.round(0.2*data.shape[0]))
train_size = data.shape[0] - (test_size)
x_train = data[:train_size,:-1,:]
y_train = data[:train_size,-1,:]
x_test = data[train_size:,:-1]
y_test = data[train_size:,-1,:]
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# make training and test sets in torch
x_train = torch.from_numpy(x_train).type(torch.Tensor)
x_test = torch.from_numpy(x_test).type(torch.Tensor)
y_train = torch.from_numpy(y_train).type(torch.Tensor)
y_test = torch.from_numpy(y_test).type(torch.Tensor)
n_steps = lookback - 1
batch_size = 1000
epochs = 120
train = torch.utils.data.TensorDataset(x_train,y_train)
test = torch.utils.data.TensorDataset(x_test,y_test)
train_loader = torch.utils.data.DataLoader(dataset=train,
batch_size=batch_size,
shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test,
batch_size=batch_size,
shuffle=False)
input_dim = 1
hidden_dim = 36
num_layers = 2
output_dim = 1
class LSTM(nn.Module):
def __init__(self, input_dim, hidden_dim, num_layers, output_dim):
super(LSTM, self).__init__()
self.hidden_dim = hidden_dim
self.num_layers = num_layers
self.lstm = nn.LSTM(input_dim, hidden_dim, num_layers, batch_first=True)
self.fc = nn.Linear(hidden_dim, output_dim)
def forward(self, x):
h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_dim).requires_grad_()
out, (hn, cn) = self.lstm(x, (h0.detach(), c0.detach()))
out = self.fc(out[:, -1, :])
return out
model = LSTM(input_dim=input_dim, hidden_dim=hidden_dim, output_dim=output_dim, num_layers=num_layers)
loss_fn = torch.nn.MSELoss(size_average=True)
optimiser = torch.optim.Adam(model.parameters(), lr=0.007)
print(model)
print(len(list(model.parameters())))
for i in range(len(list(model.parameters()))):
print(list(model.parameters())[i].size())
lis = np.zeros(epochs)
# Number of steps to unroll
seq_dim =lookback-1
for t in range(epochs):
y_train_pred = model(x_train)
loss = loss_fn(y_train_pred, y_train)
if t % 10 == 0 and t !=0:
print("Epoch ", t, "MSE: ", loss.item())
lis[t] = loss.item()
optimiser.zero_grad()
loss.backward()
optimiser.step()
prd = mm.inverse_transform(y_train_pred.detach().numpy())
org = mm.inverse_transform(y_train.detach().numpy())
plt.plot(prd, label="Preds")
plt.plot(org, label="Data")
plt.legend()
plt.show()
plt.plot(lis, label="Training loss")
plt.legend()
plt.show()
np.shape(y_train_pred)
import math
from sklearn.metrics import mean_squared_error
from math import sqrt
# make predictions
y_test_pred = model(x_test)
# invert predictions
y_train_pred = mm.inverse_transform(y_train_pred.detach().numpy())
y_train = mm.inverse_transform(y_train.detach().numpy())
y_test_pred = mm.inverse_transform(y_test_pred.detach().numpy())
y_test = mm.inverse_transform(y_test.detach().numpy())
# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(y_train[:,0], y_train_pred[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(y_test[:,0], y_test_pred[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
# shift train predictions for plotting
trainPredictPlot = np.empty_like(close)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[lookback:len(y_train_pred)+lookback, :] = y_train_pred
# shift test predictions for plotting
testPredictPlot = np.empty_like(close)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(y_train_pred)+lookback-1:len(close)-1, :] = y_test_pred
# plot baseline and predictions
plt.figure(figsize=(15,8))
plt.plot(mm.inverse_transform(close),label='Actual Values')
plt.plot(trainPredictPlot,label='Training Predictions')
plt.plot(testPredictPlot,label='Test Predictions')
plt.legend()
plt.show()
###Output
_____no_output_____ |
RFCPY/.ipynb_checkpoints/exemplo2-checkpoint.ipynb | ###Markdown
Random Forest from scratch (using the Adult dataset from UCI)A modified version of a modified version of:Decision Tree from Scratch, Rakend Dubba (Computational Engineer | Data Scientist).*Source:* https://medium.com/@rakendd/decision-tree-from-scratch-9e23bcfb4928.This example is a basic refinement of *exemplo1*. Done:1) Using bagging, draw a group of $N_B$ random samples ($x_i, i = 1,... ,N_B$) with replacement for each tree, for all $M$ trees.2) Each tree has a maximum limit of $s_{MAX}$ split levels. Because there are categorical features with more than two values, each level may have more than two nodes. *The limit on splits is enforced along each path, i.e., following the same sequence until the limit is reached.5) The ensemble model is based on voting; it is possible to use either majority or soft voting. The choice is made when calling the predict function. *For now, only soft voting is used.In the final version, we will have:3) Each tree receives $K = s_{MAX}$ random features out of all $p$ features.4) There are two alternatives for splitting on numeric features: using an entropy criterion or random splitting between the max/min values. For categorical features, every value receives a node.
###Code
import re
import numpy as np
import pandas as pd
eps = np.finfo(float).eps
from numpy import log2 as log
from tabulate import tabulate as tb
from anytree import Node, RenderTree
from anytree import search as anys
from anytree.exporter import DotExporter
from IPython.display import Image
###Output
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
###Markdown
Load dataset:
###Code
features = ["Age", "Workclass", "fnlwgt", "Education", "Education-Num", "Marital Status",
"Occupation", "Relationship", "Race", "Sex", "Capital Gain", "Capital Loss",
"Hours per week", "Country", "Target"]
train_data = pd.read_csv(
#"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
"adult.data",
names=features,
sep=r'\s*,\s*',
engine='python',
na_values="?").dropna()
Target = 'Target'
Labels = train_data.Target.unique()
counts = train_data.Target.value_counts()
print(counts)
test_data = pd.read_csv(
#"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
"adult.test_fix",
names=features,
sep=r'\s*,\s*',
skiprows=[0],
engine='python',
na_values="?").dropna()
Labels = test_data.Target.unique()
counts = test_data.Target.value_counts()
print(counts)
def find_entropy(df):
entropy = 0
values = df[Target].unique()
for value in values:
temp = df[Target].value_counts()[value]/len(df[Target])
entropy += -temp*np.log2(temp)
return entropy
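# Sanity check (added sketch): a perfectly balanced binary target has entropy of 1 bit, e.g.
# _toy = pd.DataFrame({Target: ['<=50K', '>50K'] * 10})
# find_entropy(_toy)  # -> 1.0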
def find_entropy_attribute(df,attribute):
if not np.issubdtype(df[attribute].dtype, np.number):
return find_entropy_attribute_not_number(df,attribute), None
else:
return find_entropy_attribute_number(df,attribute)
def find_entropy_attribute_not_number(df,attribute):
target_variables = df[Target].unique() #This gives all 'Yes' and 'No'
variables = df[attribute].unique() #This gives different features in that attribute (like 'Hot','Cold' in Temperature)
entropy2 = 0
for variable in variables:
entropy = 0
for target_variable in target_variables:
num = len(df[attribute][df[attribute]==variable][df[Target] ==target_variable])
den = len(df[attribute][df[attribute]==variable])
fraction = num/(den+eps)
entropy += -fraction*log(fraction+eps)
entropy2 += -(den/len(df))*entropy
return abs(entropy2)
def find_entropy_attribute_number(df,attribute):
target_variables = df[Target].unique() #This gives all 'Yes' and 'No'
variables = df[attribute].unique() #This gives different features in that attribute (like 'Hot','Cold' in Temperature)
variables.sort()
if len(variables)>2:
variables = variables[1:-1]
vk3 = variables[0]
entropy3 = 0
else:
vk3 = variables[0]
entropy3 = np.Inf
for vk in variables:
entropy = 0
for target_variable in target_variables:
num = len(df[attribute][df[attribute]<=vk][df[Target] ==target_variable])
den = len(df[attribute][df[attribute]<=vk])
fraction = num/(den+eps)
entropy += -fraction*log(fraction+eps)
for target_variable in target_variables:
num = len(df[attribute][df[attribute]>vk][df[Target] ==target_variable])
den = len(df[attribute][df[attribute]>vk])
fraction = num/(den+eps)
entropy += -fraction*log(fraction+eps)
entropy2 = (den/len(df))*abs(entropy)
#print(str(entropy2)+"|"+str(vk))
if entropy2>entropy3:
entropy3 = entropy2
vk3 = vk
return abs(entropy3),vk3
def find_winner(df):
IG = []
vk = list()
for key in df.columns.difference([Target]):
temp,temp2 = find_entropy_attribute(df,key)
vk.append(temp2)
IG.append(find_entropy(df)-temp)
return df.columns.difference([Target])[np.argmax(IG)], vk[np.argmax(IG)]
def print_result_node(node,value,classe,prob):
print(node +' : '+value+' : '+classe+' ('+str(prob)+')')
def buildtree(df,tree=None, mytree=None, T_pro=0.9, T_pro_num=0.6,total_splits=10,splits=1):
def ramificatree(Thd,ss):
if (len(clValue)==1):
tree[node][value] = {}
tree[node][value]['Class'] = clValue[0]
tree[node][value]['Prob'] = 1.0
#print_result_node(node,value,clValue[0],1)
else:
prob = counts.max() / counts.sum()
if (prob>=Thd)or(splits>=total_splits):
tree[node][value] = {}
tree[node][value]['Class'] = clValue[counts.argmax()]
tree[node][value]['Prob'] = prob
#print_result_node(node,value,clValue[counts.argmax()],prob)
else:
ss += 1
tree[node][value] = buildtree(subtable,splits=ss)
#print(node +' : '+value+' : *')
#print(find_winner(df))
#formata_dados(dados)
node,vk = find_winner(df)
if tree is None:
tree={}
tree[node] = {}
if vk is None:
attValue = np.unique(df[node])
for value in attValue:
subtable = df[df[node] == value].reset_index(drop=True)
clValue,counts = np.unique(subtable[Target],return_counts=True)
splits += 1
ramificatree(T_pro,ss=splits)
else:
if (len(df[node][df[node] <= vk].unique())>0) and (len(df[node][df[node] > vk].unique())>0):
# >vk
value = node+' >'+str(vk)
subtable = df[df[node] > vk].rename(columns = {node:value}).reset_index(drop=True)
clValue,counts = np.unique(subtable[Target],return_counts=True)
if (len(subtable[value].unique())==1) and (len(clValue)>1):
tree[node][value] = {}
tree[node][value]['Class'] = clValue[counts.argmax()]
prob = counts.max() / counts.sum()
tree[node][value]['Prob'] = prob
#print_result_node(node,value,clValue[counts.argmax()],prob)
else:
splits += 1
ramificatree(T_pro_num,ss=splits)
clValue_antes = clValue[0]
value_antes = value
# <=vk
value = node+' <='+str(vk)
subtable = df[df[node] <= vk].rename(columns = {node:value}).reset_index(drop=True)
clValue,counts = np.unique(subtable[Target],return_counts=True)
if ((len(subtable[value].unique())==1) and (len(clValue)>1)):
tree[node][value] = {}
tree[node][value]['Class'] = clValue[counts.argmax()]
prob = counts.max() / counts.sum()
tree[node][value]['Prob'] = prob
#print_result_node(node,value,clValue[counts.argmax()],prob)
else:
splits += 1
ramificatree(T_pro_num,ss=splits)
else:
df[node] = df[node].astype(str)
buildtree(df)
return tree
# Only to see
def print_tree(arg):
for pre, fill, node in RenderTree(arg):
print("%s%s" % (pre, node.name))
def converte_para_anytree(tree,node=None,mytree=None):
if node is None:
temp = list(tree.keys())
node = temp[0]
mytree = {}
mytree[node] = Node(node)
converte_para_anytree(tree,node,mytree)
else:
tree = tree[node]
if not isinstance(tree, str):
childs = list(tree.keys())
for child in childs:
if (list(tree[child])[0] == 'Class'):
temp = mytree[node]
mytree[child] = Node(child, parent=temp, target=tree[child]['Class'], prob=tree[child]['Prob'])
else:
temp = mytree[node]
mytree[child] = Node(child, parent=temp)
converte_para_anytree(tree,child,mytree)
else:
mytree[node] = 'Fim'
return mytree
#anys.findall_by_attr(mytree['Taste'], name="target", value='Yes')
def mostra_tree(tree):
mytree = converte_para_anytree(tree)
temp = list(tree.keys())
root = temp[0]
mytree[root]
for pre, fill, node in RenderTree(mytree[root]):
txt_node = str(node)
m = re.search('prob\=\d+\.\d+', txt_node)
if Labels[0] in txt_node:
if not m is None:
print("%s%s" % (pre, node.name+': '+Labels[0]+' ('+m.group()[5:]+')'))
else:
print("%s%s" % (pre, node.name+': '+Labels[0]+' (?)'))
elif Labels[1] in txt_node:
if not m is None:
print("%s%s" % (pre, node.name+': '+Labels[1]+' ('+m.group()[5:]+')'))
else:
print("%s%s" % (pre, node.name+': '+Labels[1]+' (?)'))
else:
print("%s%s" % (pre, node.name))
def mostra_tree_graph(tree, largura=None, altura=None):
mytree = converte_para_anytree(tree)
temp = list(tree.keys())
root = temp[0]
mytree[root]
DotExporter(mytree[root]).to_picture("tree.png")
return Image(filename='tree.png', width=largura, height=altura)
def predict(inst,tree):
for node in tree.keys():
if ('<=' in str(tree[node].keys())):
childs = list(tree[node].keys())
if ('<=' in childs[1]):
temp = childs[1]
childs[1] = childs[0]
childs[0] = temp
vk = float(childs[1].split('>')[1])
if ('>' in node):
valor = float(str(inst[node.split('>')[0][:-1]]))
elif ('<=' in node):
valor = float(str(inst[node.split('<')[0][:-1]]))
else:
valor = float(str(inst[node]))
if (valor > vk):
tree = tree[node][childs[1]]
prediction = None
prob = None
if (list(tree)[0] != 'Class'):
prediction,prob = predict(inst, tree)
else:
prediction = tree['Class']
prob = tree['Prob']
break;
else:
tree = tree[node][childs[0]]
prediction = None
prob = None
if (list(tree)[0] != 'Class'):
prediction,prob = predict(inst, tree)
else:
prediction = tree['Class']
prob = tree['Prob']
break;
else:
value = str(inst[node])
if value in tree[node].keys():
tree = tree[node][value]
prediction = None
prob = None
if (list(tree)[0] != 'Class'):
prediction,prob = predict(inst, tree)
else:
prediction = tree['Class']
prob = tree['Prob']
break;
else:
prediction = 'Not exists node: '+value
prob = 0
return prediction, prob
def predict_forest(arg,forest):
prob_yes = 0
prob_no = 0
for tree in forest:
result = predict(arg,tree)
if (result[0] == arg.Target):
prob_yes += result[1]
else:
prob_no += 1-result[1]
return prob_yes, prob_no
def test_step_prob(arg,tree):
P = 0;
S = 0
for i in range(0,len(arg)):
S += (predict(arg.iloc[i],tree)[0] == arg.iloc[i].Target)*1
P += predict(arg.iloc[i],tree)[1]
S = S / len(arg)
P = P / len(arg)
print(str(S)+' ('+str(P)+')')
def test_step(arg,tree):
NO = 0;
YES = 0
for i in range(0,len(arg)):
if (predict(arg.iloc[i],tree)[0] == arg.iloc[i].Target):
YES += 1
else:
NO += 1
YES = YES / len(arg)
NO = NO / len(arg)
#print("YES: "+str(YES)+'. NO: '+str(NO)+'.')
return YES,NO
def test_step_forest(arg,forest):
NO = 0;
YES = 0
for i in range(0,len(arg)):
result = predict_forest(arg.loc[i],forest)
if result[0]>result[1]:
YES += 1
else:
NO += 1
YES = YES / len(arg)
NO = NO / len(arg)
#print("YES: "+str(YES)+'. NO: '+str(NO)+'.')
return YES,NO
# Bagging functions:
def formata_dados(dados):
for chave in dados.keys():
if not np.issubdtype(dados[chave].dtype, np.number):
dados[chave] = dados[chave].astype(str)
elif (len(dados[chave].unique())<5):
dados[chave] = dados[chave].astype(str)
return dados
def amostra_dados(dados,n_samples):
dados2 = dados.loc[dados[Target]==Labels[0]].sample(int(n_samples/2))
dados2 = dados2.append(dados.loc[dados[Target]==Labels[1]].sample(int(n_samples/2)), ignore_index=True).reset_index(drop=True)
return formata_dados(dados2)
n_samples=40
forest = list()
M = 250
for m in range(0,M):
print(str(m+1)+'/'+str(M), end='\r')
train_bag = amostra_dados(train_data,n_samples)
forest.append(buildtree(train_bag,T_pro=0.8, T_pro_num=0.8))
n_samples_test = 1000
test_bag = amostra_dados(test_data,n_samples_test)
values_tree = np.empty((M,2))
m=0
for tree in forest:
result = test_step(test_bag,tree)
values_tree[m][0] = result[0]
values_tree[m][1] = result[1]
m+=1
values_forest = test_step_forest(test_bag,forest)
mean_tree = round(values_tree[:,0].mean(),4)
std_tree = round(values_tree[:,0].std(),4)
print("\n")
print(tb([['Trees', "{:.2f}".format(mean_tree)], ['Forest ', "{:.2f}".format(values_forest[0])]],
headers=["Method", "Precision (%)"], tablefmt='orgtbl'))
mean_tree = round(values_tree[:,0].mean(),4)
std_tree = round(values_tree[:,0].std(),4)
print("\n")
print(tb([['Trees', "{:.2f}".format(mean_tree)], ['Forest ', "{:.2f}".format(values_forest[0])]],
headers=["Method", "Precision (%)"], tablefmt='orgtbl'))
size_tree = np.empty((M,1))
m=0
for tree in forest:
size_tree[m] = len(str(tree))
m+=1
test_step(test_bag,forest[size_tree.argmin()])
mostra_tree_graph(forest[size_tree.argmin()])
mostra_tree(forest[size_tree.argmin()])
test_step(test_bag,forest[size_tree.argmax()])
mostra_tree_graph(forest[size_tree.argmax()])
mostra_tree(forest[size_tree.argmax()])
test_bag.dtypes
###Output
_____no_output_____ |
Numerical_analysis/Test/Test_2/BFVM19DATASC2_I_DataScience2_1920_DSLS_LADR.ipynb | ###Markdown
Data Science 2 (modeling) Computer-exam BFVM19DATASC2 (irregular opp) Tue. 26 Jan 2021, 08:30-11:30, BB-Collaborate**Materials:**On your computer desktop you will find all data files and supplementary materials.* `BFVM19DATASC2_I_DataScience2_1920_DSLS_HEMI-LADR-WATS.ipynb`* `neuron.csv`* ...All notes, textbooks and other written reference materials are permitted.**Instructions:**This exam consists of three parts that can in principle be answered separately. All questions have the possible number of points to be scored indicated. Your grade will be calculated as follows:$$\text{Grade} = 1 + 9 \cdot \frac {\text{Points Scored}} {\text{Maximum Score}}$$Provide your answers in the code cells corresponding with each of the questions. For those questions that require a textual answer rather than python code, you may either type your answer in the cell using a python comment or insert a new markdown cell with your formatted text. You can receive partial credit on textual answers as well as code if you don't get the whole right answer. Be sure to explain your code through commenting, even if it doesn't work correctly.After finishing:Rename your notebook with your name and student number, like `JohnDoe_123456`, using the menu option `File` > `Rename`.Evaluate the notebook by means of the menu option `Kernel` > `Restart & Run All` and check that your notebook runs without errors.Save the evaluated notebook using the menu option `File` > `Save and Checkpoint`.Submit your saved file on Blackboard using the `Assignment submission` item. *** Part I: Graph theory [30 pts] Question 1a [5 pts]Bla bla bla Part II: Numerical analysis [30 pts]Below, you will investigate the behavior of the *FitzHugh-Nagumo* (FHN) model that can be used to crudely model the spiking behaviour of a single neuron in the central nervous system when stimulated with excitatory input. The first-order differential equations for the FHN model read [ref](http://www.scholarpedia.org/article/FitzHugh-Nagumo_model)$$\begin{aligned}\dot{V} &= V - \frac{V^3}{3} - W + I\\\dot{W} &= 0.08 \left( V + 0.7 - 0.8 W \right)\end{aligned}$$Here, the dotted variables $\dot{V}$ and $\dot{W}$ denote the derivatives of $V$ and $W$ with respect to time $t$ (so-called Newton's notation), and* $V$ is the neuron's membrane potential,* $W$ is a supplementary recovery variable,* $I$ is the magnitude of the stimulus current.It is an example of a *relaxation oscillator* because, if the external stimulus $I$ exceeds a certain threshold value, the system will exhibit a characteristic excursion called an *action potential* before the variables $V$ and $W$ relax back to their rest values. Question 2a [9 pts]Integrate the FHN model using the *Midpoint* method from the Runge-Kutta family of integration methods. Employ starting values $V=W=0$ and a step size $\Delta t = \frac{1}{2}$, and plot the membrane potential $V(t)$ from $t_0=0$ to $t_1=300$ that you obtain for no ($I=0.0$), weak ($I=0.3$) or strong ($I=0.6$) stimulus currents in a single graph.What is the order of the Midpoint method?Hint:Modify your implementation of Heun's method to obtain the Midpoint method.
###Code
import numpy as np
import matplotlib.pyplot as plt
def FHN(x, y, I = 0):
return np.array([
y[0] - (y[0]**3)/3 - y[1] +I ,
0.08*(y[0] + 0.7 - 0.8*y[1])
])
def midpoint(f, y0, x0, x1, steps, I):
h = (x1 - x0) / steps
xs = np.linspace(x0, x1, steps + 1)
y = y0
ys =[y]
for x in xs[:-1]:
k1 = f(x, y, I)
k2 = f(x + (h/2), y + (h/2)*k1, I)
y = y + h*(k2)
ys.append(y)
return xs, ys
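# Note (added): the Midpoint method is a second-order Runge-Kutta method
# (local truncation error O(h^3), global error O(h^2)).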
I = [0, 0.3, 0.6]
for i in I:
    xs, ys = midpoint(FHN, np.array([0.0, 0.0]), 0, 300, 600, i)  # 600 steps over [0, 300] gives the requested step size dt = 0.5
# print(ys)
plt.axhline(-0.0019242265446122067)
plt.plot(xs, ys)
plt.show()
###Output
_____no_output_____
###Markdown
Note:If you did not succeed in calculating neural signals according to the FHN model, import substitute data using `pandas.read_csv('./neuron.csv')`. Question 2b [7 pts]The average value $\bar{V}$ of the continuous signal $V(t)$ over an arbitrary interval $(t_0, t_1)$ can be determined by the expression$$\bar{V} = \frac{\int_{t_0}^{t_1} V(t) \text{d}t}{t_1-t_0}$$Given the sampled values $V(t)$ that you determined in **2a.**, determine the average value $\bar{V}$ of the membrane potential $V(t)$ between $t_0=100$ and $t_1=300$ for each of the three stimulus currents $I=0.0,0.3,0.6$ using *Simpson's integration rule* and report the three outcomes using three decimals.Would you generally prefer Simpson's rule to the trapezoidal rule? Explain why.
###Code
def simpson(f, a, b, r, n=100):
"""df = simpson(f, a, b, n=...).
Calculates the definite integral of the function f(x)
from a to b using the composite Simpson's
rule with n subdivisions (with default n=...).
"""
n += n % 2 # force to be even
h = (b -a) / n
I = f(a, r) + f(b, r)
for i in range(1, n, 2):
xi = a + i*h
I += 4*f(xi, r)
for i in range(2, n, 2):
xi = a + i*h
I += 2*f(xi, r)
I *= h/3
return I
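# Note (added): composite Simpson's rule has error O(h^4) versus O(h^2) for the composite
# trapezoidal rule, so for smooth integrands Simpson's rule is generally preferred at equal cost.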
def V(b, r):
prey = []
x, res = midpoint(FHN, np.array([0.0, 0.0]), 0, b, 501, r)
for i in range(len(res)):
prey.append(res[i][0])
return prey[::-1][0]
for i in I:
print('I:',i)
print( simpson( V, a = -2, b = 0, r = i)/200)
###Output
I: 0
-0.00024738658669877826
I: 0.3
-0.0019242265446122067
I: 0.6
-0.003666881535684628
###Markdown
Question 2c [7 pts]For sufficiently high values of the stimulus $I$, the system shows oscillatory behavior, whereas below a certain critical threshold it quickly achieves a stable equilibrium close to $V(t) \approx -1$ in which no excursions occur. The fact that $V$ and $W$ are stationary in such an equilibrium implies that $\dot{V}=\dot{W}=0$. The second FHN equation $\dot{W} = 0.08 \left( V + 0.7 - 0.8 W \right) = 0$ then results in $W = (V+0.7) / 0.8$, which can be substituted into the first FHN equation to obtain$$V - \frac{V^3}{3} - \frac{V+0.7}{0.8} + I = 0$$Find the static solution for the above equality for $V$ near -1 for $I=0.0$, $0.3$, and $0.6$ to at least 3 digits accuracy.Do your results agree with those from **2b.**? Explain your observations.
###Code
def func(x, I):
    return x - (x**3)/3 - (x + 0.7)/0.8 + I  # sign fixed to match V - V^3/3 - (V+0.7)/0.8 + I = 0 from the question
x = np.linspace(-5, 5, 400)
def rootsearch(f, a, b, steps, r):
"""lo, hi = rootsearch(f, a, b, steps).
Searches the interval (a,b) in a number of steps for
the bounds (lo,hi) of the roots of f(x).
"""
h = (b - a) / steps
f_lo = f(a, r)
for step in range(steps):
lo = a + step * h
hi = lo + h
f_hi = f(hi, r)
if f_lo * f_hi <= 0.0:
yield lo, hi
f_lo = f_hi
for i in I:
print('I:', i)
plt.plot(x, func(x, i))
plt.show()
print(list(rootsearch(func, -2, 2, 1000, i)))
###Output
I: 0
|
Send_more_money.ipynb | ###Markdown
Exercise 2: Polynomials
###Code
def p(x):
a = [10, 20, 0, 1, 23, 4]
s = 0.0
for i, ai in enumerate(reversed(a)):
s += ai * x ** i
return s
p(2)
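# Alternative sketch (added): Horner's rule evaluates the same polynomial with one
# multiplication per coefficient instead of computing x**i at every step.
def p_horner(x, a=[10, 20, 0, 1, 23, 4]):
    s = 0.0
    for ai in a:      # coefficients from highest to lowest degree
        s = s * x + ai
    return s
# p_horner(2) == p(2)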
###Output
_____no_output_____
###Markdown
SEND + MORE = MONEY
###Code
def validate(a, b, c, codex, chars):
stra = a
strb = b
strc = c
for i in range(len(codex)):
stra = stra.replace(chars[i], str(codex[i]))
strb = strb.replace(chars[i], str(codex[i]))
strc = strc.replace(chars[i], str(codex[i]))
if int(stra) + int(strb) == int(strc):
print(a, stra, b, strb, c, strc)
validate("SEND", "MORE", "MONEY", [7,6,4,9,0,8,1,5], "SENDMORY")
def combinations(digits, n, w, chars, codex, a, b, c):
if w == n:
validate(a, b, c, codex, chars)
else:
for i in range(len(digits)):
e = digits[i]
combinations(digits[:i] + digits[i+1:], n, w+1, chars, codex + [e], a, b, c)
def solve(a, b, c):
chars = list(set(a + b + c))
digits = [i for i in range(10)] #set quita los repetidos
n = len(chars)
combinations(digits, n, 0, chars, [], a, b, c)
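# More concise alternative sketch (added), using itertools.permutations; note that,
# like the original, it accepts leading zeros (e.g. M=0), which is why many solutions
# besides the classic 9567 + 1085 = 10652 are printed.
from itertools import permutations
def solve_itertools(a, b, c):
    chars = list(set(a + b + c))
    for perm in permutations(range(10), len(chars)):
        table = str.maketrans(''.join(chars), ''.join(map(str, perm)))
        if int(a.translate(table)) + int(b.translate(table)) == int(c.translate(table)):
            print(a, b, c, dict(zip(chars, perm)))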
solve("SEND", "MORE", "MONEY")
###Output
SEND 7429 MORE 0814 MONEY 08243
SEND 7539 MORE 0815 MONEY 08354
SEND 7649 MORE 0816 MONEY 08465
SEND 8432 MORE 0914 MONEY 09346
SEND 8542 MORE 0915 MONEY 09457
SEND 8324 MORE 0913 MONEY 09237
SEND 6853 MORE 0728 MONEY 07581
SEND 6419 MORE 0724 MONEY 07143
SEND 7531 MORE 0825 MONEY 08356
SEND 7643 MORE 0826 MONEY 08469
SEND 7534 MORE 0825 MONEY 08359
SEND 7316 MORE 0823 MONEY 08139
SEND 5849 MORE 0638 MONEY 06487
SEND 6851 MORE 0738 MONEY 07589
SEND 6524 MORE 0735 MONEY 07259
SEND 6415 MORE 0734 MONEY 07149
SEND 5731 MORE 0647 MONEY 06378
SEND 5732 MORE 0647 MONEY 06379
SEND 3719 MORE 0457 MONEY 04176
SEND 3829 MORE 0458 MONEY 04287
SEND 2817 MORE 0368 MONEY 03185
SEND 2819 MORE 0368 MONEY 03187
SEND 3821 MORE 0468 MONEY 04289
SEND 3712 MORE 0467 MONEY 04179
SEND 9567 MORE 1085 MONEY 10652
|
Stock_LSTM_day_1.ipynb | ###Markdown
Heat Map
###Code
sns.heatmap(stock_df1_1[['open','high','low']])
###Output
_____no_output_____
###Markdown
Histograms and Curve Distribution
###Code
fig, axes = plt.subplots(1,3, figsize=(15,5))
for name, ax in zip(['open', 'high', 'low'], axes):
sns.distplot(stock_df1_1[name], ax=ax)
###Output
_____no_output_____
###Markdown
Correlation
###Code
plt.matshow(stock_df1_1.corr())
plt.show()
###Output
_____no_output_____
###Markdown
Scatter Plot
###Code
plt.scatter(stock_df1_1['Day'],stock_df1_1['open'])
plt.scatter(stock_df1_1['Day'],stock_df1_1['high'])
plt.scatter(stock_df1_1['Day'],stock_df1_1['low'])
plt.legend(['Open','High','Low'])
plt.xlabel('Days')
plt.ylabel('Stock Rate')
plt.show()
###Output
_____no_output_____
###Markdown
Trend Line
###Code
plt.plot(stock_df1_1['open'].rolling(window=150, center=True, min_periods=30).mean())
plt.plot(stock_df1_1['high'].rolling(window=150, center=True, min_periods=30).mean())
plt.plot(stock_df1_1['low'].rolling(window=150, center=True, min_periods=30).mean())
plt.legend(['Open','High','Low'])
plt.title('Trend Line')
plt.xlabel('Days')
###Output
_____no_output_____
###Markdown
Splitting Data into Train/Test
###Code
def train_test_data(data):
x = np.array(data.iloc[:,:-1])
y = np.array(data.iloc[:,-1])
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size = 0.2, shuffle= True)
return (x_train, x_test, y_train, y_test)
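# Note (added): shuffle=True mixes past and future rows; for time-series forecasting a
# chronological split is often preferred, e.g. (sketch):
# split = int(len(x) * 0.8)
# x_train, x_test, y_train, y_test = x[:split], x[split:], y[:split], y[split:]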
x_train_df1, x_test_df1, y_train_df1, y_test_df1 = train_test_data(stock_df2)
x_train_df1.shape
y_train_df1.shape
X_train_df1 = x_train_df1.reshape((x_train_df1.shape[0],1, x_train_df1.shape[1]))
X_test_df1 = x_test_df1.reshape((x_test_df1.shape[0],1, x_test_df1.shape[1]))
###Output
_____no_output_____
###Markdown
LSTM
###Code
from keras.layers import LSTM
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation
lstm = Sequential()
lstm.add(LSTM(20, input_shape=(X_train_df1.shape[1], X_train_df1.shape[2])))
lstm.add(Dense(2, activation='sigmoid'))
lstm.add(Dense(y_train_df1.reshape(-1,1).shape[1]))
lstm.compile(loss='mae', optimizer='adam', metrics=['mean_squared_error'])
lstm.summary()
lstm.fit(X_train_df1, y_train_df1, epochs =20, verbose=1, batch_size=8,
validation_data=(X_test_df1,y_test_df1), shuffle=True)
predict = lstm.predict(X_test_df1)
# Output value is scaled. To get actual value undo scaled value of output
print('Scaled Value Predicted: %.2f' %predict[2])
print('Actual Predicted Value: %.2f'%out_scaler.inverse_transform([predict[2]]))
print('True Value: %.2f' %out_scaler.inverse_transform([[y_test_df1[2]]]))
lstm.save('lstm.h5')
###Output
_____no_output_____
###Markdown
Evaluation
###Code
print('R_2 Score: %.7f' %r2_score(y_test_df1, predict))
print('Mean Absolute Error: %.7f' %mean_absolute_error(y_test_df1, predict))
print('Mean Square Error: %.7f' %mean_squared_error(y_test_df1, predict))
print('Root Mean Square Error: %.7f' %np.sqrt(mean_squared_error(y_test_df1, predict)))
###Output
R_2 Score: 0.9915114
Mean Absolute Error: 0.0175962
Mean Square Error: 0.0005674
Root Mean Square Error: 0.0238209
###Markdown
Plot
###Code
f, ax = plt.subplots()
ax.plot([None] + lstm.history.history['loss'], 'o-' )
ax.plot([None] + lstm.history.history['val_loss'], 'x-')
ax.legend(['Train MAE', 'Valid MAE'], loc=1)
ax.set_title('Train/Validation Mean Absolute Error')
ax.set_xlabel('Epochs')
ax.set_ylabel('MAE')
f.show()
f, ax = plt.subplots()
ax.plot([None] + lstm.history.history['mean_squared_error'], 'o-' )
ax.plot([None] + lstm.history.history['val_mean_squared_error'], 'x-')
ax.legend(['Train MSE', 'Valid MSE'], loc=1)
ax.set_title('Train/Validation Mean Square Error')
ax.set_xlabel('Epochs')
ax.set_ylabel('MSE')
f.show()
plt.plot(y_test_df1[1:100], 'b')
plt.plot( predict[1:100], 'y')
plt.legend(['True', 'Pred'])
plt.title('Predicted vs True')
plt.xlabel('Samples')
plt.ylabel('Stock')
plt.show()
plt.figure(figsize=(12,4))
plt.subplot(1,2,1)
plt.hist(y_test_df1)
plt.title('True')
plt.subplot(1,2,2)
plt.hist(predict, color='grey')
plt.title('Predicted')
plt.show()
plt.boxplot(predict, showmeans=True)
plt.show()
###Output
_____no_output_____
###Markdown
Preprocessing Data
###Code
def Date_Time(dataFrame):
dateTime = dataFrame['date'].map(str)+dataFrame['time']
k = pd.to_datetime(dateTime, format='%Y%m%d%H:%M')
dataFrame['DateTime'] = k
dataFrame['Day'] = dataFrame['DateTime'].dt.day
dataFrame['Month'] = dataFrame['DateTime'].dt.month
dataFrame['Year'] = dataFrame['DateTime'].dt.year
#dataFrame['Hour'] = dataFrame['DateTime'].dt.hour
#dataFrame['Minute'] = dataFrame['DateTime'].dt.minute
dataFrame = dataFrame.drop(labels=['DateTime'], axis=1)
dataFrame['group']= dataFrame['Year'].map(str) + dataFrame['Month'].map(str)+ dataFrame['Day'].map(str)
dataFrame = dataFrame[['open', 'high', 'low', 'Day', 'Month', 'Year','group','close']]
dataFrame= dataFrame.sort_values(by=['Year','Month','Day'])
dataFrame= dataFrame.reset_index(drop=True)
return(dataFrame)
def processing(dataframe):
df = dataframe
    day_group = df['group'].unique() # extract unique day-group values formed from day, month and year
    d_group_index = np.arange(1,len(day_group)+1)# for reindexing day-group values from 1 to the number of groups.
    #As indexing starts from 0, 1 is added
    # replacing day-group values with the new indexing for extracting day groups
#(This step will take 20 minutes due to 3 hundred thousand samples)
# it is already done once and results are saved in file hour.npy
# so instead of running again, load this file
for i in range(len(day_group)):
df['group'] = df['group'].replace([day_group[i]],d_group_index[i])
df1 = pd.DataFrame(df, index= day_group) # this data frame has day group as index values for extracting its index
count_index = df['close'].groupby(df['group']).count() # counting each day group values
day_index = [] # extracting months index
w=0
for i in count_index:
w = i+w
day_index.append(w)
day_index = np.array(day_index) -1
# above steps are adding count values(in other words "commulative count_index")
# we need commulative count_index as count_index are absolute values from which required values cant be extracted
# extracting close values which is last value of each month group
close = []
for i in day_index:
t = df.loc[i,'close']
close.append(t)
close = np.array(close)
#extracting low, high, month, year values of each month group
low = pd.DataFrame(df['low'].groupby(df['group']).min()).reset_index(drop=True)
high = pd.DataFrame(df['high'].groupby(df['group']).max()).reset_index(drop=True)
Day = pd.DataFrame(df['Day'].groupby(df['group']).max()).reset_index(drop=True)
Month = pd.DataFrame(df['Month'].groupby(df['group']).max()).reset_index(drop=True)
Year = pd.DataFrame(df['Year'].groupby(df['group']).max()).reset_index(drop=True)
#extracting first value of open from each month group
openn = []
for i in (day_index-count_index+1):
r = df.loc[i,'open']
openn.append(r)
openn = np.array(openn)
#creating new data frame with extracted values
df2 = pd.DataFrame()
df2['open'] = openn
df2['high'] = high
df2['low'] = low
df2['Day'] = Day
df2['Month'] = Month
df2['Year'] = Year
df2['close'] = close
# rearranging data into ascending form
df2 = df2.sort_values(by=['Year','Month','Day'])
df2 = df2.reset_index(drop=True) # reset index
return(df2)
def scaling(dataFrame):
close = np.array(dataFrame['close']).reshape(-1,1)
stock_df = dataFrame.drop(labels=['Day','Month','Year','close'], axis = 1)
scaler = MinMaxScaler(feature_range=(0,1))
scaler.fit(stock_df)
scaled_df = scaler.transform(stock_df)
scaler2 = MinMaxScaler(feature_range=(0,1))
scaler2.fit(close)
scaled_close = scaler2.transform(close)
scaled_df = pd.DataFrame(scaled_df, columns=stock_df.columns)
scaled_df['close'] = scaled_close
return(scaled_df, scaler, scaler2)
stock_df1 = Date_Time(df_1)
stock_df1.head()
stock_df1_1 = processing(stock_df1)
stock_df1_1.head()
stock_df2, in_scaler, out_scaler = scaling(stock_df1_1)
stock_df2.head()
###Output
_____no_output_____
###Markdown
Data Plots Time Series Distribution For Month
###Code
sns.set(rc={'figure.figsize':(11,4)})
stock_df1_1[['open','high','low']].plot(linewidth=0.8, title='Days Series')
plt.xlabel('Days (2012-2016)')
plt.ylabel('Stock Rate')
cols_plot = ['open', 'high','low']
axes = stock_df1_1[cols_plot].plot(marker='o', alpha=0.8, linestyle='-', figsize=(11, 9), subplots=True)
for ax in axes:
ax.set_ylabel('Stock Rate')
ax.set_xlabel('Days (2012-2016)')
###Output
_____no_output_____
###Markdown
Box Pots
###Code
fig, axes = plt.subplots(3, 1, figsize=(12, 10), sharex=True)
for name, ax in zip(['open', 'high', 'low'], axes):
sns.boxplot(data=stock_df1_1, x='Day', y=name, ax=ax)
ax.set_ylabel('Stock Rate')
ax.set_title(name)
###Output
_____no_output_____ |
Crosstab.ipynb | ###Markdown
CrossTab Simple
###Code
import pandas as pd  # missing import; df with Nationality/Handedness/Sex/Age columns is assumed to be defined earlier
pd.crosstab(df.Nationality, df.Handedness)
pd.crosstab(df.Sex, df.Handedness)
###Output
_____no_output_____
###Markdown
With Margins
###Code
pd.crosstab(df.Sex, df.Handedness, margins=True)
###Output
_____no_output_____
###Markdown
Multi-Index Column and Rows
###Code
pd.crosstab(df.Sex, [df.Handedness, df.Nationality], margins=True)
###Output
_____no_output_____
###Markdown
Normalize
###Code
pd.crosstab(df.Sex, df.Handedness, normalize='index')
###Output
_____no_output_____
###Markdown
Aggregate function
###Code
import numpy as np
pd.crosstab(df.Sex, df.Handedness, values=df.Age, aggfunc=np.average)
###Output
_____no_output_____
###Markdown
Automotive dataset example Define the headers since the data does not have any
###Code
headers = ["symboling", "normalized_losses", "make", "fuel_type", "aspiration","num_doors", "body_style", "drive_wheels",
"engine_location", "wheel_base", "length", "width", "height", "curb_weight", "engine_type", "num_cylinders",
"engine_size", "fuel_system", "bore", "stroke", "compression_ratio", "horsepower", "peak_rpm", "city_mpg",
"highway_mpg", "price"]
###Output
_____no_output_____
###Markdown
Read in the CSV file and convert "?" to NaN
###Code
df_raw = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data', header=None, names=headers, na_values="?" )
###Output
_____no_output_____
###Markdown
Define a list of models that we want to review
###Code
models = ["toyota","nissan","mazda", "honda", "mitsubishi", "subaru", "volkswagen", "volvo"]
###Output
_____no_output_____
###Markdown
Create a copy of the data with only the top 8 manufacturers
###Code
df = df_raw[df_raw.make.isin(models)].copy()
###Output
_____no_output_____
###Markdown
CrossTab: make vs body_style
###Code
pd.crosstab(df.make, df.body_style)
###Output
_____no_output_____
###Markdown
Groupby
###Code
df.groupby(['make', 'body_style'])['body_style'].count().unstack().fillna(0)
###Output
_____no_output_____
###Markdown
Pivot table
###Code
df.pivot_table(index='make' , columns= 'body_style' , aggfunc={ 'body_style' :len}, fill_value=0)
###Output
_____no_output_____
###Markdown
Crosstab: make vs num_doors
###Code
pd.crosstab(df.make, df.num_doors, margins=True, margins_name="Total")
###Output
_____no_output_____
###Markdown
Crosstab: Multi-index
###Code
pd.crosstab(df.make, [df.body_style, df.drive_wheels])
###Output
_____no_output_____
###Markdown
Crosstab: Normalize
###Code
pd.crosstab([df.make, df.num_doors], [df.body_style, df.drive_wheels], rownames=['Auto Manufacturer', "Doors"],
colnames=['Body Style', "Drive Type"], dropna=False)
###Output
_____no_output_____
###Markdown
A combination
###Code
pd.crosstab(df.make, [df.body_style, df.drive_wheels], values=df.curb_weight, aggfunc='mean').fillna('-')
###Output
_____no_output_____
###Markdown
Normalization All
###Code
pd.crosstab(df.make, df.body_style, normalize=True)
###Output
_____no_output_____
###Markdown
Rows
###Code
pd.crosstab(df.make, df.body_style, normalize='index')
###Output
_____no_output_____
###Markdown
Columns
###Code
pd.crosstab(df.make, df.body_style, normalize='columns')
###Output
_____no_output_____
hash_to_emoji.ipynb | ###Markdown
hash_to_emojiMitchell / Isthmus - July 2020Twitter recently applied a filter that appears to block any tweets containing alphanumeric strings longer than 26 characters. Unfortunately this includes hash digests (among many other use cases).This inspired the latest cryptographic steganographic innovation for censorship resistance: `hash_to_emoji` ExampleInput: `some prediction for the future`Output: 🐇🐈☁❄☃☃🌁🐕🌀💀☃🌁🎺🐕☃🐁✉👀🌁👀🌀🌀🐕🐁☁☃🌀☃🐈👀👍🐇☃🐈🎺🐕☂☃🐈🐇🐇❄🔔🐇❄💀☁🐇🐇☂👍☁🐕☁🔔💀🐈👍👍❄🐇🌀☃💀 Notes - The 1:1 mapping from hex representation digit to emoji is painfully inefficient. Shorter final digests could be produced by using more characters from the large emoji set. - A possible extension would be an efficient (bidirectional) translation between arbitrary data blobs and emoji strings. (Silly example: can't access a p2p blockchain network to broadcast your transaction? Just convert it to an emoji string and tweet at @xyzGateway to be added to the main mempool) Import libraries
###Code
#!pip install emoji
import emoji
import hashlib
###Output
_____no_output_____
###Markdown
Inputs
###Code
message_to_hash = 'some prediction for the future'
###Output
_____no_output_____
###Markdown
Process Calculate hashYou can easily swap in different algorithms from hashlib
###Code
raw_hash = hashlib.sha256(message_to_hash.encode()).hexdigest()
###Output
_____no_output_____
###Markdown
Convert alphanumeric hash to emoji set
###Code
mapping = {
"0":":skull:",
"1":":umbrella:",
"2":":cloud:",
"3":":snowflake:",
"4":":snowman:",
"5":":trumpet:",
"6":":cyclone:",
"7":":foggy:",
"8":":eyes:",
"9":":cat:",
"a":":dog:",
"b":":mouse:",
"c":":bell:",
"d":":rabbit:",
"e":":envelope:",
"f":":thumbs_up:"
}
output_vec = str()
for i in range(len(raw_hash)):
this_char = raw_hash[i]
output_vec = output_vec + mapping[this_char]
###Output
_____no_output_____
###Markdown
Provide output
###Code
emoji_str = emoji.emojize(output_vec)
print(emoji.emojize('\nHash digest:\n\n' + emoji_str))
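# Reverse direction (added sketch), assuming the `mapping` dict defined above:
# recover the hex digest from the emoji string.
reverse_mapping = {emoji.emojize(v): k for k, v in mapping.items()}
decoded_hex = ''.join(reverse_mapping[ch] for ch in emoji_str)
assert decoded_hex == raw_hash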
###Output
Hash digest:
🐇🐈☁❄☃☃🌁🐕🌀💀☃🌁🎺🐕☃🐁✉👀🌁👀🌀🌀🐕🐁☁☃🌀☃🐈👀👍🐇☃🐈🎺🐕☂☃🐈🐇🐇❄🔔🐇❄💀☁🐇🐇☂👍☁🐕☁🔔💀🐈👍👍❄🐇🌀☃💀
|
00_Intro.ipynb | ###Markdown
Python for EpisAll these files are here: https://github.com/kialio/py4Epis You should be able to install python and run them after this. Feel free to ask me questions now or later.* I'm going to give some background and then some high level examples as fast as I can...* There are many examples on the web. * There are even some for SAS Users. Here's a good one: https://github.com/RandyBetancourt/PythonForSASUsers Objectives* Introduce you to the Python language* Show its utility in your research life* (I'm not going to show how to install python or get it going on your machine, if you want to get going quickly, check out conda: https://docs.conda.io/en/latest/) Credits* Borrowed heavily from https://github.com/profjsb/python-bootcamp Who I Am Jeremy Perkins[@oldmanperkins](https://twitter.com/oldmanperkins)https://github.com/kialioI work at NASA/GSFC (here as a private citizen) and work on developing next generation gamma-ray instrumentation ([AMEGO](https://asd.gsfc.nasa.gov/amego/), [BurstCube](https://asd.gsfc.nasa.gov/burstcube/)). I use python to analyze data, control hardware, figure out budgets (I try to get data out of the excel spreadsheets my financial people give me as fast as possible), make pretty plots... Introduction* What is Python?* Why Python?* Getting Started... What is Python?>Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components together. Python's simple, easy to learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages program modularity and code reuse. The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms, and can be freely distributed.https://www.python.org/doc/essays/blurb/ What is Python? interpreted no need for a compiling stage object-oriented programming paradigm that uses objects (complex data structures with methods) high level abstraction from the way machine interprets & executes dynamic semantics can change meaning on-the-fly built in core language (not external) data structures ways of storing/manipulating data script/glue programs that control other programs typing the sort of variable (int, string) syntax grammar which defines the language library reusable collection of code binary a file that you can run/execute Development History* Started over the Christmas break 1989, by Guido van Rossum* Developed in the early 1990s* Name comes from Monty Python’s Flying Circus* Guido is the Benevolent Dictator for Life (BDFL), meaning that he continues to oversee Python’s development. Development History* Open-sourced development from the start (BSD licensed now) * http://www.opensource.org/licenses/bsd-license.php* Relies on large community input (bugs, patches) and 3rd party add-on software* Version 2.0 (2000), 2.6 (2008), 2.7 (2010). * Version 2.7.X is reaching end of life this year.* Version 3.X (2008) is not backward compatible with 1.X & 2.X. If you're starting now, use 3.X. 
Why Python Some of the AlternativesI've used almost all of these at some point C, C++, Fortran*Pros: great performance, backbone of legacy scientific computing codes*`Cons: syntax not optimized for causal programming, no interactive facilities, difficult visualization, text processing, etc. ` Mathmatica, Maple, Matlab, IDL (and I guess SAS, SPSS,...)*Pros: interactive, great visuals, extensive libraries*`Cons: costly, proprietary, unpleasant for large-scale programs and non-mathematical tasks.` Perlhttp://strombergers.com/python/ Why Python* **Free** (BSD license), highly portable (Linux, OSX, Windows, lots...)* **Interactive** interpreter provided.* Extremely readable syntax (**“executable pseudo-code”**). * **Simple**: non-professional programmers can use it effectively * great documentation * total abstraction of memory management * Clean object-oriented model, but **not mandatory**.* Rich built-in types: lists, sets, dictionaries (hash tables), strings, ... * Very comprehensive standard library (**batteries included**) * Standard libraries for IDL/Matlab-like arrays (NumPy)* Easy to wrap existing C, C++ and FORTRAN codes. Why Python Amazingly Scalable* Interactive experimentation * build small, self-contained scripts or million-lines projects. * From occasional/novice to full-time use (try that with C++).* Large community of open source packages The Kitchen Sink (in a good way)* really can do anything you want, with impressive simplicity Performance, if you need it* As an interpreted language, Python is slow.* But...if you need speed you can do the heavy lifting in C or FORTRAN ...or you can use a Python compiler (e.g., Cython) My Group Uses Python For Providing a comprehensive analysis framework for Fermi LAT data(I was forced into using python...)* Interface to the low-level (c++) code - Interactive data analysis* Scripting* Developing new analysis techniques* Adding features to static code quickly* Providing high-level analysis tools (data selection, statistical testing, simulation development, plot making, and so on and so forth)* Validation and Testing What I Use Python For* Data reduction & Analysis * processing FITS images quickly * wrapping around 3rd party software* A Handy & Quick Calculator* Prototyping new algorithms/ideas* Making plots for papers* Notebooking (i.e. making me remember stuff) * see the iPython sessions later* Writing Presentations (these slides)* Controling hardware Python is everywherehttps://wiki.python.org/moin/OrganizationsUsingPython Applications are Numerous* Scripting and Programing* GUI's* Web Development* Interactive Notebooks (see later)* Visualization* Parralelization* Animation* And so on... Firing up the interpreter in OSX Go to Utilities->Terminal***`[pyuser@pymac ~]$ python``Python 3.6.7 | packaged by conda-forge | (default, Jul 2 2019, 02:07:37)``[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin``Type "help", "copyright", "credits" or "license" for more information.``>>>`***The details might be different (different version, different compiler). You could also use iPython:***`[pyuser@pymac ~]$ ipython ``Python 3.6.7 | packaged by conda-forge | (default, Jul 2 2019, 02:07:37)``Type 'copyright', 'credits' or 'license' for more information``IPython 7.8.0 -- An enhanced Interactive Python. Type '?' for help.``In [1]:`*** Firing it up in other OS's like WindowsInstall python via Conda and follow the directions. 
Creating Python Programs and Scripts* Basically, any raw text editor will do * Lot's of the basic ones will do syntax highlighting (reccomended)* You create a python program or script file in the text editor and usually save it with a *.py extension* There are lots of programs out there that can do this and have fancy markup. * I'm still using emacs * List: https://wiki.python.org/moin/PythonEditors * Make sure it saves as raw text (and not rich text or something else) Last Thing: The Notebook* The jupyter Notebook is a powerful tool* You **will** want to use it.* To start it up from the terminal type`jupyter notebook`and a browser window should open that looks like this
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.xkcd()
plt.figure(figsize=(16,8))
x = np.arange(10)
plt.plot(x,x+0.5*x*x)
plt.xlabel('Years Since Release')
plt.ylabel('Interest in Python')
plt.show()
###Output
_____no_output_____
###Markdown
Why Are We Interested in RAPIDS? (and GPU, CUDA, Numba...) Let's start by taking a really straightforward look at GPU benefit without RAPIDSHere are 1 million numbers and their square roots in (regular) Python:
###Code
import math
numbers = list(range(1000000))
%%timeit
s = [math.sqrt(x) for x in numbers]
###Output
_____no_output_____
###Markdown
Using NumPy (https://numpy.org/) we can both vectorize our operation and leverage a native (C) implementation from Python.Don't know about NumPy? It's a core part of the SciPy stack, and provides an implementation of tensors (multi-dimensional array) and tensor math, where the underlying storage is native (not Python objects) and operations are implemented in native extenstion ... so it's Python-friendly, but much faster.The most common Python data science tools -- things like Pandas and Scikit-Learn -- are built on top of NumPy.
###Code
import numpy as np
np_numbers = np.array(numbers)
%%timeit
np_s = np.sqrt(np_numbers)
###Output
_____no_output_____
###Markdown
That's pretty nice. Of course, maybe we just started out with Python as an easy target.Let's look at jitted compiled code with Numba.(Don't know about Numba? You're going to love it: a great JIT add-on that can target CPU as well as multiple flavors of GPU ... learn more at https://numba.pydata.org/)
###Code
import numba
@numba.jit
def root(n):
return np.sqrt(n)
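# Note (added): Numba compiles `root` on its first call, so this warm-up call keeps the
# compilation cost out of the %%timeit measurement below.
_ = root(np_numbers)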
%%timeit
numba_s = root(np_numbers)
###Output
_____no_output_____
###Markdown
Not bad. But we're here for GPUs ... will the GPU help much?A few libraries make it easy to do matrix operations like this on GPU ... two of the most popular/famous are PyTorch and CuPy
###Code
import cupy
gpu_numbers = cupy.array(numbers)
%%timeit
gpu_squares = cupy.sqrt(gpu_numbers)
###Output
_____no_output_____
###Markdown
Exploratory Multivariate Analysis of Geochemical DatasetsCompiled by [Morgan Williams](mailto:[email protected]) for C3DIS 2018 This collection of Jupyter notebooks illustrates some common simple problems encountered with geochemical data, and some solutions. They cover the majority of the workflow outlined below, but represent what is generally a work in progress. Associated data is sourced solely from the [EarthChem data portal](http://ecp.iedadata.org/), and is here stored in an S3 bucket for simplicity. The Workflow The data analysis workflow denoted below lists some common necessary tasks to derive useful insight from geochemical data. Much of this is common to any data science workflow, but due to the nature of the geochemical data itself, a few of these processes are still current research problems. Our research aims not to introduce radical change in methodology, but instead to simply streamline and standardise the process, such that we can use geochemistry in a robust way to address geological problems.  The Problem Much has happened since our planet was a primitive ball of molten rock, including the origin of plate tectonics, the modern atmosphere and life. This extended geological history has been encoded into chemical signatures of rocks and minerals, which may then be used to (partially) reconstruct the past.Inverting geochemistry to infer the geological past is commonly an underdetermined problem (especially prior to the advent of modern geochemical analysis instrumentation), and is hindered by complex geological histories.Modern analytical methods have higher throughput and greater sensitivity and precision. As a result, established publicly-accessible geochemical databases are growing steadily. However, the potential value of aggregating the increasing volume of high-quality data has not yet been fully realised. The Other Problems.. Before we can tackle the geological problems, we must first have a dataset which is consistently formatted and which contains relevant data of sufficient accuracy (lest we achieve simply *"garbage in, garbage out"*). These notebooks illustrate some of these processing steps, and demonstrate some approaches for the initial stages of data exploration. The Data If you wish to download a subset of the EarthChem data to this binder server (approx 300 MB as a sparse dataframe) such that it can be accessed in later notebooks, do so below. If you do not, it will instead be downloaded *on-run* as necessary. Please note this can take more than a minute even on a good day.
###Code
%matplotlib inline
%load_ext autoreload
%load_ext memory_profiler
%autoreload 2
%%time
import sys
sys.path.insert(0, './src')
from datasource import download_data, load_df
download_data('EarthChemData.pkl', 'EarthChemData.pkl')
%%memit
df = load_df('EarthChemData.pkl')
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1073034 entries, 0 to 2092330
Data columns (total 71 columns):
SampleID 1073028 non-null object
Source 1073034 non-null object
Reference 1073034 non-null object
CruiseID 180711 non-null object
Latitude 1073034 non-null float64
Longitude 1073034 non-null float64
LocPrec 1073034 non-null float64
MinAge 614923 non-null float64
Age 607557 non-null float64
MaxAge 625776 non-null float64
Method 1073034 non-null object
Material 1073034 non-null object
Type 1073019 non-null object
Composition 1073034 non-null object
RockName 1073034 non-null object
Na2O 378229 non-null float64
MgO 375983 non-null float64
Al2O3 375158 non-null float64
SiO2 381264 non-null float64
P2O5 343454 non-null float64
K2O 391758 non-null float64
CaO 375813 non-null float64
TiO2 375673 non-null float64
MnO 349445 non-null float64
FeOT 485685 non-null float64
Li 44788 non-null float64
Be 74981 non-null float64
B 42947 non-null float64
Mg 92342 non-null float64
Cl 37633 non-null float64
K 58797 non-null float64
Ca 103305 non-null float64
Sc 227228 non-null float64
Ti 109537 non-null float64
V 249099 non-null float64
Cr 278450 non-null float64
Mn 106043 non-null float64
Fe 120046 non-null float64
Co 209541 non-null float64
Ni 281268 non-null float64
Cu 228744 non-null float64
Zn 220164 non-null float64
Ga 126015 non-null float64
Rb 275938 non-null float64
Sr 367161 non-null float64
Y 308961 non-null float64
Zr 337013 non-null float64
Nb 240845 non-null float64
Mo 37700 non-null float64
Cs 95928 non-null float64
Ba 341793 non-null float64
La 264928 non-null float64
Ce 232241 non-null float64
Pr 89315 non-null float64
Nd 199149 non-null float64
Sm 175005 non-null float64
Eu 162006 non-null float64
Gd 117043 non-null float64
Tb 138647 non-null float64
Dy 104030 non-null float64
Ho 90438 non-null float64
Er 99464 non-null float64
Tm 86574 non-null float64
Yb 186035 non-null float64
Lu 143638 non-null float64
Hf 133165 non-null float64
Ta 121178 non-null float64
Pb 201956 non-null float64
Th 190403 non-null float64
U 147985 non-null float64
TotalAlkali 362866 non-null float64
dtypes: float64(62), object(9)
memory usage: 589.4+ MB
peak memory: 1527.42 MiB, increment: 1423.32 MiB
|
Filter/filter_in_list.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
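# Optional sanity check, not part of the original script (requires the
# initialized Earth Engine session from above):
# print(selected.size().getInfo())                    # expected: 4
# print(selected.aggregate_array('NAME').getInfo())   # the selected state names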
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as geemap
except:
import geemap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
Pydeck Earth Engine IntroductionThis is an introduction to using [Pydeck](https://pydeck.gl) and [Deck.gl](https://deck.gl) with [Google Earth Engine](https://earthengine.google.com/) in Jupyter Notebooks. If you wish to run this locally, you'll need to install some dependencies. Installing into a new Conda environment is recommended. To create and enter the environment, run:
```
conda create -n pydeck-ee -c conda-forge python jupyter notebook pydeck earthengine-api requests -y
source activate pydeck-ee
jupyter nbextension install --sys-prefix --symlink --overwrite --py pydeck
jupyter nbextension enable --sys-prefix --py pydeck
```
then open Jupyter Notebook with `jupyter notebook`. Now in a Python Jupyter Notebook, let's first import required packages:
###Code
from pydeck_earthengine_layers import EarthEngineLayer
import pydeck as pdk
import requests
import ee
###Output
_____no_output_____
###Markdown
AuthenticationUsing Earth Engine requires authentication. If you don't have a Google account approved for use with Earth Engine, you'll need to request access. For more information and to sign up, go to https://signup.earthengine.google.com/. If you haven't used Earth Engine in Python before, you'll need to run the following authentication command. If you've previously authenticated in Python or the command line, you can skip the next line.Note that this creates a prompt which waits for user input. If you don't see a prompt, you may need to authenticate on the command line with `earthengine authenticate` and then return here, skipping the Python authentication.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create MapNext it's time to create a map. Here we create an `ee.Image` object
###Code
# Initialize objects
ee_layers = []
view_state = pdk.ViewState(latitude=37.7749295, longitude=-122.4194155, zoom=10, bearing=0, pitch=45)
# %%
# Add Earth Engine dataset
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
ee_layers.append(EarthEngineLayer(ee_object=ee.Image().paint(selected,0,2), vis_params={'palette':'yellow'}))
###Output
_____no_output_____
###Markdown
Then just pass these layers to a `pydeck.Deck` instance, and call `.show()` to create a map:
###Code
r = pdk.Deck(layers=ee_layers, initial_view_state=view_state)
r.show()
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://geemap.org). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('Installing geemap ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
import ee
import geemap
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
###Code
Map = geemap.Map(center=[40,-100], zoom=4)
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically install its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
states = ee.FeatureCollection('TIGER/2018/States')
selected = states.filter(ee.Filter.inList("NAME", ['California', 'Nevada', 'Utah', 'Arizona']))
Map.centerObject(selected, 6)
Map.addLayer(ee.Image().paint(selected, 0, 2), {'palette': 'yellow'}, 'Selected')
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____ |
NoSQL/NetworkX/plot_labels_and_colors.ipynb | ###Markdown
Labels And ColorsDraw a graph with matplotlib, color by degree.You must have matplotlib for this to work.
###Code
# Author: Aric Hagberg ([email protected])
import matplotlib.pyplot as plt
import networkx as nx
G = nx.cubical_graph()
pos = nx.spring_layout(G) # positions for all nodes
# nodes
nx.draw_networkx_nodes(G, pos,
nodelist=[0, 1, 2, 3],
node_color='r',
node_size=500,
alpha=0.8)
nx.draw_networkx_nodes(G, pos,
nodelist=[4, 5, 6, 7],
node_color='b',
node_size=500,
alpha=0.8)
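# An alternative (not in the original gallery script): color nodes by degree,
# as the title suggests, by passing the degree sequence as node_color:
# degrees = [d for _, d in G.degree()]
# nx.draw_networkx_nodes(G, pos, node_color=degrees, cmap=plt.cm.viridis, node_size=500)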
# edges
nx.draw_networkx_edges(G, pos, width=1.0, alpha=0.5)
nx.draw_networkx_edges(G, pos,
edgelist=[(0, 1), (1, 2), (2, 3), (3, 0)],
width=8, alpha=0.5, edge_color='r')
nx.draw_networkx_edges(G, pos,
edgelist=[(4, 5), (5, 6), (6, 7), (7, 4)],
width=8, alpha=0.5, edge_color='b')
# some math labels
labels = {}
labels[0] = r'$a$'
labels[1] = r'$b$'
labels[2] = r'$c$'
labels[3] = r'$d$'
labels[4] = r'$\alpha$'
labels[5] = r'$\beta$'
labels[6] = r'$\gamma$'
labels[7] = r'$\delta$'
nx.draw_networkx_labels(G, pos, labels, font_size=16)
plt.axis('off')
plt.show()
###Output
_____no_output_____ |
03_DRL_Agent_en.ipynb | ###Markdown
Tutorial 3: Demonstration of developing an original *Agent* with DRLThis tutorial demonstrates how to develop an *Agent* with a DRL algorithm by using ***KSPDRLAgent***. The *Agent* base classes are as follows: - `Agent` (used in **Tutorial 2**)- `KSPAgent` (used in **Tutorial 2**)- `PrioritizedKSPAgent` (used in **Tutorial 2**)- `KSPDRLAgent`
###Code
!pip install git+https://github.com/Optical-Networks-Group/rsa-rl.git
###Output
Collecting git+https://github.com/Optical-Networks-Group/rsa-rl.git
Cloning https://github.com/Optical-Networks-Group/rsa-rl.git to c:\users\khuatduc\appdata\local\temp\pip-req-build-09phlp88
Resolved https://github.com/Optical-Networks-Group/rsa-rl.git to commit 4b82c519742fa47b1537204780174cdb0c2f4ae0
Requirement already satisfied: bitarray>=1.2.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (2.3.0)
Requirement already satisfied: networkx>=2.5 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (2.5)
Requirement already satisfied: tensorboard>=2.2.2 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (2.4.0)
Requirement already satisfied: tensorboardX>=2.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (2.5)
Requirement already satisfied: torch>=1.5.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (1.10.2)
Requirement already satisfied: plotly>=4.9.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (5.6.0)
Requirement already satisfied: dash>=1.14.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (2.2.0)
Requirement already satisfied: dash-bootstrap-components>=0.10.7 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (1.0.3)
Requirement already satisfied: pfrl>=0.1.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from rsarl==1.0.0) (0.3.0)
Requirement already satisfied: Flask>=1.0.4 in c:\users\khuatduc\anaconda3\lib\site-packages (from dash>=1.14.0->rsarl==1.0.0) (1.1.2)
Requirement already satisfied: dash-html-components==2.0.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from dash>=1.14.0->rsarl==1.0.0) (2.0.0)
Requirement already satisfied: dash-table==5.0.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from dash>=1.14.0->rsarl==1.0.0) (5.0.0)
Requirement already satisfied: flask-compress in c:\users\khuatduc\anaconda3\lib\site-packages (from dash>=1.14.0->rsarl==1.0.0) (1.11)
Requirement already satisfied: dash-core-components==2.0.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from dash>=1.14.0->rsarl==1.0.0) (2.0.0)
Requirement already satisfied: itsdangerous>=0.24 in c:\users\khuatduc\anaconda3\lib\site-packages (from Flask>=1.0.4->dash>=1.14.0->rsarl==1.0.0) (1.1.0)
Requirement already satisfied: Jinja2>=2.10.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from Flask>=1.0.4->dash>=1.14.0->rsarl==1.0.0) (2.11.3)
Requirement already satisfied: Werkzeug>=0.15 in c:\users\khuatduc\anaconda3\lib\site-packages (from Flask>=1.0.4->dash>=1.14.0->rsarl==1.0.0) (1.0.1)
Requirement already satisfied: click>=5.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from Flask>=1.0.4->dash>=1.14.0->rsarl==1.0.0) (7.1.2)
Requirement already satisfied: MarkupSafe>=0.23 in c:\users\khuatduc\anaconda3\lib\site-packages (from Jinja2>=2.10.1->Flask>=1.0.4->dash>=1.14.0->rsarl==1.0.0) (2.0.1)
Requirement already satisfied: decorator>=4.3.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from networkx>=2.5->rsarl==1.0.0) (5.0.6)
Requirement already satisfied: numpy>=1.10.4 in c:\users\khuatduc\anaconda3\lib\site-packages (from pfrl>=0.1.0->rsarl==1.0.0) (1.20.2)
Requirement already satisfied: pillow in c:\users\khuatduc\anaconda3\lib\site-packages (from pfrl>=0.1.0->rsarl==1.0.0) (8.4.0)
Requirement already satisfied: gym>=0.9.7 in c:\users\khuatduc\anaconda3\lib\site-packages (from pfrl>=0.1.0->rsarl==1.0.0) (0.21.0)
Requirement already satisfied: filelock in c:\users\khuatduc\anaconda3\lib\site-packages (from pfrl>=0.1.0->rsarl==1.0.0) (3.4.2)
Requirement already satisfied: cloudpickle>=1.2.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from gym>=0.9.7->pfrl>=0.1.0->rsarl==1.0.0) (1.6.0)
Requirement already satisfied: importlib-metadata>=4.8.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from gym>=0.9.7->pfrl>=0.1.0->rsarl==1.0.0) (4.11.1)
Requirement already satisfied: zipp>=0.5 in c:\users\khuatduc\anaconda3\lib\site-packages (from importlib-metadata>=4.8.1->gym>=0.9.7->pfrl>=0.1.0->rsarl==1.0.0) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4 in c:\users\khuatduc\anaconda3\lib\site-packages (from importlib-metadata>=4.8.1->gym>=0.9.7->pfrl>=0.1.0->rsarl==1.0.0) (3.7.4.3)
Requirement already satisfied: six in c:\users\khuatduc\anaconda3\lib\site-packages (from plotly>=4.9.0->rsarl==1.0.0) (1.16.0)
Requirement already satisfied: tenacity>=6.2.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from plotly>=4.9.0->rsarl==1.0.0) (8.0.1)
Requirement already satisfied: markdown>=2.6.8 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (3.3.4)
Requirement already satisfied: protobuf>=3.6.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (3.17.2)
Requirement already satisfied: grpcio>=1.24.3 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (1.35.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (1.21.3)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (0.4.4)
Requirement already satisfied: requests<3,>=2.21.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (2.25.1)
Requirement already satisfied: setuptools>=41.0.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (58.0.4)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (1.6.0)
Requirement already satisfied: wheel>=0.26 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (0.36.2)
Requirement already satisfied: absl-py>=0.4 in c:\users\khuatduc\anaconda3\lib\site-packages (from tensorboard>=2.2.2->rsarl==1.0.0) (0.13.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.2->rsarl==1.0.0) (4.2.2)
Requirement already satisfied: pyasn1-modules>=0.2.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.2->rsarl==1.0.0) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4 in c:\users\khuatduc\anaconda3\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard>=2.2.2->rsarl==1.0.0) (4.7.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.2->rsarl==1.0.0) (1.3.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in c:\users\khuatduc\anaconda3\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard>=2.2.2->rsarl==1.0.0) (0.4.8)
Requirement already satisfied: chardet<5,>=3.0.2 in c:\users\khuatduc\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.2->rsarl==1.0.0) (4.0.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\khuatduc\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.2->rsarl==1.0.0) (2021.10.8)
Requirement already satisfied: idna<3,>=2.5 in c:\users\khuatduc\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.2->rsarl==1.0.0) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\khuatduc\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard>=2.2.2->rsarl==1.0.0) (1.26.8)
Requirement already satisfied: oauthlib>=3.0.0 in c:\users\khuatduc\anaconda3\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard>=2.2.2->rsarl==1.0.0) (3.1.1)
Requirement already satisfied: brotli in c:\users\khuatduc\anaconda3\lib\site-packages (from flask-compress->dash>=1.14.0->rsarl==1.0.0) (1.0.9)
###Markdown
Evaluation SettingsFor evaluation, prepare the *Environment* and an evaluation function. Please see **Tutorial 1** if you have not seen it yet.
###Code
import functools
import numpy as np
from rsarl.envs import DeepRMSAEnv, make_multiprocess_vector_env
from rsarl.requester import UniformRequester
from rsarl.networks import SingleFiberNetwork
from rsarl.evaluator import batch_warming_up, batch_evaluation, batch_summary
# Set the device id to use GPU. To use CPU only, set it to -1.
gpu = -1
# exp settings
n_requests = 100
n_envs, seed = 2, 0
# build network
net = SingleFiberNetwork("nsf", n_slot=60, is_weight=True)
# build requester
requester = UniformRequester(
net.n_nodes,
avg_service_time=10,
avg_request_arrival_rate=12)
# build env
env = DeepRMSAEnv(net, requester)
# envs for training and evaluation
envs = make_multiprocess_vector_env(env, n_envs, seed, test=False)
test_envs = make_multiprocess_vector_env(env, n_envs, seed, test=True)
def _evaluation(envs, agent, n_requests):
# start simulation
envs.reset()
#
batch_warming_up(envs, agent, n_requests=3000)
# evaluation
experiences = batch_evaluation(envs, agent, n_requests=n_requests)
# calc performance
blocking_probs, avg_utils, total_rewards = batch_summary(experiences)
for env_id, (blocking_prob, avg_util, total_reward) in enumerate(zip(blocking_probs, avg_utils, total_rewards)):
print(f'[{env_id}-th ENV]Blocking Probability: {blocking_prob}')
print(f'[{env_id}-th ENV]Avg. Slot-utilization: {avg_util}')
print(f'[{env_id}-th ENV]Total Rewards: {total_reward}')
# evaluation with test environments
evaluation = functools.partial(_evaluation, envs=test_envs, n_requests=n_requests)
###Output
_____no_output_____
###Markdown
Step 1: Select a DRL algorithm from PFRL*RSA-RL* assumes that a DRL algorithm provided by the [PFRL](https://github.com/pfnet/pfrl) library is used. ***PFRL*** is a DRL library that implements various state-of-the-art deep reinforcement learning algorithms in Python using [PyTorch](https://github.com/pytorch/pytorch). Discrete-action algorithms are as follows: - ***DQN (Double DQN)***- ***Rainbow***- ***IQN***- ***A3C***, ***A2C***- ***ACER***- ***PPO***- ***TRPO***In this tutorial, we try to reproduce the prior [DeepRMSA](https://ieeexplore.ieee.org/document/8386173), which applies DRL to a ***routing algorithm*** that selects one of the *k* shortest paths. This tutorial calls it ***DeepRMSAv1*** and implements it using ***Double DQN (DDQN)***. In the case of DDQN, there are three steps:1. Build a deep neural network (DNN) model2. Specify the ***Explore*** and ***Replay Buffer***, e.g., epsilon greedy and prioritized replay buffer, respectively3. Build the DDQNFirst, you develop a DNN whose number of outputs is *k*.
###Code
import pfrl
import torch
import torch.nn as nn
class DeepRMSAv1_DNN(torch.nn.Module):
def __init__(self, SLOT: int, ICH: int, K: int, n_edges: int):
super().__init__()
self.SLOT = SLOT
# CNN
self.conv = nn.Sequential(*[
nn.Conv2d(ICH, 1, kernel_size=(1,1), stride=(1, 1)),
nn.ReLU(),
            # 2 conv layers with 16 filters
nn.Conv2d(1, 16, kernel_size=(n_edges,1), stride=(1, 1)),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=(1,1), stride=(1, 1)),
nn.ReLU(),
# 2 depthwise conv layers with 1 filter
nn.ZeroPad2d((1, 0, 0, 0)), # left, right, top, bottom
nn.Conv2d(16, 16, kernel_size=(1,2), stride=(1, 1), groups=16),
nn.ReLU(),
nn.ZeroPad2d((1, 0, 0, 0)),
nn.Conv2d(16, 16, kernel_size=(1,2), stride=(1, 1), groups=16),
nn.ReLU(),
])
# fc
self.fc = nn.Sequential(*[
nn.Linear(SLOT*16, 128),
nn.ReLU(),
nn.Linear(128, 50),
nn.ReLU(),
nn.Linear(50, K),
])
def forward(self, x):
h = x
h = self.conv(h)
h = h.view(-1, self.SLOT*16)
h = self.fc(h)
return pfrl.action_value.DiscreteActionValue(h)
# Experimental Settings
K = 5
# slot-table(1) + one-hot-node * 2 + bandwidth(1)
ICH = 1 + 2 * net.n_nodes + 1
# build DNN for Q-function
q_func = DeepRMSAv1_DNN( net.n_slot, ICH, K, net.n_edges)
# Specify optimizer
optimizer = torch.optim.Adam(q_func.parameters(), eps=1e-2)
###Output
_____no_output_____
###Markdown
Specify *Explore* and *Replay Buffer*This tutorial selects ConstantEpsilonGreedy. If you want to use others, please refer to *PFRL*'s documentation:- [explore](https://pfrl.readthedocs.io/en/latest/explorers.html)- [replay buffer](https://pfrl.readthedocs.io/en/latest/replay_buffers.html)
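A hedged sketch of two drop-in alternatives (an annealed epsilon schedule and a prioritized buffer); the hyperparameter values below are illustrative only, and `action_sampler` refers to the random-action function defined in the next cell:
```
# illustrative alternatives, not used in the rest of this tutorial
explorer = pfrl.explorers.LinearDecayEpsilonGreedy(
    start_epsilon=1.0, end_epsilon=0.05, decay_steps=50000,
    random_action_func=action_sampler)
replay_buffer = pfrl.replay_buffers.PrioritizedReplayBuffer(capacity=10 ** 6)
```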
###Code
def _action_sampler(k):
return np.random.randint(0, k)
# random action function
action_sampler = functools.partial(_action_sampler, k=K)
# Set the discount factor that discounts future rewards.
gamma = 0.99
# Use epsilon-greedy for exploration
explorer = pfrl.explorers.ConstantEpsilonGreedy(
epsilon=0.1, random_action_func=action_sampler)
# DQN uses Experience Replay.
# Specify a replay buffer and its capacity.
replay_buffer = pfrl.replay_buffers.ReplayBuffer(capacity=10 ** 6, num_steps=50)
###Output
_____no_output_____
###Markdown
Build DDQNNOTE that since DeepRMSAv1 does not provide sufficient information about its hyperparameters, we cannot reproduce it precisely.
###Code
# Now create an agent that will interact with the environment.
DDQN = pfrl.agents.DQN(
q_func,
optimizer,
replay_buffer,
gamma,
explorer,
minibatch_size=50,
update_interval=1,
replay_start_size=500,
target_update_interval=100,
gpu=gpu,
)
###Output
_____no_output_____
###Markdown
Step 2: Develop your algorithm by using *KSPDRLAgent**RSA-RL* provides ***KSPDRLAgent***, which is based on the *KSPAgent* class, which means that the ***k-shortest path table*** can be used. You need to override two methods: - `preprocess`: create a *feature vector* from an *observation*- `map_drlout_to_action`: map the output of the DRL algorithm to an *Action*
###Code
import numpy as np
import networkx as nx
from rsarl.data import Action
from rsarl.agents import KSPDRLAgent
from rsarl.utils import cal_slot, sort_tuple
from rsarl.algorithms import SpectrumAssignment
def vectorize(n_nodes: int, node_id: int):
mp = np.eye(n_nodes, dtype=np.float32)[node_id].reshape(-1, 1, 1)
return mp
class DRLAgent(KSPDRLAgent):
def preprocess(self, obs):
"""
"""
net = obs.net
source, destination, bandwidth, duration = obs.request
# slot table
whole_slot = np.array(list(nx.get_edge_attributes(net.G, name="slot").values()))
whole_slot = whole_slot.reshape(1, net.n_edges, net.n_slot).astype(np.float32)
# source, destination, bandwidth map
smap = np.ones_like(whole_slot) * vectorize(net.n_nodes, source)
dmap = np.ones_like(whole_slot) * vectorize(net.n_nodes, destination)
bmap = np.ones_like(whole_slot) * bandwidth
# concate: (1, ICH, #edges, #slots)
fvec = np.concatenate([whole_slot, smap, dmap, bmap], axis=0)
return fvec.astype(np.float32, copy=False)
def map_drlout_to_action(self, obs, out):
net = obs.net
s, d, bandwidth, duration = obs.request
paths = self.path_table[sort_tuple((s, d))]
# map
path = paths[out]
#required slots
path_len = net.distance(path)
n_req_slot = cal_slot(bandwidth, path_len)
#FF
path_slot = net.path_slot(path)
slot_index = SpectrumAssignment.first_fit(path_slot, n_req_slot)
if slot_index is None:
return None
else:
return Action(path, slot_index, n_req_slot, duration)
agent = DRLAgent(k=5, drl=DDQN)
# prepare path table
agent.prepare_ksp_table(net)
###Output
_____no_output_____
###Markdown
Step 3: Train and Evaluate the *DRL Agent*Finally, let's train and evaluate! Interaction between the *Agent* and the *Environment* automatically trains the *Agent*. NOTE that before evaluation, you should switch the DRL model to ***evaluation mode*** with the `eval_mode` method so that the *explorer* does not run.
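Once the training loop below has finished, you may also want to persist the learned weights. A minimal sketch, assuming the `agent` built above (PFRL agents expose `save`/`load` on the underlying DRL object; the directory name here is arbitrary):
```
# after the training loop below has run
agent.drl.save("deeprmsa_v1_ddqn")   # stores model, target model and optimizer state
# later, to restore the trained agent:
agent.drl.load("deeprmsa_v1_ddqn")
```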
###Code
# Batch act
obses = envs.reset()
resets = [False for _ in range(len(obses))]
for train_cnt in range(200000):
acts = agent.batch_act(obses)
obses, rews, dones, infos = envs.step(acts)
agent.batch_observe(obses, rews, dones, resets)
# Make mask(not_end). 0 if done/reset, 1 if pass
not_end = np.logical_not(dones)
obses = envs.reset(not_end)
if train_cnt % 20000 == 0:
print(f'[{train_cnt}-th EVAL]')
test_envs.reset()
with agent.drl.eval_mode():
evaluation(agent=agent)
###Output
[0-th EVAL]
|
basics/second/Categorical Data.ipynb | ###Markdown
Categorical DataCategoricals are a pandas data type, which correspond to categorical variables in statistics: a variable which can take on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood types, country affiliations, observation time or ratings via Likert scales. In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, ...) are not possible. All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical order of the values. Documentation: http://pandas.pydata.org/pandas-docs/stable/categorical.html
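As a quick illustration of that last point (a toy sketch, not part of the exercise-file workflow below), an ordered categorical compares and sorts by category order rather than alphabetically:
```
import pandas as pd
likert = pd.api.types.CategoricalDtype(["strongly agree", "agree"], ordered=True)
s = pd.Series(["agree", "strongly agree", "agree"]).astype(likert)
s.min()   # 'strongly agree' -- the order of the categories, not lexical order
```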
###Code
import pandas as pd
import numpy as np
file_name_string = 'C:/Users/Charles Kelly/Desktop/Exercise Files/02_07/Begin/EmployeesWithGrades.xlsx'
employees_df = pd.read_excel(file_name_string, 'Sheet1', index_col=None, na_values=['NA'])
###Output
_____no_output_____
###Markdown
Change data typechange data type for "Grade" column to category; documentation for astype(): http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html
###Code
employees_df["Grade"] = employees_df["Grade"].astype("category")
###Output
_____no_output_____
###Markdown
Rename the categoriesRename the categories to more meaningful names (assigning to Series.cat.categories is inplace)
###Code
employees_df["Grade"].cat.categories = ["excellent", "good", "acceptable", "poor", "unacceptable"]
###Output
_____no_output_____
###Markdown
Values in the data frame have not changed. Tabulate Department, Name, and YearsOfService by Grade.
###Code
employees_df.groupby('Grade').count()
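# A possible follow-up, not in the original notebook: because 'Grade' is now
# categorical, sorting follows the category order defined above
# (excellent -> unacceptable) rather than the alphabetical order of the labels.
# employees_df.sort_values("Grade").head()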
###Output
_____no_output_____ |
tutorials/test_data_quality_at_scale.ipynb | ###Markdown
Test data quality at scale with PyDeequAuthors: Calvin Wang (calviwan@), Chris Ghyzel (cghyzel@), Joan Aoanan (jaoanan@), Veronika Megler (meglerv@) You generally write unit tests for your code, but do you also test your data? Incoming data quality can make or break your machine learning application. Incorrect, missing or malformed data can have a large impact on production systems. Examples of data quality issues are:* Missing values can lead to failures in production system that require non-null values (NullPointerException).* Changes in the distribution of data can lead to unexpected outputs of machine learning models.* Aggregations of incorrect data can lead to wrong business decisions.In this blog post, we introduce PyDeequ, an open source Python wrapper over [Deequ](https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/) (an open source tool developed and used at Amazon). While Deequ is written in Scala, PyDeequ allows you to use its data quality and testing capabilities from Python and PySpark, the language of choice of many data scientists. PyDeequ democratizes and extends the power of Deequ by allowing you to use it alongside the many data science libraries that are available in that language. Furthermore, PyDeequ allows for fluid interface with [Pandas](https://pandas.pydata.org/) DataFrame as opposed to restricting within Spark DataFrames. Deequ allows you to calculate data quality metrics on your dataset, define and verify data quality constraints, and be informed about changes in the data distribution. Instead of implementing checks and verification algorithms on your own, you can focus on describing how your data should look. Deequ supports you by suggesting checks for you. Deequ is implemented on top of [Apache Spark](https://spark.apache.org/) and is designed to scale with large datasets (think billions of rows) that typically live in a distributed filesystem or a data warehouse. PyDeequ gives you access to this capability, but also allows you to use it from the familiar environment of your Python Jupyter notebook. Deequ at Amazon Deequ is being used internally at Amazon for verifying the quality of many large production datasets. Dataset producers can add and edit data quality constraints. The system computes data quality metrics on a regular basis (with every new version of a dataset), verifies constraints defined by dataset producers, and publishes datasets to consumers in case of success. In error cases, dataset publication can be stopped, and producers are notified to take action. Data quality issues do not propagate to consumer data pipelines, reducing their blast radius. Deequ is also used within [Amazon SageMaker Model Monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.htmlmodel-monitor-how-it-works). Now with the availability of PyDeequ, it is finding its way into a broader set of environments - SageMaker Notebooks, AWS Glue, and more. Overview of PyDeequLet’s look at PyDeequ’s main components, and how they relate to Deequ (shown in Figure 1). * Metrics Computation — Deequ computes data quality metrics, that is, statistics such as completeness, maximum, or correlation. Deequ uses Spark to read from sources such as Amazon S3, and to compute metrics through an optimized set of aggregation queries. You have direct access to the raw metrics computed on the data.* Constraint Verification — As a user, you focus on defining a set of data quality constraints to be verified. 
Deequ takes care of deriving the required set of metrics to be computed on the data. Deequ generates a data quality report, which contains the result of the constraint verification.* Constraint Suggestion — You can choose to define your own custom data quality constraints, or use the automated constraint suggestion methods that profile the data to infer useful constraints.* Python wrappers — You can call each of the Deequ functions using Python syntax. The wrappers translate the commands to the underlying Deequ calls, and return their response.Figure 1. Overview of PyDeequ components. Example As a running example, we use [a customer review dataset provided by Amazon](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) on Amazon S3. We have intentionally followed the example in the [Deequ blog](https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/), to show the similarity in functionality and execution. We begin the way many data science projects do: with initial data exploration and assessment in a Jupyter notebook. During the data exploration phase, you’d like to easily answer some basic questions about the data: * Are the fields that are supposed to contain unique values, really unique? Are there fields that are missing values? * How many distinct categories are there in the categorical fields?* Are there correlations between some key features?* If there are two supposedly similar datasets (different categories, or different time periods, say), are they really similar?Then, we’ll show you how to scale this approach to large-scale datasets, using the same code on an EMR cluster. This is how you’d likely do your ML training, and later as you move into a production setting. Setup: Start a PySpark Session in a SageMaker Notebook
###Code
%%bash
# install PyDeequ via pip
pip install pydeequ
from pyspark.sql import SparkSession, Row, DataFrame
import json
import pandas as pd
import sagemaker_pyspark
import pydeequ
classpath = ":".join(sagemaker_pyspark.classpath_jars())
spark = (SparkSession
.builder
.config("spark.driver.extraClassPath", classpath)
.config("spark.jars.packages", pydeequ.deequ_maven_coord)
.config("spark.jars.excludes", pydeequ.f2j_maven_coord)
.getOrCreate())
###Output
_____no_output_____
###Markdown
We will be using the Amazon Product Reviews dataset -- specifically the Electronics subset.
###Code
df = spark.read.parquet("s3a://amazon-reviews-pds/parquet/product_category=Electronics/")
df.printSchema()
###Output
root
|-- marketplace: string (nullable = true)
|-- customer_id: string (nullable = true)
|-- review_id: string (nullable = true)
|-- product_id: string (nullable = true)
|-- product_parent: string (nullable = true)
|-- product_title: string (nullable = true)
|-- star_rating: integer (nullable = true)
|-- helpful_votes: integer (nullable = true)
|-- total_votes: integer (nullable = true)
|-- vine: string (nullable = true)
|-- verified_purchase: string (nullable = true)
|-- review_headline: string (nullable = true)
|-- review_body: string (nullable = true)
|-- review_date: date (nullable = true)
|-- year: integer (nullable = true)
###Markdown
Data Analysis Before we define checks on the data, we want to calculate some statistics on the dataset; we call them metrics. As with Deequ, PyDeequ supports a rich set of metrics (they are described in this blog (https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/) and in this Deequ package (https://github.com/awslabs/deequ/tree/master/src/main/scala/com/amazon/deequ/analyzers)). In the following example, we show how to use the _AnalysisRunner (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/analyzers/runners/AnalysisRunner.scala)_ to capture the metrics you are interested in.
###Code
from pydeequ.analyzers import *
analysisResult = AnalysisRunner(spark) \
.onData(df) \
.addAnalyzer(Size()) \
.addAnalyzer(Completeness("review_id")) \
.addAnalyzer(ApproxCountDistinct("review_id")) \
.addAnalyzer(Mean("star_rating")) \
.addAnalyzer(Compliance("top star_rating", "star_rating >= 4.0")) \
.addAnalyzer(Correlation("total_votes", "star_rating")) \
.addAnalyzer(Correlation("total_votes", "helpful_votes")) \
.run()
analysisResult_df = AnalyzerContext.successMetricsAsDataFrame(spark, analysisResult)
analysisResult_df.show()
###Output
+-----------+--------------------+-------------------+--------------------+
| entity| instance| name| value|
+-----------+--------------------+-------------------+--------------------+
| Column| review_id| Completeness| 1.0|
| Column| review_id|ApproxCountDistinct| 3010972.0|
|Mutlicolumn|total_votes,star_...| Correlation|-0.03451097996538765|
| Dataset| *| Size| 3120938.0|
| Column| star_rating| Mean| 4.036143941340712|
| Column| top star_rating| Compliance| 0.7494070692849394|
|Mutlicolumn|total_votes,helpf...| Correlation| 0.9936463809903863|
+-----------+--------------------+-------------------+--------------------+
###Markdown
You can also get that result in a Pandas Dataframe!Passing `pandas=True` in any call for getting metrics as DataFrames will return the dataframe in Pandas form! We'll see more of it down the line!
###Code
analysisResult_pd_df = AnalyzerContext.successMetricsAsDataFrame(spark, analysisResult, pandas=True)
analysisResult_pd_df
###Output
_____no_output_____
###Markdown
From this, we learn that: * review_id has no missing values and approximately 3,010,972 unique values. * 74.9% of reviews have a star_rating of 4 or higher * total_votes and star_rating are not correlated. * helpful_votes and total_votes are strongly correlated * the average star_rating is 4.0 * The dataset contains 3,120,938 reviews. Define and Run Tests for DataAfter analyzing and understanding the data, we want to verify that the properties we have derived also hold for new versions of the dataset. By defining assertions on the data distribution as part of a data pipeline, we can ensure that every processed dataset is of high quality, and that any application consuming the data can rely on it.For writing tests on data, we start with the _VerificationSuite (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/VerificationSuite.scala)_ and add _Checks (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/checks/Check.scala)_ on attributes of the data. In this example, we test for the following properties of our data:* There are at least 3 million rows in total. * review_id is never NULL.* review_id is unique. * star_rating has a minimum of 1.0 and maximum of 5.0. * marketplace only contains “US”, “UK”, “DE”, “JP”, or “FR”.* year does not contain negative values. This is the code that reflects the previous statements. For information about all available checks, see _this GitHub repository (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/checks/Check.scala)_. You can run this directly in the Spark shell as previously explained:
###Code
from pydeequ.checks import *
from pydeequ.verification import *
check = Check(spark, CheckLevel.Warning, "Amazon Electronic Products Reviews")
checkResult = VerificationSuite(spark) \
.onData(df) \
.addCheck(
check.hasSize(lambda x: x >= 3000000) \
.hasMin("star_rating", lambda x: x == 1.0) \
.hasMax("star_rating", lambda x: x == 5.0) \
.isComplete("review_id") \
.isUnique("review_id") \
.isComplete("marketplace") \
.isContainedIn("marketplace", ["US", "UK", "DE", "JP", "FR"]) \
.isNonNegative("year")) \
.run()
print(f"Verification Run Status: {checkResult.status}")
checkResult_df = VerificationResult.checkResultsAsDataFrame(spark, checkResult, pandas=True)
checkResult_df
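# A possible follow-up (column names as produced by checkResultsAsDataFrame):
# keep only the constraints that did not pass.
# checkResult_df[checkResult_df["constraint_status"] != "Success"]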
###Output
Python Callback server started!
Verification Run Status: Warning
###Markdown
After calling run(), PyDeequ translates your test description into Deequ, which in turn translates it into a series of Spark jobs which are executed to compute metrics on the data. Afterwards, it invokes your assertion functions (e.g., lambda x: x == 1.0 for the minimum star-rating check) on these metrics to see if the constraints hold on the data. Interestingly, the review_id column is not unique, which resulted in a failure of the check on uniqueness. We can also look at all the metrics that Deequ computed for this check by running:
###Code
checkResult_df = VerificationResult.successMetricsAsDataFrame(spark, checkResult, pandas=True)
checkResult_df
###Output
_____no_output_____
###Markdown
Automated Constraint Suggestion If you own a large number of datasets or if your dataset has many columns, it may be challenging for you to manually define appropriate constraints. Deequ can automatically suggest useful constraints based on the data distribution. Deequ first runs a data profiling method and then applies a set of rules on the result. For more information about how to run a data profiling method, see _this GitHub repository. (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/examples/data_profiling_example.md)_
###Code
from pydeequ.suggestions import *
suggestionResult = ConstraintSuggestionRunner(spark) \
.onData(df) \
.addConstraintRule(DEFAULT()) \
.run()
# Constraint Suggestions in JSON format
print(json.dumps(suggestionResult, indent=2))
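# The result is a plain Python dict; one way to pull out just the suggested
# check snippets (keys as shown in the JSON output below):
# snippets = [s["code_for_constraint"] for s in suggestionResult["constraint_suggestions"]]
# print("\n".join(snippets))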
###Output
{
"constraint_suggestions": [
{
"constraint_name": "CompletenessConstraint(Completeness(review_id,None))",
"column_name": "review_id",
"current_value": "Completeness: 1.0",
"description": "'review_id' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"review_id\")"
},
{
"constraint_name": "UniquenessConstraint(Uniqueness(List(review_id),None))",
"column_name": "review_id",
"current_value": "ApproxDistinctness: 0.9647650802419017",
"description": "'review_id' is unique",
"suggesting_rule": "UniqueIfApproximatelyUniqueRule()",
"rule_description": "If the ratio of approximate num distinct values in a column is close to the number of records (within the error of the HLL sketch), we suggest a UNIQUE constraint",
"code_for_constraint": ".isUnique(\"review_id\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(customer_id,None))",
"column_name": "customer_id",
"current_value": "Completeness: 1.0",
"description": "'customer_id' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"customer_id\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('customer_id' has no negative values,customer_id >= 0,None))",
"column_name": "customer_id",
"current_value": "Minimum: 10005.0",
"description": "'customer_id' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"customer_id\")"
},
{
"constraint_name": "AnalysisBasedConstraint(DataType(customer_id,None),<function1>,Some(<function1>),None)",
"column_name": "customer_id",
"current_value": "DataType: Integral",
"description": "'customer_id' has type Integral",
"suggesting_rule": "RetainTypeRule()",
"rule_description": "If we detect a non-string type, we suggest a type constraint",
"code_for_constraint": ".hasDataType(\"customer_id\", ConstrainableDataTypes.Integral)"
},
{
"constraint_name": "CompletenessConstraint(Completeness(review_date,None))",
"column_name": "review_date",
"current_value": "Completeness: 1.0",
"description": "'review_date' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"review_date\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(helpful_votes,None))",
"column_name": "helpful_votes",
"current_value": "Completeness: 1.0",
"description": "'helpful_votes' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"helpful_votes\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('helpful_votes' has no negative values,helpful_votes >= 0,None))",
"column_name": "helpful_votes",
"current_value": "Minimum: 0.0",
"description": "'helpful_votes' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"helpful_votes\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(star_rating,None))",
"column_name": "star_rating",
"current_value": "Completeness: 1.0",
"description": "'star_rating' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"star_rating\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('star_rating' has no negative values,star_rating >= 0,None))",
"column_name": "star_rating",
"current_value": "Minimum: 1.0",
"description": "'star_rating' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"star_rating\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(year,None))",
"column_name": "year",
"current_value": "Completeness: 1.0",
"description": "'year' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"year\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('year' has no negative values,year >= 0,None))",
"column_name": "year",
"current_value": "Minimum: 1999.0",
"description": "'year' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"year\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(product_title,None))",
"column_name": "product_title",
"current_value": "Completeness: 1.0",
"description": "'product_title' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"product_title\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(review_headline,None))",
"column_name": "review_headline",
"current_value": "Completeness: 0.9999987183340393",
"description": "'review_headline' has less than 1% missing values",
"suggesting_rule": "RetainCompletenessRule()",
"rule_description": "If a column is incomplete in the sample, we model its completeness as a binomial variable, estimate a confidence interval and use this to define a lower bound for the completeness",
"code_for_constraint": ".hasCompleteness(\"review_headline\", lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(product_id,None))",
"column_name": "product_id",
"current_value": "Completeness: 1.0",
"description": "'product_id' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"product_id\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(total_votes,None))",
"column_name": "total_votes",
"current_value": "Completeness: 1.0",
"description": "'total_votes' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"total_votes\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('total_votes' has no negative values,total_votes >= 0,None))",
"column_name": "total_votes",
"current_value": "Minimum: 0.0",
"description": "'total_votes' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"total_votes\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(product_parent,None))",
"column_name": "product_parent",
"current_value": "Completeness: 1.0",
"description": "'product_parent' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"product_parent\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('product_parent' has no negative values,product_parent >= 0,None))",
"column_name": "product_parent",
"current_value": "Minimum: 6478.0",
"description": "'product_parent' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"product_parent\")"
},
{
"constraint_name": "AnalysisBasedConstraint(DataType(product_parent,None),<function1>,Some(<function1>),None)",
"column_name": "product_parent",
"current_value": "DataType: Integral",
"description": "'product_parent' has type Integral",
"suggesting_rule": "RetainTypeRule()",
"rule_description": "If we detect a non-string type, we suggest a type constraint",
"code_for_constraint": ".hasDataType(\"product_parent\", ConstrainableDataTypes.Integral)"
},
{
"constraint_name": "CompletenessConstraint(Completeness(review_body,None))",
"column_name": "review_body",
"current_value": "Completeness: 0.9999724441818453",
"description": "'review_body' has less than 1% missing values",
"suggesting_rule": "RetainCompletenessRule()",
"rule_description": "If a column is incomplete in the sample, we model its completeness as a binomial variable, estimate a confidence interval and use this to define a lower bound for the completeness",
"code_for_constraint": ".hasCompleteness(\"review_body\", lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('vine' has value range 'N', 'Y',`vine` IN ('N', 'Y'),None))",
"column_name": "vine",
"current_value": "Compliance: 1",
"description": "'vine' has value range 'N', 'Y'",
"suggesting_rule": "CategoricalRangeRule()",
"rule_description": "If we see a categorical range for a column, we suggest an IS IN (...) constraint",
"code_for_constraint": ".isContainedIn(\"vine\", [\"N\", \"Y\"])"
},
{
"constraint_name": "CompletenessConstraint(Completeness(vine,None))",
"column_name": "vine",
"current_value": "Completeness: 1.0",
"description": "'vine' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"vine\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('vine' has value range 'N' for at least 99.0% of values,`vine` IN ('N'),None))",
"column_name": "vine",
"current_value": "Compliance: 0.9939271462617969",
"description": "'vine' has value range 'N' for at least 99.0% of values",
"suggesting_rule": "FractionalCategoricalRangeRule(0.9)",
"rule_description": "If we see a categorical range for most values in a column, we suggest an IS IN (...) constraint that should hold for most values",
"code_for_constraint": ".isContainedIn(\"vine\", [\"N\"], lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('marketplace' has value range 'US', 'UK', 'DE', 'JP', 'FR',`marketplace` IN ('US', 'UK', 'DE', 'JP', 'FR'),None))",
"column_name": "marketplace",
"current_value": "Compliance: 1",
"description": "'marketplace' has value range 'US', 'UK', 'DE', 'JP', 'FR'",
"suggesting_rule": "CategoricalRangeRule()",
"rule_description": "If we see a categorical range for a column, we suggest an IS IN (...) constraint",
"code_for_constraint": ".isContainedIn(\"marketplace\", [\"US\", \"UK\", \"DE\", \"JP\", \"FR\"])"
},
{
"constraint_name": "CompletenessConstraint(Completeness(marketplace,None))",
"column_name": "marketplace",
"current_value": "Completeness: 1.0",
"description": "'marketplace' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"marketplace\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('marketplace' has value range 'US' for at least 99.0% of values,`marketplace` IN ('US'),None))",
"column_name": "marketplace",
"current_value": "Compliance: 0.9949982985884372",
"description": "'marketplace' has value range 'US' for at least 99.0% of values",
"suggesting_rule": "FractionalCategoricalRangeRule(0.9)",
"rule_description": "If we see a categorical range for most values in a column, we suggest an IS IN (...) constraint that should hold for most values",
"code_for_constraint": ".isContainedIn(\"marketplace\", [\"US\"], lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('verified_purchase' has value range 'Y', 'N',`verified_purchase` IN ('Y', 'N'),None))",
"column_name": "verified_purchase",
"current_value": "Compliance: 1",
"description": "'verified_purchase' has value range 'Y', 'N'",
"suggesting_rule": "CategoricalRangeRule()",
"rule_description": "If we see a categorical range for a column, we suggest an IS IN (...) constraint",
"code_for_constraint": ".isContainedIn(\"verified_purchase\", [\"Y\", \"N\"])"
},
{
"constraint_name": "CompletenessConstraint(Completeness(verified_purchase,None))",
"column_name": "verified_purchase",
"current_value": "Completeness: 1.0",
"description": "'verified_purchase' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"verified_purchase\")"
}
]
}
###Markdown
Test data quality at scale with PyDeequAuthors: Calvin Wang (calviwan@), Chris Ghyzel (cghyzel@), Joan Aoanan (jaoanan@), Veronika Megler (meglerv@) You generally write unit tests for your code, but do you also test your data? Incoming data quality can make or break your machine learning application. Incorrect, missing or malformed data can have a large impact on production systems. Examples of data quality issues are:* Missing values can lead to failures in production system that require non-null values (NullPointerException).* Changes in the distribution of data can lead to unexpected outputs of machine learning models.* Aggregations of incorrect data can lead to wrong business decisions.In this blog post, we introduce PyDeequ, an open source Python wrapper over [Deequ](https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/) (an open source tool developed and used at Amazon). While Deequ is written in Scala, PyDeequ allows you to use its data quality and testing capabilities from Python and PySpark, the language of choice of many data scientists. PyDeequ democratizes and extends the power of Deequ by allowing you to use it alongside the many data science libraries that are available in that language. Furthermore, PyDeequ allows for fluid interface with [Pandas](https://pandas.pydata.org/) DataFrame as opposed to restricting within Spark DataFrames. Deequ allows you to calculate data quality metrics on your dataset, define and verify data quality constraints, and be informed about changes in the data distribution. Instead of implementing checks and verification algorithms on your own, you can focus on describing how your data should look. Deequ supports you by suggesting checks for you. Deequ is implemented on top of [Apache Spark](https://spark.apache.org/) and is designed to scale with large datasets (think billions of rows) that typically live in a distributed filesystem or a data warehouse. PyDeequ gives you access to this capability, but also allows you to use it from the familiar environment of your Python Jupyter notebook. Deequ at Amazon Deequ is being used internally at Amazon for verifying the quality of many large production datasets. Dataset producers can add and edit data quality constraints. The system computes data quality metrics on a regular basis (with every new version of a dataset), verifies constraints defined by dataset producers, and publishes datasets to consumers in case of success. In error cases, dataset publication can be stopped, and producers are notified to take action. Data quality issues do not propagate to consumer data pipelines, reducing their blast radius. Deequ is also used within [Amazon SageMaker Model Monitor](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.htmlmodel-monitor-how-it-works). Now with the availability of PyDeequ, it is finding its way into a broader set of environments - SageMaker Notebooks, AWS Glue, and more. Overview of PyDeequLet’s look at PyDeequ’s main components, and how they relate to Deequ (shown in Figure 1). * Metrics Computation — Deequ computes data quality metrics, that is, statistics such as completeness, maximum, or correlation. Deequ uses Spark to read from sources such as Amazon S3, and to compute metrics through an optimized set of aggregation queries. You have direct access to the raw metrics computed on the data.* Constraint Verification — As a user, you focus on defining a set of data quality constraints to be verified. 
Deequ takes care of deriving the required set of metrics to be computed on the data. Deequ generates a data quality report, which contains the result of the constraint verification.* Constraint Suggestion — You can choose to define your own custom data quality constraints, or use the automated constraint suggestion methods that profile the data to infer useful constraints.* Python wrappers — You can call each of the Deequ functions using Python syntax. The wrappers translate the commands to the underlying Deequ calls, and return their response.Figure 1. Overview of PyDeequ components. Example As a running example, we use [a customer review dataset provided by Amazon](https://s3.amazonaws.com/amazon-reviews-pds/readme.html) on Amazon S3. We have intentionally followed the example in the [Deequ blog](https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/), to show the similarity in functionality and execution. We begin the way many data science projects do: with initial data exploration and assessment in a Jupyter notebook. During the data exploration phase, you’d like to easily answer some basic questions about the data: * Are the fields that are supposed to contain unique values, really unique? Are there fields that are missing values? * How many distinct categories are there in the categorical fields?* Are there correlations between some key features?* If there are two supposedly similar datasets (different categories, or different time periods, say), are they really similar?Then, we’ll show you how to scale this approach to large-scale datasets, using the same code on an EMR cluster. This is how you’d likely do your ML training, and later as you move into a production setting. Setup: Start a PySpark Session in a SageMaker Notebook
###Code
%%bash
# install PyDeequ via pip
! pip install pydeequ
from pyspark.sql import SparkSession, Row, DataFrame
import json
import pandas as pd
import sagemaker_pyspark
import pydeequ
classpath = ":".join(sagemaker_pyspark.classpath_jars())
spark = (SparkSession
.builder
.config("spark.driver.extraClassPath", classpath)
.config("spark.jars.packages", pydeequ.deequ_maven_coord)
.config("spark.jars.excludes", pydeequ.f2j_maven_coord)
.getOrCreate())
###Output
_____no_output_____
###Markdown
We will be using the Amazon Product Reviews dataset -- specifically the Electronics subset.
###Code
df = spark.read.parquet("s3a://amazon-reviews-pds/parquet/product_category=Electronics/")
df.printSchema()
###Output
root
|-- marketplace: string (nullable = true)
|-- customer_id: string (nullable = true)
|-- review_id: string (nullable = true)
|-- product_id: string (nullable = true)
|-- product_parent: string (nullable = true)
|-- product_title: string (nullable = true)
|-- star_rating: integer (nullable = true)
|-- helpful_votes: integer (nullable = true)
|-- total_votes: integer (nullable = true)
|-- vine: string (nullable = true)
|-- verified_purchase: string (nullable = true)
|-- review_headline: string (nullable = true)
|-- review_body: string (nullable = true)
|-- review_date: date (nullable = true)
|-- year: integer (nullable = true)
###Markdown
Data Analysis Before we define checks on the data, we want to calculate some statistics on the dataset; we call them metrics. As with Deequ, PyDeequ supports a rich set of metrics (they are described in this blog (https://aws.amazon.com/blogs/big-data/test-data-quality-at-scale-with-deequ/) and in this Deequ package (https://github.com/awslabs/deequ/tree/master/src/main/scala/com/amazon/deequ/analyzers)). In the following example, we show how to use the _AnalysisRunner (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/analyzers/runners/AnalysisRunner.scala)_ to capture the metrics you are interested in.
###Code
from pydeequ.analyzers import *
analysisResult = AnalysisRunner(spark) \
.onData(df) \
.addAnalyzer(Size()) \
.addAnalyzer(Completeness("review_id")) \
.addAnalyzer(ApproxCountDistinct("review_id")) \
.addAnalyzer(Mean("star_rating")) \
.addAnalyzer(Compliance("top star_rating", "star_rating >= 4.0")) \
.addAnalyzer(Correlation("total_votes", "star_rating")) \
.addAnalyzer(Correlation("total_votes", "helpful_votes")) \
.run()
analysisResult_df = AnalyzerContext.successMetricsAsDataFrame(spark, analysisResult)
analysisResult_df.show()
###Output
+-----------+--------------------+-------------------+--------------------+
| entity| instance| name| value|
+-----------+--------------------+-------------------+--------------------+
| Column| review_id| Completeness| 1.0|
| Column| review_id|ApproxCountDistinct| 3010972.0|
|Mutlicolumn|total_votes,star_...| Correlation|-0.03451097996538765|
| Dataset| *| Size| 3120938.0|
| Column| star_rating| Mean| 4.036143941340712|
| Column| top star_rating| Compliance| 0.7494070692849394|
|Mutlicolumn|total_votes,helpf...| Correlation| 0.9936463809903863|
+-----------+--------------------+-------------------+--------------------+
###Markdown
You can also get that result as a Pandas DataFrame! Passing `pandas=True` in any call for getting metrics as DataFrames will return the DataFrame in Pandas form. We'll see more of it down the line!
###Code
analysisResult_pd_df = AnalyzerContext.successMetricsAsDataFrame(spark, analysisResult, pandas=True)
analysisResult_pd_df
###Output
_____no_output_____
###Markdown
From this, we learn that: * review_id has no missing values and approximately 3,010,972 unique values. * 74.9% of reviews have a star_rating of 4 or higher * total_votes and star_rating are not correlated. * helpful_votes and total_votes are strongly correlated * the average star_rating is 4.0 * The dataset contains 3,120,938 reviews. Define and Run Tests for DataAfter analyzing and understanding the data, we want to verify that the properties we have derived also hold for new versions of the dataset. By defining assertions on the data distribution as part of a data pipeline, we can ensure that every processed dataset is of high quality, and that any application consuming the data can rely on it.For writing tests on data, we start with the _VerificationSuite (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/VerificationSuite.scala)_ and add _Checks (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/checks/Check.scala)_ on attributes of the data. In this example, we test for the following properties of our data:* There are at least 3 million rows in total. * review_id is never NULL.* review_id is unique. * star_rating has a minimum of 1.0 and maximum of 5.0. * marketplace only contains “US”, “UK”, “DE”, “JP”, or “FR”.* year does not contain negative values. This is the code that reflects the previous statements. For information about all available checks, see _this GitHub repository (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/checks/Check.scala)_. You can run this directly in the Spark shell as previously explained:
###Code
from pydeequ.checks import *
from pydeequ.verification import *
check = Check(spark, CheckLevel.Warning, "Amazon Electronic Products Reviews")
checkResult = VerificationSuite(spark) \
.onData(df) \
.addCheck(
check.hasSize(lambda x: x >= 3000000) \
.hasMin("star_rating", lambda x: x == 1.0) \
.hasMax("star_rating", lambda x: x == 5.0) \
.isComplete("review_id") \
.isUnique("review_id") \
.isComplete("marketplace") \
.isContainedIn("marketplace", ["US", "UK", "DE", "JP", "FR"]) \
.isNonNegative("year")) \
.run()
print(f"Verification Run Status: {checkResult.status}")
checkResult_df = VerificationResult.checkResultsAsDataFrame(spark, checkResult, pandas=True)
checkResult_df
###Output
Python Callback server started!
Verification Run Status: Warning
###Markdown
After calling run(), PyDeequ translates your test description into Deequ, which in turn translates it into a series of Spark jobs that are executed to compute metrics on the data. Afterwards, it invokes your assertion functions (e.g., lambda x: x == 1.0 for the minimum star-rating check) on these metrics to see if the constraints hold on the data. Interestingly, the review_id column is not unique, which resulted in a failure of the check on uniqueness. We can also look at all the metrics that Deequ computed for this check by running:
###Code
checkResult_df = VerificationResult.successMetricsAsDataFrame(spark, checkResult, pandas=True)
checkResult_df
###Output
_____no_output_____
###Markdown
Automated Constraint Suggestion If you own a large number of datasets or if your dataset has many columns, it may be challenging for you to manually define appropriate constraints. Deequ can automatically suggest useful constraints based on the data distribution. Deequ first runs a data profiling method and then applies a set of rules on the result. For more information about how to run a data profiling method, see _this GitHub repository. (https://github.com/awslabs/deequ/blob/master/src/main/scala/com/amazon/deequ/examples/data_profiling_example.md)_
###Code
from pydeequ.suggestions import *
suggestionResult = ConstraintSuggestionRunner(spark) \
.onData(df) \
.addConstraintRule(DEFAULT()) \
.run()
# Constraint Suggestions in JSON format
print(json.dumps(suggestionResult, indent=2))
###Output
{
"constraint_suggestions": [
{
"constraint_name": "CompletenessConstraint(Completeness(review_id,None))",
"column_name": "review_id",
"current_value": "Completeness: 1.0",
"description": "'review_id' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"review_id\")"
},
{
"constraint_name": "UniquenessConstraint(Uniqueness(List(review_id),None))",
"column_name": "review_id",
"current_value": "ApproxDistinctness: 0.9647650802419017",
"description": "'review_id' is unique",
"suggesting_rule": "UniqueIfApproximatelyUniqueRule()",
"rule_description": "If the ratio of approximate num distinct values in a column is close to the number of records (within the error of the HLL sketch), we suggest a UNIQUE constraint",
"code_for_constraint": ".isUnique(\"review_id\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(customer_id,None))",
"column_name": "customer_id",
"current_value": "Completeness: 1.0",
"description": "'customer_id' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"customer_id\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('customer_id' has no negative values,customer_id >= 0,None))",
"column_name": "customer_id",
"current_value": "Minimum: 10005.0",
"description": "'customer_id' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"customer_id\")"
},
{
"constraint_name": "AnalysisBasedConstraint(DataType(customer_id,None),<function1>,Some(<function1>),None)",
"column_name": "customer_id",
"current_value": "DataType: Integral",
"description": "'customer_id' has type Integral",
"suggesting_rule": "RetainTypeRule()",
"rule_description": "If we detect a non-string type, we suggest a type constraint",
"code_for_constraint": ".hasDataType(\"customer_id\", ConstrainableDataTypes.Integral)"
},
{
"constraint_name": "CompletenessConstraint(Completeness(review_date,None))",
"column_name": "review_date",
"current_value": "Completeness: 1.0",
"description": "'review_date' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"review_date\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(helpful_votes,None))",
"column_name": "helpful_votes",
"current_value": "Completeness: 1.0",
"description": "'helpful_votes' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"helpful_votes\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('helpful_votes' has no negative values,helpful_votes >= 0,None))",
"column_name": "helpful_votes",
"current_value": "Minimum: 0.0",
"description": "'helpful_votes' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"helpful_votes\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(star_rating,None))",
"column_name": "star_rating",
"current_value": "Completeness: 1.0",
"description": "'star_rating' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"star_rating\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('star_rating' has no negative values,star_rating >= 0,None))",
"column_name": "star_rating",
"current_value": "Minimum: 1.0",
"description": "'star_rating' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"star_rating\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(year,None))",
"column_name": "year",
"current_value": "Completeness: 1.0",
"description": "'year' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"year\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('year' has no negative values,year >= 0,None))",
"column_name": "year",
"current_value": "Minimum: 1999.0",
"description": "'year' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"year\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(product_title,None))",
"column_name": "product_title",
"current_value": "Completeness: 1.0",
"description": "'product_title' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"product_title\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(review_headline,None))",
"column_name": "review_headline",
"current_value": "Completeness: 0.9999987183340393",
"description": "'review_headline' has less than 1% missing values",
"suggesting_rule": "RetainCompletenessRule()",
"rule_description": "If a column is incomplete in the sample, we model its completeness as a binomial variable, estimate a confidence interval and use this to define a lower bound for the completeness",
"code_for_constraint": ".hasCompleteness(\"review_headline\", lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(product_id,None))",
"column_name": "product_id",
"current_value": "Completeness: 1.0",
"description": "'product_id' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"product_id\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(total_votes,None))",
"column_name": "total_votes",
"current_value": "Completeness: 1.0",
"description": "'total_votes' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"total_votes\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('total_votes' has no negative values,total_votes >= 0,None))",
"column_name": "total_votes",
"current_value": "Minimum: 0.0",
"description": "'total_votes' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"total_votes\")"
},
{
"constraint_name": "CompletenessConstraint(Completeness(product_parent,None))",
"column_name": "product_parent",
"current_value": "Completeness: 1.0",
"description": "'product_parent' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"product_parent\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('product_parent' has no negative values,product_parent >= 0,None))",
"column_name": "product_parent",
"current_value": "Minimum: 6478.0",
"description": "'product_parent' has no negative values",
"suggesting_rule": "NonNegativeNumbersRule()",
"rule_description": "If we see only non-negative numbers in a column, we suggest a corresponding constraint",
"code_for_constraint": ".isNonNegative(\"product_parent\")"
},
{
"constraint_name": "AnalysisBasedConstraint(DataType(product_parent,None),<function1>,Some(<function1>),None)",
"column_name": "product_parent",
"current_value": "DataType: Integral",
"description": "'product_parent' has type Integral",
"suggesting_rule": "RetainTypeRule()",
"rule_description": "If we detect a non-string type, we suggest a type constraint",
"code_for_constraint": ".hasDataType(\"product_parent\", ConstrainableDataTypes.Integral)"
},
{
"constraint_name": "CompletenessConstraint(Completeness(review_body,None))",
"column_name": "review_body",
"current_value": "Completeness: 0.9999724441818453",
"description": "'review_body' has less than 1% missing values",
"suggesting_rule": "RetainCompletenessRule()",
"rule_description": "If a column is incomplete in the sample, we model its completeness as a binomial variable, estimate a confidence interval and use this to define a lower bound for the completeness",
"code_for_constraint": ".hasCompleteness(\"review_body\", lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('vine' has value range 'N', 'Y',`vine` IN ('N', 'Y'),None))",
"column_name": "vine",
"current_value": "Compliance: 1",
"description": "'vine' has value range 'N', 'Y'",
"suggesting_rule": "CategoricalRangeRule()",
"rule_description": "If we see a categorical range for a column, we suggest an IS IN (...) constraint",
"code_for_constraint": ".isContainedIn(\"vine\", [\"N\", \"Y\"])"
},
{
"constraint_name": "CompletenessConstraint(Completeness(vine,None))",
"column_name": "vine",
"current_value": "Completeness: 1.0",
"description": "'vine' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"vine\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('vine' has value range 'N' for at least 99.0% of values,`vine` IN ('N'),None))",
"column_name": "vine",
"current_value": "Compliance: 0.9939271462617969",
"description": "'vine' has value range 'N' for at least 99.0% of values",
"suggesting_rule": "FractionalCategoricalRangeRule(0.9)",
"rule_description": "If we see a categorical range for most values in a column, we suggest an IS IN (...) constraint that should hold for most values",
"code_for_constraint": ".isContainedIn(\"vine\", [\"N\"], lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('marketplace' has value range 'US', 'UK', 'DE', 'JP', 'FR',`marketplace` IN ('US', 'UK', 'DE', 'JP', 'FR'),None))",
"column_name": "marketplace",
"current_value": "Compliance: 1",
"description": "'marketplace' has value range 'US', 'UK', 'DE', 'JP', 'FR'",
"suggesting_rule": "CategoricalRangeRule()",
"rule_description": "If we see a categorical range for a column, we suggest an IS IN (...) constraint",
"code_for_constraint": ".isContainedIn(\"marketplace\", [\"US\", \"UK\", \"DE\", \"JP\", \"FR\"])"
},
{
"constraint_name": "CompletenessConstraint(Completeness(marketplace,None))",
"column_name": "marketplace",
"current_value": "Completeness: 1.0",
"description": "'marketplace' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"marketplace\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('marketplace' has value range 'US' for at least 99.0% of values,`marketplace` IN ('US'),None))",
"column_name": "marketplace",
"current_value": "Compliance: 0.9949982985884372",
"description": "'marketplace' has value range 'US' for at least 99.0% of values",
"suggesting_rule": "FractionalCategoricalRangeRule(0.9)",
"rule_description": "If we see a categorical range for most values in a column, we suggest an IS IN (...) constraint that should hold for most values",
"code_for_constraint": ".isContainedIn(\"marketplace\", [\"US\"], lambda x: x >= 0.99, \"It should be above 0.99!\")"
},
{
"constraint_name": "ComplianceConstraint(Compliance('verified_purchase' has value range 'Y', 'N',`verified_purchase` IN ('Y', 'N'),None))",
"column_name": "verified_purchase",
"current_value": "Compliance: 1",
"description": "'verified_purchase' has value range 'Y', 'N'",
"suggesting_rule": "CategoricalRangeRule()",
"rule_description": "If we see a categorical range for a column, we suggest an IS IN (...) constraint",
"code_for_constraint": ".isContainedIn(\"verified_purchase\", [\"Y\", \"N\"])"
},
{
"constraint_name": "CompletenessConstraint(Completeness(verified_purchase,None))",
"column_name": "verified_purchase",
"current_value": "Completeness: 1.0",
"description": "'verified_purchase' is not null",
"suggesting_rule": "CompleteIfCompleteRule()",
"rule_description": "If a column is complete in the sample, we suggest a NOT NULL constraint",
"code_for_constraint": ".isComplete(\"verified_purchase\")"
}
]
}
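###Markdown
Each suggestion above ships with a ready-to-use `code_for_constraint` snippet. As a small illustrative sketch (it assumes the `suggestionResult` dictionary returned by the suggestion run is still in scope), the cell below simply collects those snippets so the ones you trust can be copied into a `Check`, exactly as in the verification step earlier.
###Code
# Sketch: pull the suggested constraint snippets out of the suggestion result.
# `suggestionResult` is the dictionary produced by the ConstraintSuggestionRunner above.
suggested_snippets = [
    s["code_for_constraint"] for s in suggestionResult["constraint_suggestions"]
]
print(f"{len(suggested_snippets)} constraints suggested, for example:")
for snippet in suggested_snippets[:5]:
    print(snippet)
###Output
_____no_output_____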
|
EDA&3models.ipynb | ###Markdown
COVID-19 World Vaccination Progress
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import itertools
import math
import pycaret.regression as caret
from pycaret.time_series import *
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller
import statsmodels
import statsmodels.api as sm
import scipy.stats as stats
from fbprophet import Prophet
from fbprophet.diagnostics import cross_validation
from fbprophet.diagnostics import performance_metrics
from fbprophet.plot import plot_cross_validation_metric
import warnings
import datetime
from datetime import date, datetime, timedelta
from typing import List
from numpy import log
sns.set()
###Output
_____no_output_____
###Markdown
EDA
###Code
df = pd.read_csv("/Users/luomingni/Desktop/MS/first term/5220_SML/Project/archive/country_vaccinations copy.csv")
df.head()
df.shape
df.info()
countries = df.country.unique()
for country in countries:
print(country,end = ":\n")
print(df[df.country == country]['vaccines'].unique()[0] , end = "\n"+"_"*20+"\n\n")
dict_vac_percentages = {}
iso_list = df.iso_code.unique()
for iso_code in iso_list:
dict_vac_percentages[iso_code]=df[df.iso_code==iso_code]['people_fully_vaccinated_per_hundred'].max()
df_vac_percentages = pd.DataFrame()
df_vac_percentages['iso_code'] = dict_vac_percentages.keys()
df_vac_percentages['fully vaccinated percentage'] = dict_vac_percentages.values()
df_vac_percentages['country'] = countries
map_full_percentage = px.choropleth(df_vac_percentages, locations="iso_code" , color="fully vaccinated percentage"
, hover_name="country" , color_continuous_scale=px.colors.sequential.YlGn)
map_full_percentage.show()
plt.subplots(figsize=(8, 8))
sns.heatmap(df.corr(), annot=True, square=True)
plt.show()
###Output
_____no_output_____
###Markdown
Methods
###Code
class DataModeler:
def __init__(self):
pass
def _parametrized(dec):
def layer(*args, **kwargs):
def repl(f):
return dec(f, *args, **kwargs)
return repl
return layer
@staticmethod
@_parametrized
def logger(f, job):
def aux(self, *xs, **kws):
print(job + " - ", end='\t')
res = f(self, *xs, **kws)
print("Completed")
return res
return aux
###Output
_____no_output_____
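###Markdown
`DataModeler` only provides the parametrized `logger` decorator: it prints the given job label before the wrapped method runs and "Completed" once it returns. The throwaway `DemoModeler` class below is just an illustrative sketch of how the decorator is used; the pipeline classes that follow apply it in exactly the same way.
###Code
# Illustrative sketch: how the parametrized logger decorator wraps a method.
# `DemoModeler` is a throwaway class used only for this demonstration.
class DemoModeler(DataModeler):
    @DataModeler.logger("Squaring a number")
    def square(self, x):
        return x ** 2

# Prints the job label, then "Completed", and returns 16.
DemoModeler().square(4)
###Output
_____no_output_____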
###Markdown
Preprocessing
###Code
class DataPreprocessor(DataModeler):
"Wrap the operations of data preprocessing."
def __init__(self):
super(DataPreprocessor, self).__init__()
@DataModeler.logger("Transforming feature type")
def _feature_transform(self, df:pd.DataFrame) -> List[pd.DataFrame]:
"""
Transform data type of some columns.
@param df: raw data
return: processed data
"""
df['date'] = pd.to_datetime(df['date'],format="%Y-%m-%d")
return df
@DataModeler.logger("Counting missing rate")
def missing_value_counter(self,df:pd.DataFrame, cols:List[str]) -> pd.DataFrame:
"""
Count missing values in specified columns.
@param df: dataframe
@param cols: columns to be calculated
return: summary information
"""
res = pd.DataFrame(cols, columns=['Feature'])
na_cnts = [sum(df[col].isna()) for col in cols]
res['NA Count'] = na_cnts
res['NA Rate'] = res['NA Count'] / df.shape[0]
res = res[res['NA Count'] != 0]
res = res.sort_values(by='NA Count', ascending=False).reset_index(drop=True)
return res
@DataModeler.logger("Checking day interval")
def check_day_interval(self,d0:date,d1:date):
"""
Get the number of days between two dates, to check for missing dates.
"""
#d0 = date(2020,12,20)
#d1 = date(2021 , 10 , 26)
delta = d1 - d0
days = delta.days + 1
print(days) #no missing value in 'date'! nice!
return days
@DataModeler.logger("Checking missing value")
def missing_value(self,data):
return data.isna().sum()
@DataModeler.logger("Filling missing values with the previous day's value")
def fill_missing_value(self,data,target:str):
"""
fill missing value by the value of last day
"""
for i in data[target][data[target].isna() == True].index:
data.loc[i, target] = data.loc[i - 1, target] # use .loc so the fill is actually written back (avoids chained-assignment pitfalls)
return data
@DataModeler.logger("Filtering useful columns")
def _filter_data(self, df:pd.DataFrame) -> List[pd.DataFrame]:
"""
Select useful variables for the model
@param df: raw data
return: processed data
"""
df_filtered = df[['date','daily_vaccinations']]
return df_filtered
@DataModeler.logger("Filling missing value")
def _fill_missing_value(self, df:pd.DataFrame) -> pd.DataFrame:
"""
Fill missing values in input data.
param df: dataframe
return: processed dataframe
"""
res = df.fillna(0.0)
return res
@DataModeler.logger("Sort data by date")
def _sort_data(self, df:pd.DataFrame) -> List[pd.DataFrame]:
"""
Sort data by date
@param df: raw data
return: processed data
"""
df = df.sort_values(by='date')
return df
def preprocess(self, df:pd.DataFrame) -> pd.DataFrame:
"""
Preprocess raw data and modify the fields to get required columns.
@param df: raw data
return: combined clean vaccination data
"""
df = self._feature_transform(df)
df = self._filter_data(df)
df = self._fill_missing_value(df)
df = self._sort_data(df)
df = df.groupby(by=['date']).sum().reset_index()
df['total_vaccinations'] = df['daily_vaccinations'].cumsum()
df['percentage_people_vaccinated'] = (df['total_vaccinations']/(8032669179*2))*100 # doses given as a share of 8,032,669,179 people x 2 (presumably two doses per person)
return df
###Output
_____no_output_____
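###Markdown
A minimal usage sketch of the preprocessing pipeline on the raw `df` loaded in the EDA section. The name `df_world_prep` is just a placeholder; the `world_filtered_data.csv` file read further below is assumed to have been produced by a step like this.
###Code
# Sketch: run the preprocessing pipeline on the raw country_vaccinations frame.
preprocessor = DataPreprocessor()
df_world_prep = preprocessor.preprocess(df)   # daily totals, cumulative sum and % vaccinated
df_world_prep.head()
###Output
_____no_output_____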
###Markdown
Feature Engineering
###Code
class FeatureEngineer(DataModeler):
"Wrap the operations of feature engineering."
def __init__(self):
super(FeatureEngineer, self).__init__()
@DataModeler.logger("Generating date features")
def _gen_date_feats(self, data1:pd.DataFrame):
"""
Extract date features from time of data
return: dataframe with new features
"""
data1['Date'] = pd.to_datetime(data1['Date'])
# Keep 'Date' as a datetime column; round-tripping through '%d.%m.%Y' strings can silently swap day and month when re-parsed below.
data1['year'] = pd.DatetimeIndex(data1['Date']).year
data1['month'] = pd.DatetimeIndex(data1['Date']).month
data1['day'] = pd.DatetimeIndex(data1['Date']).day
data1['dayofyear'] = pd.DatetimeIndex(data1['Date']).dayofyear
data1['weekofyear'] = pd.DatetimeIndex(data1['Date']).weekofyear
data1['weekday'] = pd.DatetimeIndex(data1['Date']).weekday
data1['quarter'] = pd.DatetimeIndex(data1['Date']).quarter
data1['is_month_start'] = pd.DatetimeIndex(data1['Date']).is_month_start
data1['is_month_end'] = pd.DatetimeIndex(data1['Date']).is_month_end
print(data1.info())
return data1
@DataModeler.logger("Generating sliding window features")
def gen_window(self, data1:pd.DataFrame, tar:str, width:int):
"""
Use sliding window to generate features
return: dataframe with new features
"""
data1['Series'] = np.arange(1 , len(data1)+1)
#define lag
data1['Shift1'] = data1[tar].shift(1)
# define Window = 7
#window_len = 7
data1['Window_mean'] = data1['Shift1'].rolling(window = width).mean()
#remove missing value
data1.dropna(inplace = True)
data1.reset_index(drop = True , inplace=True)
#df_X = data1[['Date', 'Series' , 'Window_mean' , 'Shift1' ]]
#df_Y = data1[['Target']]
return data1
###Output
_____no_output_____
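###Markdown
A usage sketch for the feature engineering step. The methods expect a `Date` column and, for the window features, a numeric target, so the preprocessed frame from the sketch above is renamed accordingly; `Target` here is just the vaccination percentage and `df_reg` is a placeholder name.
###Code
# Sketch: build calendar and lag/rolling-window features for the regression models.
fe = FeatureEngineer()
df_reg = df_world_prep.rename(columns={'date': 'Date',
                                       'percentage_people_vaccinated': 'Target'})
df_reg = fe._gen_date_feats(df_reg)                      # year, month, weekday, ... from 'Date'
df_reg = fe.gen_window(df_reg, tar='Target', width=7)    # 1-day lag + 7-day rolling mean
df_reg.head()
###Output
_____no_output_____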
###Markdown
Prophet model
###Code
class MLModeler(DataModeler):
"Wrap the operations of Prophet model."
def __init__(self):
super(MLModeler, self).__init__()
@DataModeler.logger("Transforming feature type")
def _train_test_split(self, df:pd.DataFrame,target_variable):
"""
Split data into training and validation dataset.
@param df: processed data
return: train and validation data
"""
df = df.rename(columns={'date':'ds',target_variable:'y'})
df['cap'] = 100
df['floor'] = 0
df_train = df[df['ds'] < datetime(2021,8,22)]
df_val = df[df['ds'] >= datetime(2021,8,22)]
return df_train,df_val
@DataModeler.logger("Fit model on training data")
def _fit_model(self, df:pd.DataFrame):
"""
Fit the model on training data
@param df: raw data
return: trained model
"""
m = Prophet()
m.fit(df)
return m
@DataModeler.logger("Predict results on test data")
def _predict_test(self, m) -> pd.DataFrame:
"""
Test the trained model.
param m: trained
return: dataframe containing forecasts
"""
future = m.make_future_dataframe(periods=90)
forecast = m.predict(future)
return forecast
@DataModeler.logger("Plot predicted data")
def _plot_forecast(self, m, forecast):
"""
Plot predicted data
@param m: fitted model
@param forecast: forecast dataframe returned by _predict_test
return: none
"""
fig1 = m.plot(forecast)
@DataModeler.logger("Plot components of predicted data")
def _plot_components_forecast(self, m, forecast):
"""
Plot components of predicted data
@param m: fitted model
@param forecast: forecast dataframe returned by _predict_test
return: none
"""
fig2 = m.plot_components(forecast)
@DataModeler.logger("Plot cross validation metrics")
def _plot_cross_validation_metrics(self, m):
"""
Plot cross validation metrics.
@param m: trained model
return: combined clean vaccination data
"""
df_cv = cross_validation(m, initial='165 days', period='100 days', horizon = '65 days')
df_p = performance_metrics(df_cv)
fig3 = plot_cross_validation_metric(df_cv, metric='mape')
@DataModeler.logger("Calculate RMSE, MAE, MAPE on test data")
def _calculate_metrics(self, m):
"""
Calculate RMSE on test data.
@param m: trained model
return: rmse
"""
df_cv = cross_validation(m, initial='165 days', period='100 days', horizon = '65 days')
df_p = performance_metrics(df_cv)
print('RMSE - ',df_p['rmse'].min())
print('MAE - ',df_p['mae'].min())
print('MAPE - ',df_p['mape'].min())
@DataModeler.logger("Tuning hyperparameters")
def _hyperparameter_tuning(self, m, df):
def create_param_combinations(**param_dict):
param_iter = itertools.product(*param_dict.values())
params =[]
for param in param_iter:
params.append(param)
params_df = pd.DataFrame(params, columns=list(param_dict.keys()))
return params_df
def single_cv_run(history_df, metrics, param_dict):
m = Prophet(**param_dict)
m.add_country_holidays(country_name='US')
m.fit(history_df)
df_cv = cross_validation(m, initial='165 days', period='100 days', horizon = '65 days')
df_p = performance_metrics(df_cv).mean().to_frame().T
df_p['params'] = str(param_dict)
df_p = df_p.loc[:, metrics]
return df_p
param_grid = {
'changepoint_prior_scale': [0.005, 0.05, 0.5, 5],
'changepoint_range': [0.8, 0.9],
'seasonality_prior_scale':[0.1, 1, 10.0],
'holidays_prior_scale':[0.1, 1, 10.0],
'seasonality_mode': ['multiplicative', 'additive'],
'growth': ['linear', 'logistic'],
'yearly_seasonality': [5, 10, 20]
}
metrics = ['horizon', 'rmse', 'mape', 'params']
results = []
params_df = create_param_combinations(**param_grid)
for param in params_df.values:
param_dict = dict(zip(params_df.keys(), param))
cv_df = single_cv_run(df, metrics, param_dict)
results.append(cv_df)
results_df = pd.concat(results).reset_index(drop=True)
return results_df.loc[results_df['rmse'] == min(results_df['rmse']), ['params']]
###Output
_____no_output_____
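###Markdown
A usage sketch for the Prophet wrapper, assuming the preprocessed frame from the earlier sketch (columns `date` and `percentage_people_vaccinated`). The split date used by `_train_test_split` (2021-08-22) is the one hard-coded in the class.
###Code
# Sketch: fit Prophet on the world-level vaccination percentage and forecast 90 days ahead.
ml = MLModeler()
df_train, df_val = ml._train_test_split(df_world_prep, 'percentage_people_vaccinated')
m = ml._fit_model(df_train)
forecast = ml._predict_test(m)     # 90-day horizon, as defined in the class
forecast[['ds', 'yhat']].tail()
###Output
_____no_output_____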
###Markdown
ARIMA model
###Code
class time_Series_Learner():
def __init__(self):
super(time_Series_Learner, self).__init__()
@DataModeler.logger("Hypothesis testing")
def Hypothesis_test(self,df):
result = adfuller(df.dropna())
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
@DataModeler.logger("Transforming feature type")
def split_dataset(self,X, y, train_ratio=0.8):
X_len = len(X)
train_data_len = int(X_len * train_ratio)
X_train = X[:train_data_len]
y_train = y[:train_data_len]
X_valid = X[train_data_len:]
y_valid = y[train_data_len:]
return X_train, X_valid, y_train, y_valid
@DataModeler.logger("Training")
def Univariate_Arima(self, train_Y,parameters:tuple,Y_valid):
model = ARIMA(train_Y, order=parameters) # p,d,q parameters
model_fit = model.fit()
y_pred = model_fit.forecast(len(Y_valid))
# Calculate metrics
metrics = {}
score_mae = mean_absolute_error(Y_valid, y_pred)
metrics["mae"] = score_mae
score_rmse = math.sqrt(mean_squared_error(Y_valid, y_pred))
metrics["rmse"] = score_rmse
score_r2 = r2_score(Y_valid, y_pred)
metrics["r2"] = score_r2
#print('RMSE: {}'.format(score_rmse))
return metrics, model_fit
@DataModeler.logger("Tuning hyperparameters")
def tune_parameters(self, parameters, y_train, y_valid):
"""
Grid-search the candidate (p, d, q) orders passed in and keep the one with the lowest AIC.
"""
AIC = []
for order in parameters:
warnings.filterwarnings("ignore") # specify to ignore warning messages
metrics, model_fit = self.Univariate_Arima(y_train, order, y_valid)
AIC.append(model_fit.aic)
final, index = min(AIC), AIC.index(min(AIC))
parameter = parameters[index]
#print(AIC)
print("suitable parameter:",parameter)
print("result:",final)
return parameter
@DataModeler.logger("Predict results on test data")
def valid_forcast(self, model_fit):
y_pred = model_fit.forecast(66)
return y_pred
@DataModeler.logger("Plot predicted data")
def plot_predict_test(self, X_valid, y_pred, y_valid ):
fig = plt.figure(figsize=(15,4))
sns.lineplot(x=X_valid.index, y=y_pred, color='blue', label='predicted') #navajowhite
sns.lineplot(x=X_valid.index, y=y_valid, color='orange', label='Ground truth') #navajowhite
plt.xlabel(xlabel='Date', fontsize=14)
plt.ylabel(ylabel='Percentage Vaccinations', fontsize=14)
plt.xticks(rotation=-60)
plt.show()
@DataModeler.logger("Model diagonostic")
def Model_diagonostic(self, model_fit):
model_fit.plot_diagnostics(figsize=(15, 12))
plt.show()
###Output
_____no_output_____
###Markdown
Regression model: preliminary results for choosing models
###Code
class RF_Learner(DataModeler):
"Wrap the operations of RF model."
def __init__(self):
super(RF_Learner, self).__init__()
@DataModeler.logger("Transforming feature type")
def split_dataset(self,X, y, train_ratio=0.8):
X_len = len(X)
train_data_len = int(X_len * train_ratio)
X_train = X[:train_data_len]
y_train = y[:train_data_len]
X_valid = X[train_data_len:]
y_valid = y[train_data_len:]
return X_train, X_valid, y_train, y_valid
@DataModeler.logger("Transforming feature type_2")
def trim(self, stamp:List[str], x_train, x_valid):
predictors_train = list(set(list(x_train.columns))-set(stamp))
x_train = x_train[predictors_train].values
#y_train = x_train[target].values
x_valid = x_valid[predictors_train].values
#y_valid_ = df_test[target].values
return x_train, x_valid
@DataModeler.logger("Fit model on training data")
def RF_train(self,x_train, y_train,x_valid):
regressor = RandomForestRegressor(n_estimators=200, random_state=0)
regressor.fit(x_train, y_train)
y_pred = regressor.predict(x_valid)
return y_pred
@DataModeler.logger("Predict results on test data")
def predict(self,y_pred,y_valid):
# Calculate metrics
metrics = {}
score_mae = mean_absolute_error(y_valid, y_pred)
metrics["mae"] = score_mae
score_rmse = math.sqrt(mean_squared_error(y_valid, y_pred))
metrics["rmse"] = score_rmse
score_r2 = r2_score(y_valid, y_pred)
metrics["r2"] = score_r2
return metrics
###Output
_____no_output_____
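###Markdown
A usage sketch for the random-forest baseline, reusing the lag/window features built in the feature engineering sketch (`df_reg`). The feature set follows the columns hinted at inside `gen_window` (`Series`, `Window_mean`, `Shift1`), with the raw `Date` stamp dropped before fitting.
###Code
# Sketch: baseline random-forest regression on the window features.
rf = RF_Learner()
X_feat = df_reg[['Date', 'Series', 'Window_mean', 'Shift1']]
y_tgt = df_reg['Target']
X_tr, X_va, y_tr, y_va = rf.split_dataset(X_feat, y_tgt, train_ratio=0.8)
X_tr_vals, X_va_vals = rf.trim(['Date'], X_tr, X_va)   # drop the date stamp, keep numeric features
y_hat = rf.RF_train(X_tr_vals, y_tr, X_va_vals)
rf.predict(y_hat, y_va)                                # MAE / RMSE / R^2 on the hold-out tail
###Output
_____no_output_____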
###Markdown
ARIMA learner
###Code
# loading data from univariate --
df_world = pd.read_csv("/Users/luomingni/Desktop/MS/first term/5220_SML/Project/world_filtered_data.csv")
# define
df_world1 = pd.DataFrame(df_world,columns = ['date','percentage_people_vaccinated'])
df_world1.index = df_world1['date']
X = df_world1['date']
y = df_world1['percentage_people_vaccinated']
# ARIMA leaner
ARIMA_leaner = time_Series_Learner()
ARIMA_leaner.Hypothesis_test(df_world1.percentage_people_vaccinated)
#grid search
# Define the p and q parameters to take any value between 0 and 3, and d between 0 and 1
p = q = range(0, 4)
d = range(0,2)
# Generate all different combinations of p, q and q triplets
pdq = list(itertools.product(p, d, q))
X_train, X_valid, y_train, y_valid = ARIMA_leaner.split_dataset(X,y)
parameter = ARIMA_leaner.tune_parameters(pdq,y_train,y_valid)
metrics, model_fit = ARIMA_leaner.Univariate_Arima(y_train,(2,1,2),y_valid)
metrics
y_pred = ARIMA_leaner.valid_forcast(model_fit)
ARIMA_leaner.plot_predict_test(X_valid,y_pred,y_valid)
ARIMA_leaner.Model_diagonostic(model_fit)
###Output
Model diagonostic - |
notebooks/results_outliers.ipynb | ###Markdown
Load data
###Code
import numpy as np
# Project-local helper modules, assumed to live alongside this notebook.
import outliers
import interactive_plot
import utils

DATA_FILE = '../data/lda_data_8.pickle'
METADATA_FILE = '../data/metadata.csv'
dataset, ddf, w_dict = outliers.load_data(DATA_FILE, METADATA_FILE)
X_list, Y, Yaudio = dataset
X = np.concatenate(X_list, axis=1)
###Output
_____no_output_____
###Markdown
Outliers at the recording level
###Code
df_global, threshold, MD = outliers.get_outliers_df(X, Y, chi2thr=0.999)
outliers.print_most_least_outliers_topN(df_global, N=10)
tab_all = interactive_plot.plot_outliers_world_figure(MD, MD>threshold, ddf)
print "n outliers " + str(len(np.where(MD>threshold)[0]))
###Output
most outliers
Country Outliers N_Country N_Outliers
136 Botswana 0.611111 90 55
72 Ivory Coast 0.600000 15 9
95 Chad 0.545455 11 6
43 Benin 0.538462 26 14
86 Gambia 0.500000 50 25
20 Pakistan 0.494505 91 45
106 Nepal 0.473684 95 45
78 El Salvador 0.454545 33 15
64 Mozambique 0.441176 34 15
135 French Guiana 0.428571 28 12
least outliers
Country Outliers N_Country N_Outliers
1 Lithuania 0.000000 47 0
119 Denmark 0.000000 16 0
27 South Korea 0.000000 11 0
120 Kazakhstan 0.011364 88 1
31 Czech Republic 0.024390 41 1
15 Netherlands 0.029851 67 2
30 Afghanistan 0.041667 24 1
105 Sudan 0.044118 68 3
102 Nicaragua 0.047619 21 1
0 Canada 0.050000 100 5
n outliers 1706
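###Markdown
`outliers.get_outliers_df` is a project-local helper; the idea behind it is a Mahalanobis-distance test against a chi-squared cutoff (here the 0.999 quantile). The cell below is a simplified stand-alone illustration of that idea, not the project's implementation, and ignores any per-country aggregation the helper may do.
###Code
# Simplified sketch of Mahalanobis-distance outlier detection with a chi-squared cutoff.
from scipy import stats

def mahalanobis_outliers(X, chi2thr=0.999):
    mu = X.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    md_sq = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)  # squared Mahalanobis distances
    threshold = stats.chi2.ppf(chi2thr, X.shape[1])        # cutoff from the chi-squared quantile
    return md_sq, md_sq > threshold

md_sq_sketch, is_outlier_sketch = mahalanobis_outliers(X)
print "sketch outlier count:", is_outlier_sketch.sum()
###Output
_____no_output_____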
###Markdown
Outliers for different sets of features
###Code
# outliers for features
feat = X_list
feat_labels = ['rhythm', 'melody', 'timbre', 'harmony']
tabs_feat = []
for i in range(len(feat)):
print 'outliers', feat_labels[i]
XX = feat[i]
df_feat, threshold, MD = outliers.get_outliers_df(XX, Y, chi2thr=0.999)
outliers.print_most_least_outliers_topN(df_feat, N=5)
tabs_feat.append(interactive_plot.plot_outliers_world_figure(MD, MD>threshold, ddf))
###Output
outliers rhythm
most outliers
Country Outliers N_Country N_Outliers
43 Benin 0.500000 26 13
136 Botswana 0.488889 90 44
106 Nepal 0.421053 95 40
84 Belize 0.418605 43 18
19 Yemen 0.416667 12 5
least outliers
Country Outliers N_Country N_Outliers
28 Tajikistan 0 19 0
119 Denmark 0 16 0
96 Uruguay 0 31 0
25 Republic of Serbia 0 16 0
27 South Korea 0 11 0
outliers melody
most outliers
Country Outliers N_Country N_Outliers
117 Zimbabwe 0.533333 15 8
96 Uruguay 0.483871 31 15
68 Guinea 0.454545 11 5
63 Senegal 0.390244 41 16
86 Gambia 0.380000 50 19
least outliers
Country Outliers N_Country N_Outliers
90 French Polynesia 0.000000 15 0
37 Rwanda 0.000000 17 0
119 Denmark 0.000000 16 0
18 New Zealand 0.000000 34 0
120 Kazakhstan 0.022727 88 2
outliers timbre
most outliers
Country Outliers N_Country N_Outliers
17 French Guiana 0.678571 28 19
136 Botswana 0.477778 90 43
72 Ivory Coast 0.400000 15 6
23 Azerbaijan 0.384615 13 5
106 Nepal 0.347368 95 33
least outliers
Country Outliers N_Country N_Outliers
68 Guinea 0 11 0
55 Mali 0 17 0
77 Algeria 0 27 0
33 Saint Lucia 0 43 0
31 Czech Republic 0 41 0
outliers harmony
most outliers
Country Outliers N_Country N_Outliers
43 Benin 0.538462 26 14
20 Pakistan 0.461538 91 42
86 Gambia 0.360000 50 18
52 Indonesia 0.350000 100 35
136 Botswana 0.311111 90 28
least outliers
Country Outliers N_Country N_Outliers
107 Kiribati 0 17 0
1 Lithuania 0 47 0
134 Paraguay 0 23 0
131 Tunisia 0 39 0
19 Yemen 0 12 0
###Markdown
Output the interactive plot of music outliers in .html.
###Code
interactive_plot.plot_tabs(tab_all, tabs_feat, out_file="../demo/outliers.html")
###Output
_____no_output_____
###Markdown
Outliers wrt spatial neighbourhoods
###Code
df_local = outliers.get_local_outliers_df(X, Y, w_dict)
outliers.print_most_least_outliers_topN(df_local, N=10)
###Output
most outliers
Country Outliers N_Country N_Outliers
46 China 0.260000 100 26
67 Brazil 0.240000 100 24
101 Colombia 0.211111 90 19
64 Mozambique 0.205882 34 7
76 Iran 0.188679 53 10
65 Uganda 0.176471 85 15
27 Kenya 0.164948 97 16
126 South Sudan 0.163043 92 15
24 Azerbaijan 0.153846 13 2
23 India 0.147368 95 14
least outliers
Country Outliers N_Country N_Outliers
0 Canada 0 100 0
95 Portugal 0 100 0
94 Iraq 0 87 0
93 Grenada 0 37 0
90 French Polynesia 0 15 0
89 Croatia 0 31 0
88 Morocco 0 40 0
87 Philippines 0 100 0
86 Gambia 0 50 0
85 Sierra Leone 0 100 0
###Markdown
Outliers at the country level First, cluster recordings into K clusters (select the best K based on silhouette score).
###Code
centroids, cl_pred = outliers.get_country_clusters(X, bestncl=None, min_ncl=10, max_ncl=30)
ddf['Clusters'] = cl_pred
print len(np.unique(cl_pred))
outliers.print_clusters_metadata(ddf, cl_pred)
###Output
\begin{tabular}{llll}
\toprule
{} & 0 & 1 & 2 \\
\midrule
0 & (Swaziland, 12) & (Ghana, 13) & (Botswana, 21) \\
1 & (Pakistan, 17) & (Ireland, 21) & (Nepal, 32) \\
2 & (Pakistan, 35) & (Turkey, 41) & (Iraq, 57) \\
3 & (Portugal, 29) & (Switzerland, 32) & (Austria, 53) \\
4 & (Nepal, 22) & (Cuba, 24) & (Zambia, 32) \\
5 & (South Sudan, 36) & (Sierra Leone, 37) & (Lesotho, 45) \\
6 & (Mexico, 40) & (Trinidad and Tobago, 53) & (Kazakhstan, 67) \\
7 & (Japan, 34) & (Australia, 46) & (Solomon Islands, 54) \\
8 & (South Sudan, 56) & (Canada, 59) & (Norway, 62) \\
9 & (Russia, 34) & (Portugal, 38) & (Ukraine, 48) \\
\bottomrule
\end{tabular}
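###Markdown
`outliers.get_country_clusters` picks the number of clusters by silhouette score when `bestncl=None`. The function below is a stand-alone sketch of that selection loop (K-means plus silhouette), not the project's code; on the full feature matrix it can be slow, so one would typically run it on a subsample.
###Code
# Stand-alone sketch of choosing K by silhouette score.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def pick_k(data, min_ncl=10, max_ncl=30):
    best_k, best_score = None, -1.0
    for k in range(min_ncl, max_ncl + 1):
        labels = KMeans(n_clusters=k, random_state=0).fit_predict(data)
        score = silhouette_score(data, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Example (slow on the full X): best_k, best_score = pick_k(X)
###Output
_____no_output_____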
###Markdown
Get histogram of cluster mappings for each country.
###Code
cluster_freq = utils.get_cluster_freq_linear(X, Y, centroids)
cluster_freq.head()
###Output
_____no_output_____ |
bonus-2-one-more-thing.ipynb | ###Markdown
We made a ton of really nice figures today, and I'd like to let you take home a personalized version as my way of saying thanks for attending. Please run the code cells below to generate your personalized ordering of the Circos plots we made.
###Code
import hashlib
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
def make_image(name):
integer = int(hashlib.sha1(bytes(name, 'utf-8')).hexdigest(), 16)
digits = [int(i) for i in list(str(integer))]
# Set the order of images.
order = []
for d in digits:
if d not in order:
order.append(d)
images = {0: 'seventh.png',
1: 'sociopatterns.png',
2: 'physicians.png',
3: 'divvy.png',
4: 'crime-person.png',
5: 'crime-crime.png'}
imgs_read = []
for i in order:
if i in images.keys():
imgs_read.append(mpimg.imread('images/{0}'.format(images[i])))
# Save the images to disk
plt.imshow(np.hstack(imgs_read))
plt.axis('off')
plt.savefig('images/custom-logo.png', dpi=900, bbox_inches='tight')
plt.savefig('images/custom-logo-small.png', dpi=75, bbox_inches='tight')
print('Thank you for attending, {0}!'.format(name))
print('Your hash-ordered image can be found in at "images/custom-logo.png".'.format(name))
# Change accordingly! :)
make_image('Eric Ma')
###Output
Thank you for attending, Eric Ma!
Your hash-ordered image can be found in at "images/custom-logo.png".
|
notebooks/while_input.ipynb | ###Markdown
While Loops and Input===While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users. [Previous: If Statements](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/if_statements.ipynb) | [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |[Next: Basic Terminal Apps](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/terminal_apps.ipynb) Contents===- [What is a `while` loop?](What-is-a-while-loop?) - [General syntax](General-syntax) - [Example](Example) - [Exercises](Exercises-while)- [Accepting user input](Accepting-user-input) - [General syntax](General-syntax-input) - [Example](Example-input) - [Accepting input in Python 2.7](Accepting-input-in-Python-2.7) - [Exercises](Exercises-input)- [Using while loops to keep your programs running](Using-while-loops-to-keep-your-programs-running) - [Exercises](Exercises-running)- [Using while loops to make menus](Using-while-loops-to-make-menus)- [Using while loops to process items in a list](Using-while-loops-to-process-items-in-a-list)- [Accidental Infinite loops](Accidental-Infinite-loops) - [Exercises](Exercises-infinite)- [Overall Challenges](Overall-Challenges) What is a while loop?===A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing. General syntax---
###Code
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
###Output
_____no_output_____
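###Markdown
The cell above is only a skeleton; its body is nothing but comments, so Python will not run it as written. Purely as a sketch (the `rounds_left` counter is an illustrative stand-in, not part of the original example), here is one runnable version of the same flag-based pattern:
###Code
# A runnable version of the flag-based skeleton above.
# rounds_left stands in for "the game ends at some point"; a real game would
# set game_active to False when the player wins, loses, or quits.
game_active = True
rounds_left = 3
while game_active:
    print("Running the game... %d rounds left." % rounds_left)
    rounds_left = rounds_left - 1
    if rounds_left == 0:
        # The flag changes, so the while test fails on the next pass.
        game_active = False
print("\nThe game has ended.")
###Output
_____no_output_____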
###Markdown
- Every while loop needs an initial condition that starts out true.- The `while` statement includes a condition to test.- All of the code in the loop will run as long as the condition remains true.- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.- Any code that is defined after the loop will run at this point. Example---Here is a simple example, showing how a game will stay active as long as the player has enough power.
###Code
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
###Output
_____no_output_____
###Markdown
Exercises--- Growing Strength- Make a variable called strength, and set its initial value to 5.- Print a message reporting the player's strength.- Set up a while loop that runs until the player's strength increases to a value such as 10.- Inside the while loop, print a message that reports the player's current strength.- Inside the while loop, write a statement that increases the player's strength.- Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.- Bonus: Play around with different cutoff levels for the value of *strength*, and play around with different ways to increase the strength value within the while loop. Accepting user input===Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable. General syntax---The general case for accepting input looks something like this:
###Code
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
###Output
_____no_output_____
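###Markdown
One detail the cell above does not mention: in Python 3, `input()` always returns a string, even when the user types digits. If you need a number, convert it yourself. A small illustrative sketch, not from the original tutorial:
###Code
# input() hands back a string; convert it if you need a number.
age_text = input("How old are you? ")
age = int(age_text)    # raises ValueError if the entry is not a whole number
print("Next year you will be %d." % (age + 1))
###Output
_____no_output_____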
###Markdown
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user. Example---In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
###Code
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
###Markdown
Accepting input in Python 2.7---In Python 3, you always use `input()`. In Python 2.7, you need to use `raw_input()`:
###Code
# The same program, in Python 2.7
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = raw_input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
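###Markdown
If one script has to behave the same under Python 2.7 and Python 3, a common workaround, shown here only as a sketch and not as part of the tutorial, is to fall back to `raw_input()` when it exists:
###Code
# Pick the right input function once, then use it everywhere.
try:
    get_text = raw_input    # defined in Python 2.7
except NameError:
    get_text = input        # Python 3
new_name = get_text("Please tell me someone I should know: ")
print(new_name)
###Output
_____no_output_____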
###Markdown
The function `input()` will work in Python 2.7, but it's not good practice to use it. When you use the `input()` function in Python 2.7, Python runs the code that's entered. This is fine in controlled situations, but it's not a very safe practice overall.If you're using Python 3, you have to use `input()`. If you're using Python 2.7, use `raw_input()`. Exercises--- Game Preferences- Make a list that includes 3 or 4 games that you like to play.- Print a statement that tells the user what games you like.- Ask the user to tell you a game they like, and store the game in a variable such as `new_game`.- Add the user's game to your list.- Print a new statement that lists all of the games that we like to play (*we* means you and your user). Using while loops to keep your programs running===Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
###Markdown
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
###Code
###highlight=[15,16]
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
_____no_output_____
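###Markdown
Another common way to write this kind of loop is `while True` with `break`, so the sentinel value is tested in just one place. This is only an alternative sketch, not the tutorial's approach:
###Code
# The same 'collect names until quit' loop, written with while True and break.
names = []
while True:
    new_name = input("Please tell me someone I should know, or enter 'quit': ")
    if new_name == 'quit':
        break              # leave the loop without storing 'quit'
    names.append(new_name)
print(names)
###Output
_____no_output_____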
###Markdown
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working. Exercises--- Many Games- Modify *[Game Preferences](exercises_input)* so your user can add as many games as they like. Using while loops to make menus===You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
###Code
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
_____no_output_____
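###Markdown
A small robustness tweak worth considering, shown only as a sketch and not part of the original menu: strip whitespace and lower-case the entry, so that ' Q ' or 'Q' still counts as quitting.
###Code
# Normalizing the entry makes the menu forgiving of stray spaces and capitals.
choice = ''
while choice != 'q':
    choice = input("\nEnter a choice, or q to quit: ").strip().lower()
    print("You chose: %s" % choice)
###Output
_____no_output_____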
###Markdown
Our programs are getting rich enough now that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
###Code
###highlight=[2,3,4,5,6,7,8,9,10,30,31,32,33,34,35]
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
_____no_output_____
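###Markdown
Going one step beyond the tutorial, the mapping from choices to functions can also live in a dictionary, which removes the chain of `elif` tests. This is only a sketch of that alternative:
###Code
# Store each choice's function in a dictionary, then look the choice up.
def ride_bicycle():
    print("\nHere's a bicycle. Have fun!\n")
def go_running():
    print("\nHere are some running shoes. Run fast!\n")
actions = {'1': ride_bicycle, '2': go_running}
choice = ''
while choice != 'q':
    choice = input("\nEnter 1, 2, or q to quit: ")
    if choice in actions:
        actions[choice]()      # call the stored function
    elif choice != 'q':
        print("\nI don't understand that choice, please try again.\n")
print("Thanks again, bye now.")
###Output
_____no_output_____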
###Markdown
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action. Using while loops to process items in a list===In the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
_____no_output_____
###Markdown
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
###Code
###highlight=[10]
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
_____no_output_____
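###Markdown
A side note that goes beyond the tutorial: `pop(0)` on a plain list has to shift every remaining item, so for long queues the standard library's `collections.deque` and its `popleft()` method are the usual tools for first-in, first-out work. A small sketch:
###Code
from collections import deque
# popleft() removes from the front cheaply, which suits first-come, first-served work.
unconfirmed_users = deque(['ada', 'billy', 'clarence', 'daria'])
confirmed_users = []
while len(unconfirmed_users) > 0:
    current_user = unconfirmed_users.popleft()
    print("Confirming user %s...confirmed!" % current_user.title())
    confirmed_users.append(current_user)
print(confirmed_users)
###Output
_____no_output_____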
###Markdown
This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding *one character* to our program! Accidental Infinite loops===Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.Take a look at the following example. Can you pick out why this loop will never stop? current_number = 1 Count up to 5, printing the number each time. while current_number <= 5: print(current_number) 1 1 1 1 1 ... I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:- On most systems, Ctrl-C will interrupt the currently running program.- In Spyder or in a Jupyter notebook there is a 'Stop' button - similar to the stop button on a typical remote controlThe loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
###Output
_____no_output_____
###Markdown
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made. Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on. Here is one more example of an accidental infinite loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
###Output
_____no_output_____
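###Markdown
While you are still debugging, one defensive habit, offered here only as a sketch and not something the tutorial uses, is to add a temporary cap on the number of passes so a runaway loop like the one above stops on its own:
###Code
# The counting bug is still present, but the extra test stops the loop after 10 passes.
current_number = 1
passes = 0
while current_number <= 5 and passes < 10:
    print(current_number)
    current_number = current_number - 1    # bug: should add, not subtract
    passes = passes + 1
print("Stopped after %d passes." % passes)
###Output
_____no_output_____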
###Markdown
While Loops and Input===While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users. [Previous: If Statements](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/if_statements.ipynb) | [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |[Next: Basic Terminal Apps](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/terminal_apps.ipynb) Contents===- [What is a `while` loop?](What-is-a-while-loop?) - [General syntax](General-syntax) - [Example](Example) - [Exercises](Exercises-while)- [Accepting user input](Accepting-user-input) - [General syntax](General-syntax-input) - [Example](Example-input) - [Accepting input in Python 2.7](Accepting-input-in-Python-2.7) - [Exercises](Exercises-input)- [Using while loops to keep your programs running](Using-while-loops-to-keep-your-programs-running) - [Exercises](Exercises-running)- [Using while loops to make menus](Using-while-loops-to-make-menus)- [Using while loops to process items in a list](Using-while-loops-to-process-items-in-a-list)- [Accidental Infinite loops](Accidental-Infinite-loops) - [Exercises](Exercises-infinite)- [Overall Challenges](Overall-Challenges) What is a while loop?===A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing. General syntax---
###Code
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
###Output
_____no_output_____
###Markdown
- Every while loop needs an initial condition that starts out true.- The `while` statement includes a condition to test.- All of the code in the loop will run as long as the condition remains true.- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.- Any code that is defined after the loop will run at this point. Example---Here is a simple example, showing how a game will stay active as long as the player has enough power.
###Code
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
###Output
You are still playing, because your power is 5.
You are still playing, because your power is 4.
You are still playing, because your power is 3.
You are still playing, because your power is 2.
You are still playing, because your power is 1.
Oh no, your power dropped to 0! Game Over.
###Markdown
[top]() Exercises--- Growing Strength- Make a variable called strength, and set its initial value to 5.- Print a message reporting the player's strength.- Set up a while loop that runs until the player's strength increases to a value such as 10.- Inside the while loop, print a message that reports the player's current strength.- Inside the while loop, write a statement that increases the player's strength.- Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.- Bonus: Play around with different cutoff levels for the value of *strength*, and play around with different ways to increase the strength value within the while loop. [top]() Accepting user input===Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable. General syntax---The general case for accepting input looks something like this:
###Code
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
###Output
_____no_output_____
###Markdown
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user. Example---In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
###Code
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
Accepting input in Python 2.7---In Python 3, you always use `input()`. In Python 2.7, you need to use `raw_input()`:
###Code
# The same program, in Python 2.7
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = raw_input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
The function `input()` will work in Python 2.7, but it's not good practice to use it. When you use the `input()` function in Python 2.7, Python runs the code that's entered. This is fine in controlled situations, but it's not a very safe practice overall.If you're using Python 3, you have to use `input()`. If you're using Python 2.7, use `raw_input()`. Exercises--- Game Preferences- Make a list that includes 3 or 4 games that you like to play.- Print a statement that tells the user what games you like.- Ask the user to tell you a game they like, and store the game in a variable such as `new_game`.- Add the user's game to your list.- Print a new statement that lists all of the games that we like to play (*we* means you and your user). [top]() Using while loops to keep your programs running===Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim', 'quit']
###Markdown
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
###Code
###highlight=[15,16]
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim']
###Markdown
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working. Exercises--- Many Games- Modify *[Game Preferences](exercises_input)* so your user can add as many games as they like. [top]() Using while loops to make menus===You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
###Code
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
Our programs are getting rich enough now that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
###Code
###highlight=[2,3,4,5,6,7,8,9,10,30,31,32,33,34,35]
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action. [top]() Using while loops to process items in a list===In the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Daria...confirmed!
Confirming user Clarence...confirmed!
Confirming user Billy...confirmed!
Confirming user Ada...confirmed!
Unconfirmed users:
Confirmed users:
- Daria
- Clarence
- Billy
- Ada
###Markdown
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
###Code
###highlight=[10]
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Ada...confirmed!
Confirming user Billy...confirmed!
Confirming user Clarence...confirmed!
Confirming user Daria...confirmed!
Unconfirmed users:
Confirmed users:
- Ada
- Billy
- Clarence
- Daria
###Markdown
This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding *one character* to our program! [top]() Accidental Infinite loops===Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.Take a look at the following example. Can you pick out why this loop will never stop?
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
# Faked output (the loop never ends, so a real run would print 1 forever):
# 1
# 1
# 1
# 1
# 1
# ...
###Output
_____no_output_____
###Markdown
I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:- On most systems, Ctrl-C will interrupt the currently running program.- If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window.The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
###Code
###highlight=[7]
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
###Output
1
2
3
4
5
###Markdown
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made. Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on. Here is one more example of an accidental infinite loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
# Faked output (current_number only ever shrinks, so the loop never ends):
# 1
# 0
# -1
# -2
# -3
# ...
###Output
_____no_output_____
###Markdown
While Loops and Input===While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users. [Previous: If Statements](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/if_statements.ipynb) | [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |[Next: Basic Terminal Apps](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/terminal_apps.ipynb) Contents===- [What is a `while` loop?](What-is-a-while-loop?) - [General syntax](General-syntax) - [Example](Example) - [Exercises](Exercises-while)- [Accepting user input](Accepting-user-input) - [General syntax](General-syntax-input) - [Example](Example-input) - [Accepting input in Python 2.7](Accepting-input-in-Python-2.7) - [Exercises](Exercises-input)- [Using while loops to keep your programs running](Using-while-loops-to-keep-your-programs-running) - [Exercises](Exercises-running)- [Using while loops to make menus](Using-while-loops-to-make-menus)- [Using while loops to process items in a list](Using-while-loops-to-process-items-in-a-list)- [Accidental Infinite loops](Accidental-Infinite-loops) - [Exercises](Exercises-infinite)- [Overall Challenges](Overall-Challenges) What is a while loop?===A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing. General syntax---
###Code
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
###Output
_____no_output_____
###Markdown
- Every while loop needs an initial condition that starts out true.- The `while` statement includes a condition to test.- All of the code in the loop will run as long as the condition remains true.- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.- Any code that is defined after the loop will run at this point. Example---Here is a simple example, showing how a game will stay active as long as the player has enough power.
###Code
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
###Output
You are still playing, because your power is 5.
You are still playing, because your power is 4.
You are still playing, because your power is 3.
You are still playing, because your power is 2.
You are still playing, because your power is 1.
Oh no, your power dropped to 0! Game Over.
###Markdown
[top]() Exercises--- Growing Strength- Make a variable called strength, and set its initial value to 5.- Print a message reporting the player's strength.- Set up a while loop that runs until the player's strength increases to a value such as 10.- Inside the while loop, print a message that reports the player's current strength.- Inside the while loop, write a statement that increases the player's strength.- Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.- Bonus: Play around with different cutoff levels for the value of *strength*, and play around with different ways to increase the strength value within the while loop. [top]() Accepting user input===Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable. General syntax---The general case for accepting input looks something like this:
###Code
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
###Output
_____no_output_____
###Markdown
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user. Example---In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
###Code
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
Accepting input in Python 2.7---In Python 3, you always use `input()`. In Python 2.7, you need to use `raw_input()` when you want to accept text strings, and `input()` when you want to accept numerical data.
###Code
# The same program, in Python 2.7
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = raw_input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
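###Markdown
One caveat to the note above: elsewhere in this document the tutorial warns that `input()` in Python 2.7 runs whatever code the user types, so a common safer habit is to read numbers with `raw_input()` and convert them explicitly. A small Python 2.7 sketch, not part of the original example:
###Code
# Python 2.7: read the text with raw_input(), then convert it yourself.
age_text = raw_input("How old are you? ")
age = int(age_text)    # raises ValueError if the entry is not a whole number
print(age + 1)
###Output
_____no_output_____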
###Markdown
Exercises--- Game Preferences- Make a list that includes 3 or 4 games that you like to play.- Print a statement that tells the user what games you like.- Ask the user to tell you a game they like, and store the game in a variable such as `new_game`.- Add the user's game to your list.- Print a new statement that lists all of the games that we like to play (*we* means you and your user). [top]() Using while loops to keep your programs running===Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim', 'quit']
###Markdown
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
###Code
###highlight=[15,16]
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim']
###Markdown
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working. Exercises--- Many Games- Modify *[Game Preferences](exercises_input)* so your user can add as many games as they like. [top]() Using while loops to make menus===You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
###Code
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
Our programs are getting rich enough now that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
###Code
###highlight=[2,3,4,5,6,7,8,9,10,30,31,32,33,34,35]
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action. [top]() Using while loops to process items in a list===In the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Daria...confirmed!
Confirming user Clarence...confirmed!
Confirming user Billy...confirmed!
Confirming user Ada...confirmed!
Unconfirmed users:
Confirmed users:
- Daria
- Clarence
- Billy
- Ada
###Markdown
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
###Code
###highlight=[10]
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Ada...confirmed!
Confirming user Billy...confirmed!
Confirming user Clarence...confirmed!
Confirming user Daria...confirmed!
Unconfirmed users:
Confirmed users:
- Ada
- Billy
- Clarence
- Daria
###Markdown
This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding *one character* to our program! [top]() Accidental Infinite loops===Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop.Take a look at the following example. Can you pick out why this loop will never stop?
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
# Faked output (the loop never ends, so a real run would print 1 forever):
# 1
# 1
# 1
# 1
# 1
# ...
###Output
_____no_output_____
###Markdown
I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:- On most systems, Ctrl-C will interrupt the currently running program.- If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window.The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
###Code
###highlight=[7]
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
###Output
1
2
3
4
5
###Markdown
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made. Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on. Here is one more example of an accidental infinite loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
# Faked output (current_number only ever shrinks, so the loop never ends):
# 1
# 0
# -1
# -2
# -3
# ...
###Output
_____no_output_____
###Markdown
While Loops and Input===While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users. [Previous: If Statements](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/if_statements.ipynb) | [Home](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/index.ipynb) |[Next: Basic Terminal Apps](http://nbviewer.ipython.org/urls/raw.github.com/ehmatthes/intro_programming/master/notebooks/terminal_apps.ipynb) Contents===- [What is a `while` loop?](What-is-a-while-loop?) - [General syntax](General-syntax) - [Example](Example) - [Exercises](Exercises-while)- [Accepting user input](Accepting-user-input) - [General syntax](General-syntax-input) - [Example](Example-input) - [Accepting input in Python 2.7](Accepting-input-in-Python-2.7) - [Exercises](Exercises-input)- [Using while loops to keep your programs running](Using-while-loops-to-keep-your-programs-running) - [Exercises](Exercises-running)- [Using while loops to make menus](Using-while-loops-to-make-menus)- [Using while loops to process items in a list](Using-while-loops-to-process-items-in-a-list)- [Accidental Infinite loops](Accidental-Infinite-loops) - [Exercises](Exercises-infinite)- [Overall Challenges](Overall-Challenges) What is a while loop?===A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing. General syntax---
###Code
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
###Output
_____no_output_____
###Markdown
- Every while loop needs an initial condition that starts out true.- The `while` statement includes a condition to test.- All of the code in the loop will run as long as the condition remains true.- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.- Any code that is defined after the loop will run at this point. Example---Here is a simple example, showing how a game will stay active as long as the player has enough power.
###Code
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
###Output
You are still playing, because your power is 5.
You are still playing, because your power is 4.
You are still playing, because your power is 3.
You are still playing, because your power is 2.
You are still playing, because your power is 1.
Oh no, your power dropped to 0! Game Over.
###Markdown
[top]() Exercises--- Growing Strength- Make a variable called strength, and set its initial value to 5.- Print a message reporting the player's strength.- Set up a while loop that runs until the player's strength increases to a value such as 10.- Inside the while loop, print a message that reports the player's current strength.- Inside the while loop, write a statement that increases the player's strength.- Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.- Bonus: Play around with different cutoff levels for the value of *strength*, and play around with different ways to increase the strength value within the while loop. [top]() Accepting user input===Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable. General syntax---The general case for accepting input looks something like this:
###Code
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
###Output
_____no_output_____
###Markdown
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user. Example---In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
###Code
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
Accepting input in Python 2.7---In Python 3, you always use `input()`. In Python 2.7, you need to use `raw_input()`:
###Code
# The same program, in Python 2.7
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = raw_input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
The function `input()` will work in Python 2.7, but it's not good practice to use it. When you use the `input()` function in Python 2.7, Python runs the code that's entered. This is fine in controlled situations, but it's not a very safe practice overall.If you're using Python 3, you have to use `input()`. If you're using Python 2.7, use `raw_input()`. Exercises--- Game Preferences- Make a list that includes 3 or 4 games that you like to play.- Print a statement that tells the user what games you like.- Ask the user to tell you a game they like, and store the game in a variable such as `new_game`.- Add the user's game to your list.- Print a new statement that lists all of the games that we like to play (*we* means you and your user). [top]() Using while loops to keep your programs running===Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim', 'quit']
###Markdown
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
###Code
###highlight=[15,16]
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim']
###Markdown
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working. Exercises--- Many Games- Modify *[Game Preferences](exercises_input)* so your user can add as many games as they like. [top]() Using while loops to make menus===You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
###Code
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
Our programs are getting rich enough now that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
###Code
###highlight=[2,3,4,5,6,7,8,9,10,30,31,32,33,34,35]
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action. [top]() Using while loops to process items in a list===In the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Daria...confirmed!
Confirming user Clarence...confirmed!
Confirming user Billy...confirmed!
Confirming user Ada...confirmed!
Unconfirmed users:
Confirmed users:
- Daria
- Clarence
- Billy
- Ada
###Markdown
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
###Code
###highlight=[10]
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Ada...confirmed!
Confirming user Billy...confirmed!
Confirming user Clarence...confirmed!
Confirming user Daria...confirmed!
Unconfirmed users:
Confirmed users:
- Ada
- Billy
- Clarence
- Daria
###Markdown
This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding *one character* to our program! [top]() Accidental Infinite loops===Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop. Take a look at the following example. Can you pick out why this loop will never stop?
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
1
1
1
1
1
...
###Output
_____no_output_____
###Markdown
I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:- On most systems, Ctrl-C will interrupt the currently running program.- If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window. The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
###Code
###highlight=[7]
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
###Output
1
2
3
4
5
###Markdown
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made. Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on. Here is one more example of an accidental infinite loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
1
0
-1
-2
-3
...
###Output
_____no_output_____
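###Markdown
Here the loop counts away from the cutoff instead of toward it. The decrement itself is not the problem; the starting value and the test just have to match the direction of change. As a minimal sketch (assuming the intent was to count down from 5 to 1), one possible fix looks like this:
###Code
current_number = 5
# Count down from 5 to 1, printing the number each time.
while current_number >= 1:
    print(current_number)
    current_number = current_number - 1
###Output
_____no_output_____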
###Markdown
While Loops and Input===While loops are really useful because they let your program run until a user decides to quit the program. They set up an infinite loop that runs until the user does something to end the loop. This section also introduces the first way to get input from your program's users. [Previous: If Statements](if_statements.ipynb) | [Home](index.ipynb) |[Next: Basic Terminal Apps](terminal_apps.ipynb) Contents===- [What is a `while` loop?](What-is-a-while-loop?) - [General syntax](General-syntax) - [Example](Example) - [Exercises](Exercises-while)- [Accepting user input](Accepting-user-input) - [General syntax](General-syntax-input) - [Example](Example-input) - [Accepting input in Python 2.7](Accepting-input-in-Python-2.7) - [Exercises](Exercises-input)- [Using while loops to keep your programs running](Using-while-loops-to-keep-your-programs-running) - [Exercises](Exercises-running)- [Using while loops to make menus](Using-while-loops-to-make-menus)- [Using while loops to process items in a list](Using-while-loops-to-process-items-in-a-list)- [Accidental Infinite loops](Accidental-Infinite-loops) - [Exercises](Exercises-infinite)- [Overall Challenges](Overall-Challenges) What is a while loop?===A while loop tests an initial condition. If that condition is true, the loop starts executing. Every time the loop finishes, the condition is reevaluated. As long as the condition remains true, the loop keeps executing. As soon as the condition becomes false, the loop stops executing. General syntax---
###Code
# Set an initial condition.
game_active = True
# Set up the while loop.
while game_active:
# Run the game.
# At some point, the game ends and game_active will be set to False.
# When that happens, the loop will stop executing.
# Do anything else you want done after the loop runs.
###Output
_____no_output_____
###Markdown
- Every while loop needs an initial condition that starts out true.- The `while` statement includes a condition to test.- All of the code in the loop will run as long as the condition remains true.- As soon as something in the loop changes the condition such that the test no longer passes, the loop stops executing.- Any code that is defined after the loop will run at this point. Example---Here is a simple example, showing how a game will stay active as long as the player has enough power.
###Code
# The player's power starts out at 5.
power = 5
# The player is allowed to keep playing as long as their power is over 0.
while power > 0:
print("You are still playing, because your power is %d." % power)
# Your game code would go here, which includes challenges that make it
# possible to lose power.
# We can represent that by just taking away from the power.
power = power - 1
print("\nOh no, your power dropped to 0! Game Over.")
###Output
You are still playing, because your power is 5.
You are still playing, because your power is 4.
You are still playing, because your power is 3.
You are still playing, because your power is 2.
You are still playing, because your power is 1.
Oh no, your power dropped to 0! Game Over.
###Markdown
[top]() Exercises--- Growing Strength- Make a variable called strength, and set its initial value to 5.- Print a message reporting the player's strength.- Set up a while loop that runs until the player's strength increases to a value such as 10.- Inside the while loop, print a message that reports the player's current strength.- Inside the while loop, write a statement that increases the player's strength.- Outside the while loop, print a message reporting that the player has grown too strong, and that they have moved up to a new level of the game.- Bonus: Play around with different cutoff levels for the value of *strength*, and play around with different ways to increase the strength value within the while loop. [top]() Accepting user input===Almost all interesting programs accept input from the user at some point. You can start accepting user input in your programs by using the `input()` function. The input function displays a message to the user describing the kind of input you are looking for, and then it waits for the user to enter a value. When the user presses Enter, the value is passed to your variable. General syntax---The general case for accepting input looks something like this:
###Code
# Get some input from the user.
variable = input('Please enter a value: ')
# Do something with the value that was entered.
###Output
_____no_output_____
###Markdown
You need a variable that will hold whatever value the user enters, and you need a message that will be displayed to the user. Example---In the following example, we have a list of names. We ask the user for a name, and we add it to our list of names.
###Code
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
Accepting input in Python 2.7---In Python 3, you always use `input()`. In Python 2.7, you need to use `raw_input()`:
###Code
# The same program, in Python 2.7
# Start with a list containing several names.
names = ['guido', 'tim', 'jesse']
# Ask the user for a name.
new_name = raw_input("Please tell me someone I should know: ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know: jessica
['guido', 'tim', 'jesse', 'jessica']
###Markdown
The function `input()` will work in Python 2.7, but it's not good practice to use it. When you use the `input()` function in Python 2.7, Python runs the code that's entered. This is fine in controlled situations, but it's not a very safe practice overall. If you're using Python 3, you have to use `input()`. If you're using Python 2.7, use `raw_input()`. Exercises--- Game Preferences- Make a list that includes 3 or 4 games that you like to play.- Print a statement that tells the user what games you like.- Ask the user to tell you a game they like, and store the game in a variable such as `new_game`.- Add the user's game to your list.- Print a new statement that lists all of the games that we like to play (*we* means you and your user). [top]() Using while loops to keep your programs running===Most of the programs we use every day run until we tell them to quit, and in the background this is often done with a while loop. Here is an example of how to let the user enter an arbitrary number of names.
###Code
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim', 'quit']
###Markdown
That worked, except we ended up with the name 'quit' in our list. We can use a simple `if` test to eliminate this bug:
###Code
###highlight=[15,16]
# Start with an empty list. You can 'seed' the list with
# some predefined values if you like.
names = []
# Set new_name to something other than 'quit'.
new_name = ''
# Start a loop that will run until the user enters 'quit'.
while new_name != 'quit':
# Ask the user for a name.
new_name = input("Please tell me someone I should know, or enter 'quit': ")
# Add the new name to our list.
if new_name != 'quit':
names.append(new_name)
# Show that the name has been added to the list.
print(names)
###Output
Please tell me someone I should know, or enter 'quit': guido
Please tell me someone I should know, or enter 'quit': jesse
Please tell me someone I should know, or enter 'quit': jessica
Please tell me someone I should know, or enter 'quit': tim
Please tell me someone I should know, or enter 'quit': quit
['guido', 'jesse', 'jessica', 'tim']
###Markdown
This is pretty cool! We now have a way to accept input from users while our programs run, and we have a way to let our programs run until our users are finished working. Exercises--- Many Games- Modify *[Game Preferences](exercises_input)* so your user can add as many games as they like. [top]() Using while loops to make menus===You now have enough Python under your belt to offer users a set of choices, and then respond to those choices until they choose to quit. Let's look at a simple example, and then analyze the code:
###Code
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
print("\nHere's a bicycle. Have fun!\n")
elif choice == '2':
print("\nHere are some running shoes. Run fast!\n")
elif choice == '3':
print("\nHere's a map. Can you leave a trip plan for us?\n")
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
Our programs are getting rich enough now that we could do many different things with them. Let's clean this up in one really useful way. There are three main choices here, so let's define a function for each of those items. This way, our menu code remains really simple even as we add more complicated code to the actions of riding a bicycle, going for a run, or climbing a mountain.
###Code
###highlight=[2,3,4,5,6,7,8,9,10,30,31,32,33,34,35]
# Define the actions for each choice we want to offer.
def ride_bicycle():
print("\nHere's a bicycle. Have fun!\n")
def go_running():
print("\nHere are some running shoes. Run fast!\n")
def climb_mountain():
print("\nHere's a map. Can you leave a trip plan for us?\n")
# Give the user some context.
print("\nWelcome to the nature center. What would you like to do?")
# Set an initial value for choice other than the value for 'quit'.
choice = ''
# Start a loop that runs until the user enters the value for 'quit'.
while choice != 'q':
# Give all the choices in a series of print statements.
print("\n[1] Enter 1 to take a bicycle ride.")
print("[2] Enter 2 to go for a run.")
print("[3] Enter 3 to climb a mountain.")
print("[q] Enter q to quit.")
# Ask for the user's choice.
choice = input("\nWhat would you like to do? ")
# Respond to the user's choice.
if choice == '1':
ride_bicycle()
elif choice == '2':
go_running()
elif choice == '3':
climb_mountain()
elif choice == 'q':
print("\nThanks for playing. See you later.\n")
else:
print("\nI don't understand that choice, please try again.\n")
# Print a message that we are all finished.
print("Thanks again, bye now.")
###Output
Welcome to the nature center. What would you like to do?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 1
Here's a bicycle. Have fun!
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? 3
Here's a map. Can you leave a trip plan for us?
[1] Enter 1 to take a bicycle ride.
[2] Enter 2 to go for a run.
[3] Enter 3 to climb a mountain.
[q] Enter q to quit.
What would you like to do? q
Thanks for playing. See you later.
Thanks again, bye now.
###Markdown
This is much cleaner code, and it gives us space to separate the details of taking an action from the act of choosing that action. [top]() Using while loops to process items in a list===In the section on Lists, you saw that we can `pop()` items from a list. You can use a while loop to pop items one at a time from one list, and work with them in whatever way you need. Let's look at an example where we process a list of unconfirmed users.
###Code
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop()
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Daria...confirmed!
Confirming user Clarence...confirmed!
Confirming user Billy...confirmed!
Confirming user Ada...confirmed!
Unconfirmed users:
Confirmed users:
- Daria
- Clarence
- Billy
- Ada
###Markdown
This works, but let's make one small improvement. The current program always works with the most recently added user. If users are joining faster than we can confirm them, we will leave some users behind. If we want to work on a 'first come, first served' model, or a 'first in first out' model, we can pop the first item in the list each time.
###Code
###highlight=[10]
# Start with a list of unconfirmed users, and an empty list of confirmed users.
unconfirmed_users = ['ada', 'billy', 'clarence', 'daria']
confirmed_users = []
# Work through the list, and confirm each user.
while len(unconfirmed_users) > 0:
# Get the latest unconfirmed user, and process them.
current_user = unconfirmed_users.pop(0)
print("Confirming user %s...confirmed!" % current_user.title())
# Move the current user to the list of confirmed users.
confirmed_users.append(current_user)
# Prove that we have finished confirming all users.
print("\nUnconfirmed users:")
for user in unconfirmed_users:
print('- ' + user.title())
print("\nConfirmed users:")
for user in confirmed_users:
print('- ' + user.title())
###Output
Confirming user Ada...confirmed!
Confirming user Billy...confirmed!
Confirming user Clarence...confirmed!
Confirming user Daria...confirmed!
Unconfirmed users:
Confirmed users:
- Ada
- Billy
- Clarence
- Daria
###Markdown
This is a little nicer, because we are sure to get to everyone, even when our program is running under a heavy load. We also preserve the order of people as they join our project. Notice that this all came about by adding *one character* to our program! [top]() Accidental Infinite loops===Sometimes we want a while loop to run until a defined action is completed, such as emptying out a list. Sometimes we want a loop to run for an unknown period of time, for example when we are allowing users to give as much input as they want. What we rarely want, however, is a true 'runaway' infinite loop. Take a look at the following example. Can you pick out why this loop will never stop?
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
1
1
1
1
1
...
###Output
_____no_output_____
###Markdown
I faked that output, because if I ran it the output would fill up the browser. You can try to run it on your computer, as long as you know how to interrupt runaway processes:- On most systems, Ctrl-C will interrupt the currently running program.- If you are using Geany, your output is displayed in a popup terminal window. You can either press Ctrl-C, or you can use your pointer to close the terminal window. The loop runs forever, because there is no way for the test condition to ever fail. The programmer probably meant to add a line that increments current_number by 1 each time through the loop:
###Code
###highlight=[7]
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number + 1
###Output
1
2
3
4
5
###Markdown
You will certainly make some loops run infinitely at some point. When you do, just interrupt the loop and figure out the logical error you made. Infinite loops will not be a real problem until you have users who run your programs on their machines. You won't want infinite loops then, because your users would have to shut down your program, and they would consider it buggy and unreliable. Learn to spot infinite loops, and make sure they don't pop up in your polished programs later on. Here is one more example of an accidental infinite loop:
###Code
current_number = 1
# Count up to 5, printing the number each time.
while current_number <= 5:
print(current_number)
current_number = current_number - 1
1
0
-1
-2
-3
...
###Output
_____no_output_____ |
sample-input/ipython-notebook/mox-assembly.ipynb | ###Markdown
Simulation Runtime Parameters
###Code
num_threads = 4
track_spacing = 0.05
num_azim = 16
tolerance = 1E-5
max_iters = 50
###Output
_____no_output_____
###Markdown
Initialize Materials
###Code
materials = materialize(filename='../c5g7-materials.h5')
print materials.keys()
###Output
[u'UO2', u'MOX-8.7%', u'Fission Chamber', u'MOX-4.3%', u'Water', u'MOX-7%', u'Control Rod', u'Guide Tube']
###Markdown
Create Bounding Surfaces
###Code
# Create ZCylinder for the fuel as well as to discretize the moderator into rings
fuel_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.54)
moderator_inner_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.58)
moderator_outer_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.62)
# Create planes to bound the entire geometry
left = openmoc.XPlane(x=-10.71, name='left')
right = openmoc.XPlane(x=10.71, name='right')
top = openmoc.YPlane(y=10.71, name='top')
bottom = openmoc.YPlane(y=-10.71, name='bottom')
left.setBoundaryType(openmoc.REFLECTIVE)
right.setBoundaryType(openmoc.REFLECTIVE)
top.setBoundaryType(openmoc.REFLECTIVE)
bottom.setBoundaryType(openmoc.REFLECTIVE)
###Output
_____no_output_____
###Markdown
Create Fuel Pins
###Code
# 4.3% MOX pin cell
mox43_cell = openmoc.Cell()
mox43_cell.setFill(materials['MOX-4.3%'])
mox43_cell.setNumRings(3)
mox43_cell.setNumSectors(8)
mox43_cell.addSurface(-1, fuel_radius)
mox43 = openmoc.Universe(name='MOX-4.3%')
mox43.addCell(mox43_cell)
# 7% MOX pin cell
mox7_cell = openmoc.Cell()
mox7_cell.setFill(materials['MOX-7%'])
mox7_cell.setNumRings(3)
mox7_cell.setNumSectors(8)
mox7_cell.addSurface(-1, fuel_radius)
mox7 = openmoc.Universe(name='MOX-7%')
mox7.addCell(mox7_cell)
# 8.7% MOX pin cell
mox87_cell = openmoc.Cell()
mox87_cell.setFill(materials['MOX-8.7%'])
mox87_cell.setNumRings(3)
mox87_cell.setNumSectors(8)
mox87_cell.addSurface(-1, fuel_radius)
mox87 = openmoc.Universe(name='MOX-8.7%')
mox87.addCell(mox87_cell)
# Fission chamber pin cell
fission_chamber_cell = openmoc.Cell()
fission_chamber_cell.setFill(materials['Fission Chamber'])
fission_chamber_cell.setNumRings(3)
fission_chamber_cell.setNumSectors(8)
fission_chamber_cell.addSurface(-1, fuel_radius)
fission_chamber = openmoc.Universe(name='Fission Chamber')
fission_chamber.addCell(fission_chamber_cell)
# Guide tube pin cell
guide_tube_cell = openmoc.Cell()
guide_tube_cell.setFill(materials['Guide Tube'])
guide_tube_cell.setNumRings(3)
guide_tube_cell.setNumSectors(8)
guide_tube_cell.addSurface(-1, fuel_radius)
guide_tube = openmoc.Universe(name='Guide Tube')
guide_tube.addCell(guide_tube_cell)
# Moderator rings
moderator_ring1 = openmoc.Cell()
moderator_ring2 = openmoc.Cell()
moderator_ring3 = openmoc.Cell()
moderator_ring1.setNumSectors(8)
moderator_ring2.setNumSectors(8)
moderator_ring3.setNumSectors(8)
moderator_ring1.setFill(materials['Water'])
moderator_ring2.setFill(materials['Water'])
moderator_ring3.setFill(materials['Water'])
moderator_ring1.addSurface(+1, fuel_radius)
moderator_ring1.addSurface(-1, moderator_inner_radius)
moderator_ring2.addSurface(+1, moderator_inner_radius)
moderator_ring2.addSurface(-1, moderator_outer_radius)
moderator_ring3.addSurface(+1, moderator_outer_radius)
# Add moderator rings to each pin cell
pins = [mox43, mox7, mox87, fission_chamber, guide_tube]
for pin in pins:
pin.addCell(moderator_ring1)
pin.addCell(moderator_ring2)
pin.addCell(moderator_ring3)
# CellFills for the assembly
assembly1_cell = openmoc.Cell(name='Assembly 1')
assembly1 = openmoc.Universe(name='Assembly 1')
assembly1.addCell(assembly1_cell)
###Output
_____no_output_____
###Markdown
Create Fuel Assembly
###Code
# A mixed enrichment PWR MOX fuel assembly
assembly = openmoc.Lattice(name='MOX Assembly')
assembly.setWidth(width_x=1.26, width_y=1.26)
# Create a template to map to pin cell types
template = [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 5, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
universes = {1 : mox43, 2 : mox7, 3 : mox87,
4 : guide_tube, 5 : fission_chamber}
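# Replace each ID in the template with the corresponding pin cell universe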
for i in range(17):
for j in range(17):
template[i][j] = universes[template[i][j]]
assembly.setUniverses([template])
# Root Cell/Universe
root_cell = openmoc.Cell(name='Full Geometry')
root_cell.setFill(assembly)
root_cell.addSurface(+1, left)
root_cell.addSurface(-1, right)
root_cell.addSurface(-1, top)
root_cell.addSurface(+1, bottom)
root_universe = openmoc.Universe(name='Root Universe')
root_universe.addCell(root_cell)
###Output
_____no_output_____
###Markdown
Initialize CMFD
###Code
cmfd = openmoc.Cmfd()
cmfd.setMOCRelaxationFactor(0.6)
cmfd.setSORRelaxationFactor(1.5)
cmfd.setLatticeStructure(17,17)
cmfd.setGroupStructure([1,4,8])
cmfd.setKNearest(3)
###Output
_____no_output_____
###Markdown
Initialize Geometry
###Code
geometry = openmoc.Geometry()
geometry.setRootUniverse(root_universe)
geometry.setCmfd(cmfd)
geometry.initializeFlatSourceRegions()
# Plot the geometry color-coded by materials
plotter.plot_materials(geometry, gridsize=500)
# Load the figure into Matplotlib
plt.imshow(plt.imread('plots/materials-z-0.0.png'))
plt.axis('off')
# Plot the geometry color-coded by cells
plotter.plot_cells(geometry, gridsize=500)
# Load the figure into Matplotlib
plt.imshow(plt.imread('plots/cells-z-0.0.png'))
plt.axis('off')
###Output
[ NORMAL ] Plotting the cells...
###Markdown
Initialize TrackGenerator
###Code
track_generator = openmoc.TrackGenerator(geometry, num_azim, track_spacing)
track_generator.setNumThreads(num_threads)
track_generator.generateTracks()
# Plot the geometry color-coded by flat source region
plotter.plot_flat_source_regions(geometry, gridsize=500)
# Load the figure into Matplotlib
plt.imshow(plt.imread('plots/flat-source-regions-z-0.0.png'))
plt.axis('off')
# Plot the geometry color-coded by CMFD cells
plotter.plot_cmfd_cells(geometry, cmfd, gridsize=500)
# Load the figure into Matplotlib
plt.imshow(plt.imread('plots/cmfd-cells.png'))
plt.axis('off')
###Output
[ NORMAL ] Plotting the CMFD cells...
###Markdown
Run Simulation
###Code
solver = openmoc.CPUSolver(track_generator)
solver.setConvergenceThreshold(tolerance)
solver.setNumThreads(num_threads)
solver.computeEigenvalue(max_iters)
plotter.plot_spatial_fluxes(solver, energy_groups=[1,3,7], gridsize=500)
# Load fast flux figure into Matplotlib
plt.imshow(plt.imread('plots/fsr-flux-group-1-z-0.0.png'))
plt.axis('off')
# Load epithermal flux figure into Matplotlib
plt.imshow(plt.imread('plots/fsr-flux-group-3-z-0.0.png'))
plt.axis('off')
# Load thermal flux figure into Matplotlib
plt.imshow(plt.imread('plots/fsr-flux-group-7-z-0.0.png'))
plt.axis('off')
plotter.plot_fission_rates(solver, gridsize=500)
# Load FSR fission rates figure into Matplotlib
plt.imshow(plt.imread('plots/fission-rates-z-0.0.png'))
plt.axis('off')
###Output
[ NORMAL ] Plotting the flat source region fission rates...
###Markdown
Simulation Runtime Parameters
###Code
num_threads = 4
azim_spacing = 0.05
num_azim = 16
tolerance = 1E-5
max_iters = 50
###Output
_____no_output_____
###Markdown
Initialize Materials
###Code
materials = load_from_hdf5(filename='c5g7-mgxs.h5', directory='..')
print(materials.keys())
###Output
dict_keys(['MOX-7%', 'Water', 'Fission Chamber', 'Control Rod', 'UO2', 'MOX-8.7%', 'Guide Tube', 'MOX-4.3%'])
###Markdown
Create Bounding Surfaces
###Code
# Create ZCylinder for the fuel
fuel_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.54)
# Create planes to bound the entire geometry
boundary = openmoc.RectangularPrism(21.32, 21.32)
boundary.setBoundaryType(openmoc.REFLECTIVE)
###Output
_____no_output_____
###Markdown
Create Fuel Pins
###Code
# 4.3% MOX pin cell
mox43_cell = openmoc.Cell()
mox43_cell.setFill(materials['MOX-4.3%'])
mox43_cell.setNumRings(3)
mox43_cell.setNumSectors(8)
mox43_cell.addSurface(-1, fuel_radius)
mox43 = openmoc.Universe(name='MOX-4.3%')
mox43.addCell(mox43_cell)
# 7% MOX pin cell
mox7_cell = openmoc.Cell()
mox7_cell.setFill(materials['MOX-7%'])
mox7_cell.setNumRings(3)
mox7_cell.setNumSectors(8)
mox7_cell.addSurface(-1, fuel_radius)
mox7 = openmoc.Universe(name='MOX-7%')
mox7.addCell(mox7_cell)
# 8.7% MOX pin cell
mox87_cell = openmoc.Cell()
mox87_cell.setFill(materials['MOX-8.7%'])
mox87_cell.setNumRings(3)
mox87_cell.setNumSectors(8)
mox87_cell.addSurface(-1, fuel_radius)
mox87 = openmoc.Universe(name='MOX-8.7%')
mox87.addCell(mox87_cell)
# Fission chamber pin cell
fission_chamber_cell = openmoc.Cell()
fission_chamber_cell.setFill(materials['Fission Chamber'])
fission_chamber_cell.setNumRings(3)
fission_chamber_cell.setNumSectors(8)
fission_chamber_cell.addSurface(-1, fuel_radius)
fission_chamber = openmoc.Universe(name='Fission Chamber')
fission_chamber.addCell(fission_chamber_cell)
# Guide tube pin cell
guide_tube_cell = openmoc.Cell()
guide_tube_cell.setFill(materials['Guide Tube'])
guide_tube_cell.setNumRings(3)
guide_tube_cell.setNumSectors(8)
guide_tube_cell.addSurface(-1, fuel_radius)
guide_tube = openmoc.Universe(name='Guide Tube')
guide_tube.addCell(guide_tube_cell)
# Moderator rings
moderator = openmoc.Cell()
moderator.setFill(materials['Water'])
moderator.addSurface(+1, fuel_radius)
moderator.setNumRings(3)
moderator.setNumSectors(8)
# Add moderator rings to each pin cell
pins = [mox43, mox7, mox87, fission_chamber, guide_tube]
for pin in pins:
pin.addCell(moderator)
# CellFills for the assembly
assembly1_cell = openmoc.Cell(name='Assembly 1')
assembly1 = openmoc.Universe(name='Assembly 1')
assembly1.addCell(assembly1_cell)
###Output
_____no_output_____
###Markdown
Create Fuel Assembly
###Code
# A mixed enrichment PWR MOX fuel assembly
assembly = openmoc.Lattice(name='MOX Assembly')
assembly.setWidth(width_x=1.26, width_y=1.26)
# Create a template to map to pin cell types
template = [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 5, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
universes = {1 : mox43, 2 : mox7, 3 : mox87,
4 : guide_tube, 5 : fission_chamber}
for i in range(17):
for j in range(17):
template[i][j] = universes[template[i][j]]
assembly.setUniverses([template])
# Root Cell/Universe
root_cell = openmoc.Cell(name='Full Geometry')
root_cell.setFill(assembly)
root_cell.setRegion(boundary)
root_universe = openmoc.Universe(name='Root Universe')
root_universe.addCell(root_cell)
###Output
_____no_output_____
###Markdown
Initialize CMFD
###Code
cmfd = openmoc.Cmfd()
cmfd.setSORRelaxationFactor(1.5)
cmfd.setLatticeStructure(17,17)
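# Collapse the seven energy groups into two coarse CMFD groups (groups 1-3 and 4-7)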
cmfd.setGroupStructure([[1,2,3], [4,5,6,7]])
cmfd.setKNearest(3)
###Output
_____no_output_____
###Markdown
Initialize Geometry
###Code
geometry = openmoc.Geometry()
geometry.setRootUniverse(root_universe)
geometry.setCmfd(cmfd)
# Plot the geometry color-coded by materials
fig = plotter.plot_materials(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by cells
fig = plotter.plot_cells(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the cells...
###Markdown
Initialize TrackGenerator
###Code
track_generator = openmoc.TrackGenerator(geometry, num_azim, azim_spacing)
track_generator.setNumThreads(num_threads)
track_generator.generateTracks()
# Plot the geometry color-coded by flat source region
fig = plotter.plot_flat_source_regions(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by CMFD cells
fig = plotter.plot_cmfd_cells(geometry, cmfd, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the CMFD cells...
###Markdown
Run Simulation
###Code
solver = openmoc.CPUSolver(track_generator)
solver.setConvergenceThreshold(tolerance)
solver.setNumThreads(num_threads)
solver.computeEigenvalue(max_iters)
# Plot fast, epithermal and thermal flux
figures = plotter.plot_spatial_fluxes(solver, energy_groups=[1,3,7],
gridsize=500, get_figure=True)
# map() is lazy in Python 3 and never applies the call, so use an explicit loop.
for fig in figures:
    fig.set_figheight(4)
plt.show()
# Plots FSR fission rates
fig = plotter.plot_fission_rates(solver, gridsize=250,
norm=True, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the flat source region fission rates...
###Markdown
Simulation Runtime Parameters
###Code
num_threads = 4
track_spacing = 0.05
num_azim = 16
tolerance = 1E-5
max_iters = 50
###Output
_____no_output_____
###Markdown
Initialize Materials
###Code
materials = load_from_hdf5(filename='c5g7-mgxs.h5', directory='..')
print materials.keys()
###Output
['UO2', 'MOX-8.7%', 'Fission Chamber', 'MOX-4.3%', 'Water', 'MOX-7%', 'Control Rod', 'Guide Tube']
###Markdown
Create Bounding Surfaces
###Code
# Create ZCylinder for the fuel
fuel_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.54)
# Create planes to bound the entire geometry
left = openmoc.XPlane(x=-10.71, name='left')
right = openmoc.XPlane(x=10.71, name='right')
top = openmoc.YPlane(y=10.71, name='top')
bottom = openmoc.YPlane(y=-10.71, name='bottom')
left.setBoundaryType(openmoc.REFLECTIVE)
right.setBoundaryType(openmoc.REFLECTIVE)
top.setBoundaryType(openmoc.REFLECTIVE)
bottom.setBoundaryType(openmoc.REFLECTIVE)
###Output
_____no_output_____
###Markdown
Create Fuel Pins
###Code
# 4.3% MOX pin cell
mox43_cell = openmoc.Cell()
mox43_cell.setFill(materials['MOX-4.3%'])
mox43_cell.setNumRings(3)
mox43_cell.setNumSectors(8)
mox43_cell.addSurface(-1, fuel_radius)
mox43 = openmoc.Universe(name='MOX-4.3%')
mox43.addCell(mox43_cell)
# 7% MOX pin cell
mox7_cell = openmoc.Cell()
mox7_cell.setFill(materials['MOX-7%'])
mox7_cell.setNumRings(3)
mox7_cell.setNumSectors(8)
mox7_cell.addSurface(-1, fuel_radius)
mox7 = openmoc.Universe(name='MOX-7%')
mox7.addCell(mox7_cell)
# 8.7% MOX pin cell
mox87_cell = openmoc.Cell()
mox87_cell.setFill(materials['MOX-8.7%'])
mox87_cell.setNumRings(3)
mox87_cell.setNumSectors(8)
mox87_cell.addSurface(-1, fuel_radius)
mox87 = openmoc.Universe(name='MOX-8.7%')
mox87.addCell(mox87_cell)
# Fission chamber pin cell
fission_chamber_cell = openmoc.Cell()
fission_chamber_cell.setFill(materials['Fission Chamber'])
fission_chamber_cell.setNumRings(3)
fission_chamber_cell.setNumSectors(8)
fission_chamber_cell.addSurface(-1, fuel_radius)
fission_chamber = openmoc.Universe(name='Fission Chamber')
fission_chamber.addCell(fission_chamber_cell)
# Guide tube pin cell
guide_tube_cell = openmoc.Cell()
guide_tube_cell.setFill(materials['Guide Tube'])
guide_tube_cell.setNumRings(3)
guide_tube_cell.setNumSectors(8)
guide_tube_cell.addSurface(-1, fuel_radius)
guide_tube = openmoc.Universe(name='Guide Tube')
guide_tube.addCell(guide_tube_cell)
# Moderator rings
moderator = openmoc.Cell()
moderator.setFill(materials['Water'])
moderator.addSurface(+1, fuel_radius)
moderator.setNumRings(3)
moderator.setNumSectors(8)
# Add moderator rings to each pin cell
pins = [mox43, mox7, mox87, fission_chamber, guide_tube]
for pin in pins:
pin.addCell(moderator)
# CellFills for the assembly
assembly1_cell = openmoc.Cell(name='Assembly 1')
assembly1 = openmoc.Universe(name='Assembly 1')
assembly1.addCell(assembly1_cell)
###Output
_____no_output_____
###Markdown
Create Fuel Assembly
###Code
# A mixed enrichment PWR MOX fuel assembly
assembly = openmoc.Lattice(name='MOX Assembly')
assembly.setWidth(width_x=1.26, width_y=1.26)
# Create a template to map to pin cell types
template = [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 5, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
universes = {1 : mox43, 2 : mox7, 3 : mox87,
4 : guide_tube, 5 : fission_chamber}
for i in range(17):
for j in range(17):
template[i][j] = universes[template[i][j]]
assembly.setUniverses([template])
# Root Cell/Universe
root_cell = openmoc.Cell(name='Full Geometry')
root_cell.setFill(assembly)
root_cell.addSurface(+1, left)
root_cell.addSurface(-1, right)
root_cell.addSurface(-1, top)
root_cell.addSurface(+1, bottom)
root_universe = openmoc.Universe(name='Root Universe')
root_universe.addCell(root_cell)
###Output
_____no_output_____
###Markdown
Initialize CMFD
###Code
cmfd = openmoc.Cmfd()
cmfd.setSORRelaxationFactor(1.5)
cmfd.setLatticeStructure(17,17)
cmfd.setGroupStructure([1,4,8])
cmfd.setKNearest(3)
###Output
_____no_output_____
###Markdown
Initialize Geometry
###Code
geometry = openmoc.Geometry()
geometry.setRootUniverse(root_universe)
geometry.setCmfd(cmfd)
# Plot the geometry color-coded by materials
fig = plotter.plot_materials(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by cells
fig = plotter.plot_cells(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the cells...
###Markdown
Initialize TrackGenerator
###Code
track_generator = openmoc.TrackGenerator(geometry, num_azim, track_spacing)
track_generator.setNumThreads(num_threads)
track_generator.generateTracks()
# Plot the geometry color-coded by flat source region
fig = plotter.plot_flat_source_regions(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by CMFD cells
fig = plotter.plot_cmfd_cells(geometry, cmfd, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the CMFD cells...
###Markdown
Run Simulation
###Code
solver = openmoc.CPUSolver(track_generator)
solver.setConvergenceThreshold(tolerance)
solver.setNumThreads(num_threads)
solver.computeEigenvalue(max_iters)
# Plot fast, epithermal and thermal flux
figures = plotter.plot_spatial_fluxes(solver, energy_groups=[1,3,7],
gridsize=500, get_figure=True)
map(lambda fig: fig.set_figheight(4), figures)
plt.show()
# Plots FSR fission rates
fig = plotter.plot_fission_rates(solver, gridsize=250,
norm=True, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the flat source region fission rates...
###Markdown
Simulation Runtime Parameters
###Code
num_threads = 4
azim_spacing = 0.05
num_azim = 16
tolerance = 1E-5
max_iters = 50
###Output
_____no_output_____
###Markdown
Initialize Materials
###Code
materials = load_from_hdf5(filename='c5g7-mgxs.h5', directory='..')
print materials.keys()
###Output
['UO2', 'MOX-8.7%', 'Fission Chamber', 'MOX-4.3%', 'Water', 'MOX-7%', 'Control Rod', 'Guide Tube']
###Markdown
Create Bounding Surfaces
###Code
# Create ZCylinder for the fuel
fuel_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.54)
# Create planes to bound the entire geometry
left = openmoc.XPlane(x=-10.71, name='left')
right = openmoc.XPlane(x=10.71, name='right')
top = openmoc.YPlane(y=10.71, name='top')
bottom = openmoc.YPlane(y=-10.71, name='bottom')
left.setBoundaryType(openmoc.REFLECTIVE)
right.setBoundaryType(openmoc.REFLECTIVE)
top.setBoundaryType(openmoc.REFLECTIVE)
bottom.setBoundaryType(openmoc.REFLECTIVE)
###Output
_____no_output_____
###Markdown
Create Fuel Pins
###Code
# 4.3% MOX pin cell
mox43_cell = openmoc.Cell()
mox43_cell.setFill(materials['MOX-4.3%'])
mox43_cell.setNumRings(3)
mox43_cell.setNumSectors(8)
mox43_cell.addSurface(-1, fuel_radius)
mox43 = openmoc.Universe(name='MOX-4.3%')
mox43.addCell(mox43_cell)
# 7% MOX pin cell
mox7_cell = openmoc.Cell()
mox7_cell.setFill(materials['MOX-7%'])
mox7_cell.setNumRings(3)
mox7_cell.setNumSectors(8)
mox7_cell.addSurface(-1, fuel_radius)
mox7 = openmoc.Universe(name='MOX-7%')
mox7.addCell(mox7_cell)
# 8.7% MOX pin cell
mox87_cell = openmoc.Cell()
mox87_cell.setFill(materials['MOX-8.7%'])
mox87_cell.setNumRings(3)
mox87_cell.setNumSectors(8)
mox87_cell.addSurface(-1, fuel_radius)
mox87 = openmoc.Universe(name='MOX-8.7%')
mox87.addCell(mox87_cell)
# Fission chamber pin cell
fission_chamber_cell = openmoc.Cell()
fission_chamber_cell.setFill(materials['Fission Chamber'])
fission_chamber_cell.setNumRings(3)
fission_chamber_cell.setNumSectors(8)
fission_chamber_cell.addSurface(-1, fuel_radius)
fission_chamber = openmoc.Universe(name='Fission Chamber')
fission_chamber.addCell(fission_chamber_cell)
# Guide tube pin cell
guide_tube_cell = openmoc.Cell()
guide_tube_cell.setFill(materials['Guide Tube'])
guide_tube_cell.setNumRings(3)
guide_tube_cell.setNumSectors(8)
guide_tube_cell.addSurface(-1, fuel_radius)
guide_tube = openmoc.Universe(name='Guide Tube')
guide_tube.addCell(guide_tube_cell)
# Moderator rings
moderator = openmoc.Cell()
moderator.setFill(materials['Water'])
moderator.addSurface(+1, fuel_radius)
moderator.setNumRings(3)
moderator.setNumSectors(8)
# Add moderator rings to each pin cell
pins = [mox43, mox7, mox87, fission_chamber, guide_tube]
for pin in pins:
pin.addCell(moderator)
# CellFills for the assembly
assembly1_cell = openmoc.Cell(name='Assembly 1')
assembly1 = openmoc.Universe(name='Assembly 1')
assembly1.addCell(assembly1_cell)
###Output
_____no_output_____
###Markdown
Create Fuel Assembly
###Code
# A mixed enrichment PWR MOX fuel assembly
assembly = openmoc.Lattice(name='MOX Assembly')
assembly.setWidth(width_x=1.26, width_y=1.26)
# Create a template to map to pin cell types
template = [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 5, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
universes = {1 : mox43, 2 : mox7, 3 : mox87,
4 : guide_tube, 5 : fission_chamber}
for i in range(17):
for j in range(17):
template[i][j] = universes[template[i][j]]
assembly.setUniverses([template])
# Root Cell/Universe
root_cell = openmoc.Cell(name='Full Geometry')
root_cell.setFill(assembly)
root_cell.addSurface(+1, left)
root_cell.addSurface(-1, right)
root_cell.addSurface(-1, top)
root_cell.addSurface(+1, bottom)
root_universe = openmoc.Universe(name='Root Universe')
root_universe.addCell(root_cell)
###Output
_____no_output_____
###Markdown
Initialize CMFD
###Code
cmfd = openmoc.Cmfd()
cmfd.setSORRelaxationFactor(1.5)
cmfd.setLatticeStructure(17,17)
cmfd.setGroupStructure([[1,2,3], [4,5,6,7]])
cmfd.setKNearest(3)
###Output
_____no_output_____
###Markdown
Initialize Geometry
###Code
geometry = openmoc.Geometry()
geometry.setRootUniverse(root_universe)
geometry.setCmfd(cmfd)
# Plot the geometry color-coded by materials
fig = plotter.plot_materials(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by cells
fig = plotter.plot_cells(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the cells...
###Markdown
Initialize TrackGenerator
###Code
track_generator = openmoc.TrackGenerator(geometry, num_azim, azim_spacing)
track_generator.setNumThreads(num_threads)
track_generator.generateTracks()
# Plot the geometry color-coded by flat source region
fig = plotter.plot_flat_source_regions(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by CMFD cells
fig = plotter.plot_cmfd_cells(geometry, cmfd, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the CMFD cells...
###Markdown
Run Simulation
###Code
solver = openmoc.CPUSolver(track_generator)
solver.setConvergenceThreshold(tolerance)
solver.setNumThreads(num_threads)
solver.computeEigenvalue(max_iters)
# Plot fast, epithermal and thermal flux
figures = plotter.plot_spatial_fluxes(solver, energy_groups=[1,3,7],
gridsize=500, get_figure=True)
map(lambda fig: fig.set_figheight(4), figures)
plt.show()
# Plots FSR fission rates
fig = plotter.plot_fission_rates(solver, gridsize=250,
norm=True, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the flat source region fission rates...
###Markdown
Simulation Runtime Parameters
###Code
num_threads = 4
azim_spacing = 0.05
num_azim = 16
tolerance = 1E-5
max_iters = 50
###Output
_____no_output_____
###Markdown
Initialize Materials
###Code
materials = load_from_hdf5(filename='c5g7-mgxs.h5', directory='..')
print(materials.keys())
###Output
dict_keys(['MOX-7%', 'Water', 'Fission Chamber', 'Control Rod', 'UO2', 'MOX-8.7%', 'Guide Tube', 'MOX-4.3%'])
###Markdown
Create Bounding Surfaces
###Code
# Create ZCylinder for the fuel
fuel_radius = openmoc.ZCylinder(x=0.0, y=0.0, radius=0.54)
# Create planes to bound the entire geometry
boundary = openmoc.RectangularPrism(21.32, 21.32)
boundary.setBoundaryType(openmoc.REFLECTIVE)
###Output
_____no_output_____
###Markdown
Create Fuel Pins
###Code
# 4.3% MOX pin cell
mox43_cell = openmoc.Cell()
mox43_cell.setFill(materials['MOX-4.3%'])
mox43_cell.setNumRings(3)
mox43_cell.setNumSectors(8)
mox43_cell.addSurface(-1, fuel_radius)
mox43 = openmoc.Universe(name='MOX-4.3%')
mox43.addCell(mox43_cell)
# 7% MOX pin cell
mox7_cell = openmoc.Cell()
mox7_cell.setFill(materials['MOX-7%'])
mox7_cell.setNumRings(3)
mox7_cell.setNumSectors(8)
mox7_cell.addSurface(-1, fuel_radius)
mox7 = openmoc.Universe(name='MOX-7%')
mox7.addCell(mox7_cell)
# 8.7% MOX pin cell
mox87_cell = openmoc.Cell()
mox87_cell.setFill(materials['MOX-8.7%'])
mox87_cell.setNumRings(3)
mox87_cell.setNumSectors(8)
mox87_cell.addSurface(-1, fuel_radius)
mox87 = openmoc.Universe(name='MOX-8.7%')
mox87.addCell(mox87_cell)
# Fission chamber pin cell
fission_chamber_cell = openmoc.Cell()
fission_chamber_cell.setFill(materials['Fission Chamber'])
fission_chamber_cell.setNumRings(3)
fission_chamber_cell.setNumSectors(8)
fission_chamber_cell.addSurface(-1, fuel_radius)
fission_chamber = openmoc.Universe(name='Fission Chamber')
fission_chamber.addCell(fission_chamber_cell)
# Guide tube pin cell
guide_tube_cell = openmoc.Cell()
guide_tube_cell.setFill(materials['Guide Tube'])
guide_tube_cell.setNumRings(3)
guide_tube_cell.setNumSectors(8)
guide_tube_cell.addSurface(-1, fuel_radius)
guide_tube = openmoc.Universe(name='Guide Tube')
guide_tube.addCell(guide_tube_cell)
# Moderator rings
moderator = openmoc.Cell()
moderator.setFill(materials['Water'])
moderator.addSurface(+1, fuel_radius)
moderator.setNumRings(3)
moderator.setNumSectors(8)
# Add moderator rings to each pin cell
pins = [mox43, mox7, mox87, fission_chamber, guide_tube]
for pin in pins:
pin.addCell(moderator)
# CellFills for the assembly
assembly1_cell = openmoc.Cell(name='Assembly 1')
assembly1 = openmoc.Universe(name='Assembly 1')
assembly1.addCell(assembly1_cell)
###Output
_____no_output_____
###Markdown
Create Fuel Assembly
###Code
# A mixed enrichment PWR MOX fuel assembly
assembly = openmoc.Lattice(name='MOX Assembly')
assembly.setWidth(width_x=1.26, width_y=1.26)
# Create a template to map to pin cell types
template = [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 5, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 1],
[1, 2, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 3, 3, 4, 2, 1],
[1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 2, 1],
[1, 2, 2, 4, 2, 3, 3, 3, 3, 3, 3, 3, 2, 4, 2, 2, 1],
[1, 2, 2, 2, 2, 4, 2, 2, 4, 2, 2, 4, 2, 2, 2, 2, 1],
[1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
universes = {1 : mox43, 2 : mox7, 3 : mox87,
4 : guide_tube, 5 : fission_chamber}
for i in range(17):
for j in range(17):
template[i][j] = universes[template[i][j]]
assembly.setUniverses([template])
# Root Cell/Universe
root_cell = openmoc.Cell(name='Full Geometry')
root_cell.setFill(assembly)
root_cell.setRegion(boundary)
root_universe = openmoc.Universe(name='Root Universe')
root_universe.addCell(root_cell)
###Output
_____no_output_____
###Markdown
Initialize CMFD
###Code
cmfd = openmoc.Cmfd()
cmfd.setCMFDRelaxationFactor(0.7)
cmfd.setLatticeStructure(17,17)
cmfd.setGroupStructure([[1,2,3], [4,5,6,7]])
cmfd.setKNearest(3)
###Output
_____no_output_____
###Markdown
Initialize Geometry
###Code
geometry = openmoc.Geometry()
geometry.setRootUniverse(root_universe)
geometry.setCmfd(cmfd)
geometry.initializeFlatSourceRegions()
# Plot the geometry color-coded by materials
fig = plotter.plot_materials(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by cells
fig = plotter.plot_cells(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the cells...
###Markdown
Initialize TrackGenerator
###Code
track_generator = openmoc.TrackGenerator(geometry, num_azim, azim_spacing)
track_generator.setNumThreads(num_threads)
track_generator.generateTracks()
# Plot the geometry color-coded by flat source region
fig = plotter.plot_flat_source_regions(geometry, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
# Plot the geometry color-coded by CMFD cells
fig = plotter.plot_cmfd_cells(geometry, cmfd, gridsize=500, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the CMFD cells...
###Markdown
Run Simulation
###Code
solver = openmoc.CPUSolver(track_generator)
solver.setConvergenceThreshold(tolerance)
solver.setNumThreads(num_threads)
solver.computeEigenvalue(max_iters)
# Plot fast, epithermal and thermal flux
figures = plotter.plot_spatial_fluxes(solver, energy_groups=[1,3,7],
gridsize=500, get_figure=True)
list(map(lambda fig: fig.set_figheight(4), figures)) # wrap in list() so the lazy Python 3 map actually applies the height
plt.show()
# Plots FSR fission rates
fig = plotter.plot_fission_rates(solver, gridsize=250,
norm=True, get_figure=True)
fig.set_figheight(4)
plt.show()
###Output
[ NORMAL ] Plotting the flat source region fission rates...
|
demos/transforming_annos-Copy1.ipynb | ###Markdown
Importing Dependencies. Instance Segmentation of Powder Particles and Satellites. This example is used to generate a visualization of an individual image.
###Code
## regular module imports
import cv2
import json
import matplotlib.pyplot as plt
import numpy as np
import os
from pathlib import Path
import pickle
import skimage.io
import sys
## detectron2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import (
DatasetCatalog,
MetadataCatalog,
)
from detectron2.engine import DefaultTrainer, DefaultPredictor
from detectron2.structures import BoxMode
#from detectron2.evaluation import coco_evaluation
from detectron2.data.datasets.coco import convert_to_coco_json
from detectron2.evaluation.coco_evaluation import instances_to_coco_json
from detectron2.utils.visualizer import GenericMask
import pycocotools.mask as mask_util
from skimage import measure
from imantics import Polygons, Mask
###Output
_____no_output_____
###Markdown
Setting System Path
###Code
root = '../'
sys.path.append(root)
from sat_helpers import data_utils, visualize, export_anno
EXPERIMENT_NAME = 'satellite' # can be 'particles' or 'satellite'
###Output
_____no_output_____
###Markdown
Establishing Methods
###Code
def flip_save_image(name, horizontally, vertically, save=True):
new_name = name
img_path = Path('Auto_annotate_images', name +'.png') # use the name argument (the global image_name was referenced here by mistake)
img = cv2.imread(str(img_path))
if horizontally:
new_name += 'x'
img = cv2.flip(img, 1)
if vertically:
new_name += 'y'
img = cv2.flip(img, 0)
new_img_path = Path('Auto_annotate_images', new_name +'.png')
if save:
cv2.imwrite(str(new_img_path), img)
return new_name
def invert_list(input_list, list_range):
output_list = []
for i in input_list:
output_list.append(i)
for i in range(len(output_list)):
output_list[i] = list_range - output_list[i]
return output_list
def invert_shape(input_dict, img_width, img_height, horizontal, vertical):
if horizontal:
input_dict['shape_attributes']['all_points_x'] = invert_list(input_dict['shape_attributes']['all_points_x'], img_width)
if vertical:
input_dict['shape_attributes']['all_points_y'] = invert_list(input_dict['shape_attributes']['all_points_y'], img_height)
return input_dict
def invert_x_y_regions(input_list, img_width, img_height, horizontal, vertical):
output_list = []
for i in input_list:
output_list.append(invert_shape(i, img_width, img_height, horizontal, vertical))
return output_list
###TODO: Finish up this method. The name of the image must be changed, including the additional image size
###Then these methods must be created for both horizontal and vertical shifts
###Create an automated program to create all of the necessary images and test http://www.learningaboutelectronics.com/Articles/How-to-flip-an-image-horizontally-vertically-in-Python-OpenCV.php#:~:text=To%20horizontally%20flip%20an%20image,1%20(for%20horizontal%20flipping).
###Import new docs into VIA and see how they look
def flip_and_save(name, horizontally, vertically, save=True):
new_name = name
img_path = Path(root, '..', 'SEM_Images', 'initial_paper_complete_set', name +'.png')
img = cv2.imread(str(img_path))
if horizontally:
new_name += 'X'
img = cv2.flip(img, 1)
if vertically:
new_name += 'Y'
img = cv2.flip(img, 0)
new_img_path = Path(root, '..', 'SEM_Images', 'initial_paper_complete_set', 'geometric', new_name +'.png')
if save:
cv2.imwrite(str(new_img_path), img)
return new_name
print('')
from PIL import Image, ImageEnhance # Image and ImageEnhance are used below but were not imported at the top of the notebook
def color_and_save(name, transformation):
#transformation: 0-1 = darker, 1 = no change, 1+ = lighter
im = Image.open(root + '../SEM_Images/initial_paper_complete_set/geometric/' + name + '.png')
enhancer = ImageEnhance.Brightness(im)
factor = transformation
im_output = enhancer.enhance(factor)
name_change = name
if factor < 1:
name_change += 'd'
elif factor > 1:
name_change += 'b'
else:
name_change += 's'
im_output.save(root + '../SEM_Images/initial_paper_complete_set/photometric/' + name_change + '.png')
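# Example usage of the helpers above (a hypothetical sketch, not executed in this notebook;
# it assumes the SEM image directories referenced in the functions exist):
# flipped_name = flip_and_save('S02_02_SE1_300X18', horizontally=True, vertically=False) # -> 'S02_02_SE1_300X18X'
# color_and_save(flipped_name, 0.8) # factor < 1 writes a darkened copy with a 'd' suffix
# color_and_save(flipped_name, 1.0) # factor == 1 keeps brightness and appends 's'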
image_name = "S02_02_SE1_300X18"
img_path = Path(root, 'data', 'SEM', image_name +'.png')
image_size = os.path.getsize(img_path)
print(image_size)
import PIL
image = PIL.Image.open(img_path)
width, height = image.size
print(width, height)
###Output
491805
1024 768
###Markdown
Transforming Annotations. Below are procedures to transform annotations so that they match the data augmentation applied to the images. These transformations will be saved as JSON files. Take the resulting JSON file and import it into VIA. From there, load a couple of images to verify that the annotated satellite locations match the satellite locations in the image itself. Add any settings you wish and save as a VIA project; this may now be used as a training file. Collecting Image Information. Knowing the pixel resolution and file size is essential for creating new annotations for the augmented images. Loading in annotations
###Code
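# A minimal sketch (not from the project code) of the coordinate inversion applied to flipped images:
# a horizontal flip maps x -> width - x and a vertical flip maps y -> height - y, which is what the
# invert_list()/invert_shape() helpers above apply to the VIA 'all_points_x'/'all_points_y' arrays.
demo_width, demo_height = 1024, 768
demo_xs = [10, 500, 1000]
demo_ys = [20, 300, 700]
assert [demo_width - x for x in demo_xs] == [1014, 524, 24]
assert [demo_height - y for y in demo_ys] == [748, 468, 68]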
json_path_train = Path('..', 'data', 'VIA', f'{EXPERIMENT_NAME}_training.json') # path to training data
assert json_path_train.is_file(), 'training file not found!'
f = open(json_path_train)
data = json.load(f)
###Output
_____no_output_____
###Markdown
Transforming Annotations for Photometric and Geometric Transformations [In Progress of Editing]
###Code
new_annos = []
new_dict = {}
for i in data['_via_img_metadata']:
image_names = []
image_sizes = []
img_name = i.split('.')[0]
image_names.append(img_name+'s') #Standard: Unchanged Photo or Geo
image_names.append(img_name+'d') #Darker: Unchanged Geo, darkened image
image_names.append(img_name+'b') #Brighter: Unchanged Geo, Brightened Image
image_names.append(img_name+'Xb')
image_names.append(img_name+'Xd')
image_names.append(img_name+'Xs')
image_names.append(img_name+'Yb')
image_names.append(img_name+'Yd')
image_names.append(img_name+'Ys')
image_names.append(img_name+'XYs')
image_names.append(img_name+'XYb')
image_names.append(img_name+'XYd')
for j in image_names:
image_sizes.append(os.path.getsize(Path(root, 'data', 'SEM', 'photometric', j +'.png')))
writable_dict = {'regions': data['_via_img_metadata'][i]['regions']}
with open('temp_dict1.json', 'w') as t:
json.dump(writable_dict, t)
with open('temp_dict2.json', 'w') as t:
json.dump(writable_dict, t)
with open('temp_dict3.json', 'w') as t:
json.dump(writable_dict, t)
with open('temp_dict4.json', 'w') as t:
json.dump(writable_dict, t)
json_temp_path1 = Path('temp_dict1.json')
json_temp1 = open(json_temp_path1)
initial = json.load(json_temp1)
json_temp_path2 = Path('temp_dict2.json')
json_temp2 = open(json_temp_path2)
inverted_x = json.load(json_temp2)
json_temp_path3 = Path('temp_dict3.json')
json_temp3 = open(json_temp_path3)
inverted_y = json.load(json_temp3)
json_temp_path4 = Path('temp_dict4.json')
json_temp4 = open(json_temp_path4)
inverted_xy = json.load(json_temp4)
inverted_x['regions'] = invert_x_y_regions(inverted_x['regions'], 1024, 768, False, True)
inverted_y['regions'] = invert_x_y_regions(inverted_y['regions'], 1024, 768, True, False)
inverted_xy['regions'] = invert_x_y_regions(inverted_xy['regions'], 1024, 768, True, True)
print('-'*30)
for k in range(len(image_names)):
temp_dict = {}
temp_dict['filename'] = image_names[k] + '.png'
temp_dict['size'] = image_sizes[k]
if k < 3:
temp_dict['regions'] = initial['regions']
elif k < 6:
temp_dict['regions'] = inverted_y['regions']
elif k < 9:
temp_dict['regions'] = inverted_x['regions']
elif k < 12:
temp_dict['regions'] = inverted_xy['regions']
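# Note: the key concatenates filename and file size in bytes, which appears to match how VIA identifies images in its project JSON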
new_dict[image_names[k] +'.png' + str(image_sizes[k])] = temp_dict
###Output
_____no_output_____
###Markdown
Saving Annotations
###Code
with open(ocean_images + '/satellite_auto_training_v2.6.json', 'w') as f:
json.dump(new_dict, f)
#print("Number of Images", str(len(new_annos)))
#print(new_dict)
###Output
_____no_output_____
###Markdown
Transforming Annotations for Geometric Transformations
###Code
new_annos = []
new_dict = {}
for i in data['_via_img_metadata']:
image_names = []
image_sizes = []
img_name = i.split('.')[0]
image_names.append(img_name)
image_names.append(img_name+'X') #Augmented Over X Axis
image_names.append(img_name+'Y') #Augmented Over y Axis
image_names.append(img_name+'XY')#Augmented Over X and Y Axis
for j in image_names:
image_sizes.append(os.path.getsize(Path(ocean_images, 'geometric', j +'.png')))
#print(data['_via_img_metadata'][i])
writable_dict = {'regions': data['_via_img_metadata'][i]['regions']}
#print(writable_dict)
with open(ocean_images + '/temp_dict1.json', 'w') as t:
json.dump(writable_dict, t)
with open(ocean_images + '/temp_dict2.json', 'w') as t:
json.dump(writable_dict, t)
with open(ocean_images + '/temp_dict3.json', 'w') as t:
json.dump(writable_dict, t)
with open(ocean_images + '/temp_dict4.json', 'w') as t:
json.dump(writable_dict, t)
json_temp_path1 = Path(ocean_images, 'temp_dict1.json')
json_temp1 = open(json_temp_path1)
initial = json.load(json_temp1)
json_temp_path2 = Path(ocean_images, 'temp_dict2.json')
json_temp2 = open(json_temp_path2)
inverted_x = json.load(json_temp2)
json_temp_path3 = Path(ocean_images, 'temp_dict3.json')
json_temp3 = open(json_temp_path3)
inverted_y = json.load(json_temp3)
json_temp_path4 = Path(ocean_images, 'temp_dict4.json')
json_temp4 = open(json_temp_path4)
inverted_xy = json.load(json_temp4)
inverted_x['regions'] = invert_x_y_regions(inverted_x['regions'], 1024, 768, False, True)
inverted_y['regions'] = invert_x_y_regions(inverted_y['regions'], 1024, 768, True, False)
inverted_xy['regions'] = invert_x_y_regions(inverted_xy['regions'], 1024, 768, True, True)
print('-'*30)
for k in range(len(image_names)):
temp_dict = {}
temp_dict['filename'] = image_names[k] + '.png'
temp_dict['size'] = image_sizes[k]
if k == 0:
temp_dict['regions'] = initial['regions']
elif k == 1:
temp_dict['regions'] = inverted_y['regions']
elif k == 2:
temp_dict['regions'] = inverted_x['regions']
elif k == 3:
temp_dict['regions'] = inverted_xy['regions']
new_dict[image_names[k] +'.png' + str(image_sizes[k])] = temp_dict
###Output
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
###Markdown
Saving Annotations
###Code
with open(ocean_images + '/satellite_auto_training_v3.6.json', 'w') as f:
json.dump(new_dict, f)
#print("Number of Images", str(len(new_annos)))
#print(new_dict)
###Output
_____no_output_____
###Markdown
Transforming Annotations for Photometric Transformations
###Code
new_annos = []
new_dict = {}
for i in data['_via_img_metadata']:
image_names = []
image_sizes = []
img_name = i.split('.')[0]
image_names.append(img_name+'s') #Unchanged
image_names.append(img_name+'b') #Brightened
image_names.append(img_name+'d') #Darkened
for j in image_names:
image_sizes.append(os.path.getsize(Path(ocean_images, 'photometric', j +'.png')))
print('-'*30)
for k in range(len(image_names)):
temp_dict = {}
temp_dict['filename'] = image_names[k] + '.png'
temp_dict['size'] = image_sizes[k]
temp_dict['regions'] = data['_via_img_metadata'][i]['regions']
new_dict[image_names[k] +'.png' + str(image_sizes[k])] = temp_dict
###Output
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
------------------------------
###Markdown
Saving Annotations
###Code
with open(ocean_images + '/satellite_auto_training_v4.6.json', 'w') as f:
json.dump(new_dict, f)
#print("Number of Images", str(len(new_annos)))
#print(new_dict)
###Output
_____no_output_____ |
Lab3-Opt1/bring-custom-script.ipynb | ###Markdown
Lab: Bring your own script with Amazon SageMaker TensorFlow script mode training and serving. Script mode is a training script format for TensorFlow that lets you execute any TensorFlow training script in SageMaker with minimal modification. The [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk) handles transferring your script to a SageMaker training instance. On the training instance, SageMaker's native TensorFlow support sets up training-related environment variables and executes your training script. In this tutorial, we use the SageMaker Python SDK to launch a training job and deploy the trained model. Script mode supports training with a Python script, a Python module, or a shell script. In this example, we use a Python script to train a classification model on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) and show how easily you can train a model on SageMaker using TensorFlow 1.x and TensorFlow 2.0 scripts with the SageMaker Python SDK. In addition, this notebook demonstrates how to perform real-time inference with the [SageMaker TensorFlow Serving container](https://github.com/aws/sagemaker-tensorflow-serving-container). The TensorFlow Serving container is the default inference method for script mode. For full documentation on the TensorFlow Serving container, please visit [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Set up the environment. Let's start by setting up the environment:
###Code
# cell 01
import os
import sagemaker
from sagemaker import get_execution_role
sagemaker_session = sagemaker.Session()
role = get_execution_role()
region = sagemaker_session.boto_session.region_name
###Output
_____no_output_____
###Markdown
Training Data. The MNIST dataset has been loaded to the public S3 buckets `sagemaker-sample-data-<region>` (one per region) under the prefix `tensorflow/mnist`. There are four .npy files under this prefix:
- train_data.npy
- eval_data.npy
- train_labels.npy
- eval_labels.npy
###Code
# cell 02
training_data_uri = 's3://sagemaker-sample-data-{}/tensorflow/mnist'.format(region)
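# Optional sanity check (a sketch only, left commented out): download one of the four .npy files
# listed above with boto3 and inspect it locally. The bucket name pattern comes from the line above.
# import boto3
# import numpy as np
# boto3.client('s3').download_file('sagemaker-sample-data-{}'.format(region),
#                                  'tensorflow/mnist/train_data.npy', 'train_data.npy')
# print(np.load('train_data.npy').shape) # the serving input in mnist.py expects 784-dimensional vectors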
###Output
_____no_output_____
###Markdown
Construct a script for distributed training. This tutorial's training script was adapted from TensorFlow's official [CNN MNIST example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/layers/cnn_mnist.py). We have modified it to handle the `model_dir` parameter passed in by SageMaker. This is an S3 path which can be used for data sharing during distributed training and for checkpointing and/or model persistence. We have also added an argument-parsing function to handle processing training-related variables. At the end of the training job we have added a step to export the trained model to the path stored in the environment variable `SM_MODEL_DIR`, which always points to `/opt/ml/model`. This is critical because SageMaker uploads all the model artifacts in this folder to S3 at the end of training. Here is the entire script:
###Code
# cell 03
!pygmentize 'mnist.py'
# TensorFlow 2.1 script
!pygmentize 'mnist-2.py'
###Output
[37m# Copyright 2018-2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.[39;49;00m
[37m#[39;49;00m
[37m# Licensed under the Apache License, Version 2.0 (the "License"). You[39;49;00m
[37m# may not use this file except in compliance with the License. A copy of[39;49;00m
[37m# the License is located at[39;49;00m
[37m#[39;49;00m
[37m# http://aws.amazon.com/apache2.0/[39;49;00m
[37m#[39;49;00m
[37m# or in the "license" file accompanying this file. This file is[39;49;00m
[37m# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF[39;49;00m
[37m# ANY KIND, either express or implied. See the License for the specific[39;49;00m
[37m# language governing permissions and limitations under the License.[39;49;00m
[33m"""Convolutional Neural Network Estimator for MNIST, built with tf.layers."""[39;49;00m
[34mfrom[39;49;00m [04m[36m__future__[39;49;00m [34mimport[39;49;00m absolute_import
[34mfrom[39;49;00m [04m[36m__future__[39;49;00m [34mimport[39;49;00m division
[34mfrom[39;49;00m [04m[36m__future__[39;49;00m [34mimport[39;49;00m print_function
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mtensorflow[39;49;00m [34mas[39;49;00m [04m[36mtf[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mfrom[39;49;00m [04m[36mtensorflow[39;49;00m[04m[36m.[39;49;00m[04m[36mpython[39;49;00m[04m[36m.[39;49;00m[04m[36mplatform[39;49;00m [34mimport[39;49;00m tf_logging
[34mimport[39;49;00m [04m[36mlogging[39;49;00m [34mas[39;49;00m [04m[36m_logging[39;49;00m
[34mimport[39;49;00m [04m[36msys[39;49;00m [34mas[39;49;00m [04m[36m_sys[39;49;00m
[34mdef[39;49;00m [32mcnn_model_fn[39;49;00m(features, labels, mode):
[33m"""Model function for CNN."""[39;49;00m
[37m# Input Layer[39;49;00m
[37m# Reshape X to 4-D tensor: [batch_size, width, height, channels][39;49;00m
[37m# MNIST images are 28x28 pixels, and have one color channel[39;49;00m
input_layer = tf.reshape(features[[33m"[39;49;00m[33mx[39;49;00m[33m"[39;49;00m], [-[34m1[39;49;00m, [34m28[39;49;00m, [34m28[39;49;00m, [34m1[39;49;00m])
[37m# Convolutional Layer #1[39;49;00m
[37m# Computes 32 features using a 5x5 filter with ReLU activation.[39;49;00m
[37m# Padding is added to preserve width and height.[39;49;00m
[37m# Input Tensor Shape: [batch_size, 28, 28, 1][39;49;00m
[37m# Output Tensor Shape: [batch_size, 28, 28, 32][39;49;00m
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=[34m32[39;49;00m,
kernel_size=[[34m5[39;49;00m, [34m5[39;49;00m],
padding=[33m"[39;49;00m[33msame[39;49;00m[33m"[39;49;00m,
activation=tf.nn.relu)
[37m# Pooling Layer #1[39;49;00m
[37m# First max pooling layer with a 2x2 filter and stride of 2[39;49;00m
[37m# Input Tensor Shape: [batch_size, 28, 28, 32][39;49;00m
[37m# Output Tensor Shape: [batch_size, 14, 14, 32][39;49;00m
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[[34m2[39;49;00m, [34m2[39;49;00m], strides=[34m2[39;49;00m)
[37m# Convolutional Layer #2[39;49;00m
[37m# Computes 64 features using a 5x5 filter.[39;49;00m
[37m# Padding is added to preserve width and height.[39;49;00m
[37m# Input Tensor Shape: [batch_size, 14, 14, 32][39;49;00m
[37m# Output Tensor Shape: [batch_size, 14, 14, 64][39;49;00m
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=[34m64[39;49;00m,
kernel_size=[[34m5[39;49;00m, [34m5[39;49;00m],
padding=[33m"[39;49;00m[33msame[39;49;00m[33m"[39;49;00m,
activation=tf.nn.relu)
[37m# Pooling Layer #2[39;49;00m
[37m# Second max pooling layer with a 2x2 filter and stride of 2[39;49;00m
[37m# Input Tensor Shape: [batch_size, 14, 14, 64][39;49;00m
[37m# Output Tensor Shape: [batch_size, 7, 7, 64][39;49;00m
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[[34m2[39;49;00m, [34m2[39;49;00m], strides=[34m2[39;49;00m)
[37m# Flatten tensor into a batch of vectors[39;49;00m
[37m# Input Tensor Shape: [batch_size, 7, 7, 64][39;49;00m
[37m# Output Tensor Shape: [batch_size, 7 * 7 * 64][39;49;00m
pool2_flat = tf.reshape(pool2, [-[34m1[39;49;00m, [34m7[39;49;00m * [34m7[39;49;00m * [34m64[39;49;00m])
[37m# Dense Layer[39;49;00m
[37m# Densely connected layer with 1024 neurons[39;49;00m
[37m# Input Tensor Shape: [batch_size, 7 * 7 * 64][39;49;00m
[37m# Output Tensor Shape: [batch_size, 1024][39;49;00m
dense = tf.layers.dense(inputs=pool2_flat, units=[34m1024[39;49;00m, activation=tf.nn.relu)
[37m# Add dropout operation; 0.6 probability that element will be kept[39;49;00m
dropout = tf.layers.dropout(
inputs=dense, rate=[34m0.4[39;49;00m, training=mode == tf.estimator.ModeKeys.TRAIN)
[37m# Logits layer[39;49;00m
[37m# Input Tensor Shape: [batch_size, 1024][39;49;00m
[37m# Output Tensor Shape: [batch_size, 10][39;49;00m
logits = tf.layers.dense(inputs=dropout, units=[34m10[39;49;00m)
predictions = {
[37m# Generate predictions (for PREDICT and EVAL mode)[39;49;00m
[33m"[39;49;00m[33mclasses[39;49;00m[33m"[39;49;00m: tf.argmax([36minput[39;49;00m=logits, axis=[34m1[39;49;00m),
[37m# Add `softmax_tensor` to the graph. It is used for PREDICT and by the[39;49;00m
[37m# `logging_hook`.[39;49;00m
[33m"[39;49;00m[33mprobabilities[39;49;00m[33m"[39;49;00m: tf.nn.softmax(logits, name=[33m"[39;49;00m[33msoftmax_tensor[39;49;00m[33m"[39;49;00m)
}
[34mif[39;49;00m mode == tf.estimator.ModeKeys.PREDICT:
[34mreturn[39;49;00m tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
[37m# Calculate Loss (for both TRAIN and EVAL modes)[39;49;00m
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
[37m# Configure the Training Op (for TRAIN mode)[39;49;00m
[34mif[39;49;00m mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=[34m0.001[39;49;00m)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
[34mreturn[39;49;00m tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
[37m# Add evaluation metrics (for EVAL mode)[39;49;00m
eval_metric_ops = {
[33m"[39;49;00m[33maccuracy[39;49;00m[33m"[39;49;00m: tf.metrics.accuracy(
labels=labels, predictions=predictions[[33m"[39;49;00m[33mclasses[39;49;00m[33m"[39;49;00m])}
[34mreturn[39;49;00m tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
[34mdef[39;49;00m [32m_load_training_data[39;49;00m(base_dir):
x_train = np.load(os.path.join(base_dir, [33m'[39;49;00m[33mtrain_data.npy[39;49;00m[33m'[39;49;00m))
y_train = np.load(os.path.join(base_dir, [33m'[39;49;00m[33mtrain_labels.npy[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m x_train, y_train
[34mdef[39;49;00m [32m_load_testing_data[39;49;00m(base_dir):
x_test = np.load(os.path.join(base_dir, [33m'[39;49;00m[33meval_data.npy[39;49;00m[33m'[39;49;00m))
y_test = np.load(os.path.join(base_dir, [33m'[39;49;00m[33meval_labels.npy[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m x_test, y_test
[34mdef[39;49;00m [32m_parse_args[39;49;00m():
parser = argparse.ArgumentParser()
[37m# Data, model, and output directories[39;49;00m
[37m# model_dir is always passed in from SageMaker. By default this is a S3 path under the default bucket.[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--model_dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--sm-model-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_MODEL_DIR[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--train[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_CHANNEL_TRAINING[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--hosts[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mlist[39;49;00m, default=json.loads(os.environ.get([33m'[39;49;00m[33mSM_HOSTS[39;49;00m[33m'[39;49;00m)))
parser.add_argument([33m'[39;49;00m[33m--current-host[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_CURRENT_HOST[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m parser.parse_known_args()
[34mdef[39;49;00m [32mserving_input_fn[39;49;00m():
inputs = {[33m'[39;49;00m[33mx[39;49;00m[33m'[39;49;00m: tf.placeholder(tf.float32, [[34mNone[39;49;00m, [34m784[39;49;00m])}
[34mreturn[39;49;00m tf.estimator.export.ServingInputReceiver(inputs, inputs)
[34mif[39;49;00m [31m__name__[39;49;00m == [33m"[39;49;00m[33m__main__[39;49;00m[33m"[39;49;00m:
args, unknown = _parse_args()
train_data, train_labels = _load_training_data(args.train)
eval_data, eval_labels = _load_testing_data(args.train)
[37m# Create the Estimator[39;49;00m
mnist_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn, model_dir=args.model_dir)
[37m# Set up logging for predictions[39;49;00m
[37m# Log the values in the "Softmax" tensor with label "probabilities"[39;49;00m
tensors_to_log = {[33m"[39;49;00m[33mprobabilities[39;49;00m[33m"[39;49;00m: [33m"[39;49;00m[33msoftmax_tensor[39;49;00m[33m"[39;49;00m}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=[34m50[39;49;00m)
[37m# Train the model[39;49;00m
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={[33m"[39;49;00m[33mx[39;49;00m[33m"[39;49;00m: train_data},
y=train_labels,
batch_size=[34m100[39;49;00m,
num_epochs=[34mNone[39;49;00m,
shuffle=[34mTrue[39;49;00m)
[37m# Evaluate the model and print results[39;49;00m
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={[33m"[39;49;00m[33mx[39;49;00m[33m"[39;49;00m: eval_data},
y=eval_labels,
num_epochs=[34m1[39;49;00m,
shuffle=[34mFalse[39;49;00m)
train_spec = tf.estimator.TrainSpec(train_input_fn, max_steps=[34m20000[39;49;00m)
eval_spec = tf.estimator.EvalSpec(eval_input_fn)
tf.estimator.train_and_evaluate(mnist_classifier, train_spec, eval_spec)
[34mif[39;49;00m args.current_host == args.hosts[[34m0[39;49;00m]:
mnist_classifier.export_savedmodel(args.sm_model_dir, serving_input_fn)
[37m# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.[39;49;00m
[37m#[39;49;00m
[37m# Licensed under the Apache License, Version 2.0 (the "License"). You[39;49;00m
[37m# may not use this file except in compliance with the License. A copy of[39;49;00m
[37m# the License is located at[39;49;00m
[37m#[39;49;00m
[37m# http://aws.amazon.com/apache2.0/[39;49;00m
[37m#[39;49;00m
[37m# or in the "license" file accompanying this file. This file is[39;49;00m
[37m# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF[39;49;00m
[37m# ANY KIND, either express or implied. See the License for the specific[39;49;00m
[37m# language governing permissions and limitations under the License.import tensorflow as tf[39;49;00m
[34mimport[39;49;00m [04m[36mtensorflow[39;49;00m [34mas[39;49;00m [04m[36mtf[39;49;00m
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mos[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mjson[39;49;00m
[34mdef[39;49;00m [32mmodel[39;49;00m(x_train, y_train, x_test, y_test):
[33m"""Generate a simple model"""[39;49;00m
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense([34m1024[39;49;00m, activation=tf.nn.relu),
tf.keras.layers.Dropout([34m0.4[39;49;00m),
tf.keras.layers.Dense([34m10[39;49;00m, activation=tf.nn.softmax)
])
model.compile(optimizer=[33m'[39;49;00m[33madam[39;49;00m[33m'[39;49;00m,
loss=[33m'[39;49;00m[33msparse_categorical_crossentropy[39;49;00m[33m'[39;49;00m,
metrics=[[33m'[39;49;00m[33maccuracy[39;49;00m[33m'[39;49;00m])
model.fit(x_train, y_train)
model.evaluate(x_test, y_test)
[34mreturn[39;49;00m model
[34mdef[39;49;00m [32m_load_training_data[39;49;00m(base_dir):
[33m"""Load MNIST training data"""[39;49;00m
x_train = np.load(os.path.join(base_dir, [33m'[39;49;00m[33mtrain_data.npy[39;49;00m[33m'[39;49;00m))
y_train = np.load(os.path.join(base_dir, [33m'[39;49;00m[33mtrain_labels.npy[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m x_train, y_train
[34mdef[39;49;00m [32m_load_testing_data[39;49;00m(base_dir):
[33m"""Load MNIST testing data"""[39;49;00m
x_test = np.load(os.path.join(base_dir, [33m'[39;49;00m[33meval_data.npy[39;49;00m[33m'[39;49;00m))
y_test = np.load(os.path.join(base_dir, [33m'[39;49;00m[33meval_labels.npy[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m x_test, y_test
[34mdef[39;49;00m [32m_parse_args[39;49;00m():
parser = argparse.ArgumentParser()
[37m# Data, model, and output directories[39;49;00m
[37m# model_dir is always passed in from SageMaker. By default this is a S3 path under the default bucket.[39;49;00m
parser.add_argument([33m'[39;49;00m[33m--model_dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m)
parser.add_argument([33m'[39;49;00m[33m--sm-model-dir[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_MODEL_DIR[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--train[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_CHANNEL_TRAINING[39;49;00m[33m'[39;49;00m))
parser.add_argument([33m'[39;49;00m[33m--hosts[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mlist[39;49;00m, default=json.loads(os.environ.get([33m'[39;49;00m[33mSM_HOSTS[39;49;00m[33m'[39;49;00m)))
parser.add_argument([33m'[39;49;00m[33m--current-host[39;49;00m[33m'[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=os.environ.get([33m'[39;49;00m[33mSM_CURRENT_HOST[39;49;00m[33m'[39;49;00m))
[34mreturn[39;49;00m parser.parse_known_args()
[34mif[39;49;00m [31m__name__[39;49;00m == [33m"[39;49;00m[33m__main__[39;49;00m[33m"[39;49;00m:
args, unknown = _parse_args()
train_data, train_labels = _load_training_data(args.train)
eval_data, eval_labels = _load_testing_data(args.train)
mnist_classifier = model(train_data, train_labels, eval_data, eval_labels)
[34mif[39;49;00m args.current_host == args.hosts[[34m0[39;49;00m]:
[37m# save model to an S3 directory with version number '00000001' in Tensorflow SavedModel Format[39;49;00m
[37m# To export the model as h5 format use model.save('my_model.h5')[39;49;00m
mnist_classifier.save(os.path.join(args.sm_model_dir, [33m'[39;49;00m[33m000000001[39;49;00m[33m'[39;49;00m))
###Markdown
Create a training job using the TensorFlow estimator. The `sagemaker.tensorflow.TensorFlow` estimator handles locating the script mode container, uploading your script to an S3 location, and creating a SageMaker training job. Let's call out a couple of important parameters here: `py_version` is set to `'py3'` to indicate that we are using script mode, since legacy mode supports only Python 2. Though Python 2 will be deprecated soon, you can use script mode with Python 2 by setting `py_version` to `'py2'` and `script_mode` to `True`. `distribution` is used to configure the distributed training setup. It's required only if you are doing distributed training either across a cluster of instances or across multiple GPUs. Here we are using parameter servers as the distributed training schema. SageMaker training jobs run on homogeneous clusters. To make parameter server more performant in the SageMaker setup, we run a parameter server on every instance in the cluster, so there is no need to specify the number of parameter servers to launch. Script mode also supports distributed training with [Horovod](https://github.com/horovod/horovod); a sketch of that configuration follows the cell below. You can find the full documentation on how to configure distributions [here](https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow#distributed-training).
###Code
# cell 04
from sagemaker.tensorflow import TensorFlow
mnist_estimator = TensorFlow(entry_point='mnist.py',
role=role,
instance_count=2,
instance_type='ml.p3.2xlarge',
framework_version='1.15.2',
py_version='py3',
distribution={'parameter_server': {'enabled': True}})
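# As noted in the markdown above, script mode also supports Horovod-based distributed training. A
# hedged sketch of what that configuration could look like (not run here; the process count is an
# assumption, and the training script itself would need to be Horovod-enabled):
# hvd_estimator = TensorFlow(entry_point='mnist.py',
#                            role=role,
#                            instance_count=2,
#                            instance_type='ml.p3.2xlarge',
#                            framework_version='1.15.2',
#                            py_version='py3',
#                            distribution={'mpi': {'enabled': True, 'processes_per_host': 1}})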
###Output
_____no_output_____
###Markdown
You can also create an estimator to train with a TensorFlow 2.1 script. The only things you will need to change are the script name and `framework_version`.
###Code
# cell 05
mnist_estimator2 = TensorFlow(entry_point='mnist-2.py',
role=role,
instance_count=2,
instance_type='ml.p3.2xlarge',
framework_version='2.1.0',
py_version='py3',
distribution={'parameter_server': {'enabled': True}})
###Output
_____no_output_____
###Markdown
Calling `fit`. To start a training job, we call `estimator.fit(training_data_uri)`. An S3 location is used here as the input. `fit` creates a default channel named 'training', which points to this S3 location. In the training script we can then access the training data from the location stored in `SM_CHANNEL_TRAINING`. `fit` accepts a couple of other types of input as well; see the API doc [here](https://sagemaker.readthedocs.io/en/stable/estimators.html#sagemaker.estimator.EstimatorBase.fit) for details. When training starts, the TensorFlow container executes mnist.py, passing hyperparameters and `model_dir` from the estimator as script arguments. Because we didn't define either in this example, no hyperparameters are passed, and `model_dir` defaults to an S3 path under the session's default bucket, so the script execution is as follows: `python mnist.py --model_dir s3://<default_bucket>/<training_job_name>/model`. When training is complete, the training job will upload the saved model for TensorFlow serving.
###Code
# cell 06
mnist_estimator.fit(training_data_uri)
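# fit() also accepts a dictionary mapping channel names to S3 locations; each channel shows up in the
# container as an SM_CHANNEL_<NAME> environment variable. A sketch with a hypothetical extra channel:
# mnist_estimator.fit({'training': training_data_uri, 'eval': training_data_uri})
#
# Once training finishes, the saved model can be served with the TensorFlow Serving container via
# deploy() (a sketch, not run here; the instance type below is an assumption):
# predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
# predictions = predictor.predict(some_numpy_batch) # e.g. a few rows loaded from train_data.npy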
###Output
2021-09-16 20:06:48 Starting - Starting the training job...
2021-09-16 20:06:57 Starting - Launching requested ML instancesProfilerReport-1631822807: InProgress
.........
2021-09-16 20:08:39 Starting - Preparing the instances for training.........
2021-09-16 20:10:19 Downloading - Downloading input data...
2021-09-16 20:10:40 Training - Downloading the training image..[35mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
[0m
[35m2021-09-16 20:10:58,409 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training[0m
[35m2021-09-16 20:10:58,698 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[35m2021-09-16 20:10:58,698 sagemaker_tensorflow_container.training INFO Launching parameter server process[0m
[35m2021-09-16 20:10:58,699 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[35mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:99: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
[0m
[35m2021-09-16 20:10:58,699 tensorflow WARNING From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:99: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
[0m
[35mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:101: The name tf.train.Server is deprecated. Please use tf.distribute.Server instead.
[0m
[35m2021-09-16 20:10:58,699 tensorflow WARNING From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:101: The name tf.train.Server is deprecated. Please use tf.distribute.Server instead.
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
[0m
[34m2021-09-16 20:10:57,169 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training[0m
[34m2021-09-16 20:10:57,504 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[34m2021-09-16 20:10:57,504 sagemaker_tensorflow_container.training INFO Launching parameter server process[0m
[34m2021-09-16 20:10:57,504 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:99: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
[0m
[34m2021-09-16 20:10:57,505 tensorflow WARNING From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:99: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:101: The name tf.train.Server is deprecated. Please use tf.distribute.Server instead.
[0m
[34m2021-09-16 20:10:57,505 tensorflow WARNING From /usr/local/lib/python3.6/dist-packages/sagemaker_tensorflow_container/training.py:101: The name tf.train.Server is deprecated. Please use tf.distribute.Server instead.
[0m
[34m2021-09-16 20:10:58,365 sagemaker_tensorflow_container.training INFO Launching worker process[0m
[34m2021-09-16 20:10:58,598 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {
"sagemaker_parameter_server_enabled": true
},
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-1",
"framework_module": "sagemaker_tensorflow_container.training:main",
"hosts": [
"algo-1",
"algo-2"
],
"hyperparameters": {
"model_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "tensorflow-training-2021-09-16-20-06-47-336",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/source/sourcedir.tar.gz",
"module_name": "mnist",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1",
"algo-2"
],
"network_interface_name": "eth0"
},
"user_entry_point": "mnist.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1","algo-2"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"}[0m
[34mSM_USER_ENTRY_POINT=mnist.py[0m
[34mSM_FRAMEWORK_PARAMS={"sagemaker_parameter_server_enabled":true}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=["training"][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=mnist[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=8[0m
[34mSM_NUM_GPUS=1[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_parameter_server_enabled":true},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1","algo-2"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"tensorflow-training-2021-09-16-20-06-47-336","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/source/sourcedir.tar.gz","module_name":"mnist","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"},"user_entry_point":"mnist.py"}[0m
[34mSM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[34mSM_HP_MODEL_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model[0m
[34mTF_CONFIG={"cluster": {"master": ["algo-1:2222"], "ps": ["algo-1:2223", "algo-2:2223"], "worker": ["algo-2:2222"]}, "environment": "cloud", "task": {"index": 0, "type": "master"}}[0m
[34mPYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python3 mnist.py --model_dir s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model
[0m
[35m2021-09-16 20:10:59,565 sagemaker_tensorflow_container.training INFO Launching worker process[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
[0m
[34mWARNING:tensorflow:From mnist.py:161: The name tf.train.LoggingTensorHook is deprecated. Please use tf.estimator.LoggingTensorHook instead.
[0m
[34mWARNING:tensorflow:From mnist.py:165: The name tf.estimator.inputs.numpy_input_fn is deprecated. Please use tf.compat.v1.estimator.inputs.numpy_input_fn instead.
[0m
[35m2021-09-16 20:11:00,246 sagemaker-containers INFO Invoking user script
[0m
[35mTraining Env:
[0m
[35m{
"additional_framework_parameters": {
"sagemaker_parameter_server_enabled": true
},
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-2",
"framework_module": "sagemaker_tensorflow_container.training:main",
"hosts": [
"algo-1",
"algo-2"
],
"hyperparameters": {
"model_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": false,
"job_name": "tensorflow-training-2021-09-16-20-06-47-336",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/source/sourcedir.tar.gz",
"module_name": "mnist",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-2",
"hosts": [
"algo-1",
"algo-2"
],
"network_interface_name": "eth0"
},
"user_entry_point": "mnist.py"[0m
[35m}
[0m
[35mEnvironment variables:
[0m
[35mSM_HOSTS=["algo-1","algo-2"][0m
[35mSM_NETWORK_INTERFACE_NAME=eth0[0m
[35mSM_HPS={"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"}[0m
[35mSM_USER_ENTRY_POINT=mnist.py[0m
[35mSM_FRAMEWORK_PARAMS={"sagemaker_parameter_server_enabled":true}[0m
[35mSM_RESOURCE_CONFIG={"current_host":"algo-2","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"}[0m
[35mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[35mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[35mSM_CHANNELS=["training"][0m
[35mSM_CURRENT_HOST=algo-2[0m
[35mSM_MODULE_NAME=mnist[0m
[35mSM_LOG_LEVEL=20[0m
[35mSM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main[0m
[35mSM_INPUT_DIR=/opt/ml/input[0m
[35mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[35mSM_OUTPUT_DIR=/opt/ml/output[0m
[35mSM_NUM_CPUS=8[0m
[35mSM_NUM_GPUS=1[0m
[35mSM_MODEL_DIR=/opt/ml/model[0m
[35mSM_MODULE_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/source/sourcedir.tar.gz[0m
[35mSM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_parameter_server_enabled":true},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-2","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1","algo-2"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":false,"job_name":"tensorflow-training-2021-09-16-20-06-47-336","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/source/sourcedir.tar.gz","module_name":"mnist","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-2","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"},"user_entry_point":"mnist.py"}[0m
[35mSM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model"][0m
[35mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[35mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[35mSM_HP_MODEL_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model[0m
[35mTF_CONFIG={"cluster": {"master": ["algo-1:2222"], "ps": ["algo-1:2223", "algo-2:2223"], "worker": ["algo-2:2222"]}, "environment": "cloud", "task": {"index": 0, "type": "worker"}}[0m
[35mPYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages
[0m
[35mInvoking script with the following command:
[0m
[35m/usr/bin/python3 mnist.py --model_dir s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_functions.py:500: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From mnist.py:46: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv2D` instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `layer.__call__` method instead.
WARNING:tensorflow:From mnist.py:52: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.MaxPooling2D instead.
WARNING:tensorflow:From mnist.py:81: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.Dense instead.
WARNING:tensorflow:From mnist.py:85: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead.
WARNING:tensorflow:From mnist.py:103: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/losses/losses_impl.py:121: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From mnist.py:107: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.
WARNING:tensorflow:From mnist.py:110: The name tf.train.get_global_step is deprecated. Please use tf.compat.v1.train.get_global_step instead.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py:888: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
WARNING:tensorflow:From mnist.py:161: The name tf.train.LoggingTensorHook is deprecated. Please use tf.estimator.LoggingTensorHook instead.
WARNING:tensorflow:From mnist.py:165: The name tf.estimator.inputs.numpy_input_fn is deprecated. Please use tf.compat.v1.estimator.inputs.numpy_input_fn instead.
INFO:tensorflow:Saving checkpoints for 0 into s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model/model.ckpt.
2021-09-16 20:11:20 Training - Training image download completed. Training in progress.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.
Instructions for updating:
To construct input pipelines, use the `tf.data` module.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:loss = 2.2954834, step = 0
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:loss = 2.2745461, step = 100 (5.838 sec)
INFO:tensorflow:loss = 1.9200367, step = 972 (2.482 sec)
INFO:tensorflow:loss = 0.7522633, step = 1971 (2.452 sec)
INFO:tensorflow:loss = 0.6014549, step = 2972 (2.586 sec)
INFO:tensorflow:loss = 0.300087, step = 3970 (2.511 sec)
INFO:tensorflow:loss = 0.28367224, step = 4981 (3.728 sec)
INFO:tensorflow:loss = 0.21680266, step = 5988 (2.508 sec)
INFO:tensorflow:loss = 0.2506567, step = 6995 (2.478 sec)
INFO:tensorflow:loss = 0.25512514, step = 7996 (2.520 sec)
INFO:tensorflow:loss = 0.2844035, step = 8993 (2.486 sec)
INFO:tensorflow:loss = 0.24437013, step = 9998 (2.481 sec)
INFO:tensorflow:loss = 0.12128683, step = 10997 (2.499 sec)
INFO:tensorflow:loss = 0.12292495, step = 11996 (2.510 sec)
INFO:tensorflow:loss = 0.24909233, step = 12996 (2.544 sec)
INFO:tensorflow:loss = 0.24801956, step = 13998 (2.447 sec)
INFO:tensorflow:loss = 0.16157451, step = 14995 (2.467 sec)
INFO:tensorflow:loss = 0.15792176, step = 15995 (2.534 sec)
INFO:tensorflow:loss = 0.21366791, step = 16996 (2.528 sec)
INFO:tensorflow:loss = 0.21900745, step = 17997 (2.580 sec)
INFO:tensorflow:loss = 0.093736105, step = 19008 (2.471 sec)
INFO:tensorflow:loss = 0.12659298, step = 19961 (5.925 sec)
INFO:tensorflow:Saving checkpoints for 20002 into s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model/model.ckpt.
INFO:tensorflow:Loss for final step: 0.10822134.
2021-09-16 20:17:08,900 sagemaker_tensorflow_container.training INFO master algo-1 is still up, waiting for it to exit
[34mINFO:tensorflow:Calling model_fn.[0m
[34mWARNING:tensorflow:From mnist.py:115: The name tf.metrics.accuracy is deprecated. Please use tf.compat.v1.metrics.accuracy instead.
[0m
[34mINFO:tensorflow:Done calling model_fn.[0m
[34mINFO:tensorflow:Starting evaluation at 2021-09-16T20:17:09Z[0m
[34mINFO:tensorflow:Graph was finalized.[0m
[34mINFO:tensorflow:Restoring parameters from s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model/model.ckpt-20002[0m
[34mINFO:tensorflow:Running local_init_op.[0m
[34mINFO:tensorflow:Done running local_init_op.[0m
[34mINFO:tensorflow:Evaluation [10/100][0m
[34mINFO:tensorflow:Evaluation [20/100][0m
[34mINFO:tensorflow:Evaluation [30/100][0m
[34mINFO:tensorflow:Evaluation [40/100][0m
[34mINFO:tensorflow:Evaluation [50/100][0m
[34mINFO:tensorflow:Evaluation [60/100][0m
[34mINFO:tensorflow:Evaluation [70/100][0m
[34mINFO:tensorflow:Finished evaluation at 2021-09-16-20:17:10[0m
[34mINFO:tensorflow:Saving dict for global step 20002: accuracy = 0.9696, global_step = 20002, loss = 0.1032469[0m
[34mINFO:tensorflow:Saving 'checkpoint_path' summary for global step 20002: s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model/model.ckpt-20002[0m
[34mINFO:tensorflow:Loss for final step: 0.07562591.[0m
[34mWARNING:tensorflow:From mnist.py:184: Estimator.export_savedmodel (from tensorflow_estimator.python.estimator.estimator) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mThis function has been renamed, use `export_saved_model` instead.[0m
[34mWARNING:tensorflow:From mnist.py:145: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
[0m
[34mINFO:tensorflow:Calling model_fn.[0m
[34mINFO:tensorflow:Done calling model_fn.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mThis function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.[0m
[34mINFO:tensorflow:Signatures INCLUDED in export for Classify: None[0m
[34mINFO:tensorflow:Signatures INCLUDED in export for Regress: None[0m
[34mINFO:tensorflow:Signatures INCLUDED in export for Predict: ['serving_default'][0m
[34mINFO:tensorflow:Signatures INCLUDED in export for Train: None[0m
[34mINFO:tensorflow:Signatures INCLUDED in export for Eval: None[0m
[34mINFO:tensorflow:Restoring parameters from s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-06-47-336/model/model.ckpt-20002[0m
[34mINFO:tensorflow:Assets added to graph.[0m
[34mINFO:tensorflow:No assets to write.[0m
[34mINFO:tensorflow:SavedModel written to: /opt/ml/model/temp-1631823432/saved_model.pb[0m
[34m2021-09-16 20:17:13,584 sagemaker-containers INFO Reporting training SUCCESS[0m
2021-09-16 20:19:29 Uploading - Uploading generated training model
[35m2021-09-16 20:19:28,641 sagemaker_tensorflow_container.training INFO master algo-1 is down, stopping parameter server[0m
[35m2021-09-16 20:19:28,642 sagemaker_tensorflow_container.training WARNING No model artifact is saved under path /opt/ml/model. Your training job will not save any model files to S3.[0m
[35mFor details of how to construct your training script see:[0m
[35mhttps://sagemaker.readthedocs.io/en/stable/using_tf.html#adapting-your-local-tensorflow-script[0m
[35m2021-09-16 20:19:28,642 sagemaker-containers INFO Reporting training SUCCESS[0m
2021-09-16 20:19:42 Completed - Training job completed
ProfilerReport-1631822807: IssuesFound
Training seconds: 1118
Billable seconds: 1118
###Markdown
Calling fit to train a model with the TensorFlow 2.1 script (mnist-2.py). This launches a second training job, again distributed across two instances with parameter servers.
###Code
# cell 07
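# For context, a minimal sketch (an assumption, not the notebook's earlier cell) of how an
# estimator like mnist_estimator2 is typically configured with the SageMaker Python SDK,
# mirroring what the log below reports: entry point mnist-2.py, TensorFlow 2.1, two
# instances (algo-1/algo-2), and parameter-server distribution. Exact argument names depend
# on the SDK version (e.g. instance_count/distribution in v2 vs. train_instance_count/
# distributions in v1), and the instance type shown here is hypothetical.
#
# from sagemaker.tensorflow import TensorFlow
# mnist_estimator2 = TensorFlow(
#     entry_point="mnist-2.py",
#     role=sagemaker.get_execution_role(),
#     instance_count=2,
#     instance_type="ml.p3.2xlarge",   # hypothetical; log reports 8 vCPUs and 1 GPU per host
#     framework_version="2.1.0",
#     py_version="py3",
#     distribution={"parameter_server": {"enabled": True}},
# )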
mnist_estimator2.fit(training_data_uri)
###Output
2021-09-16 20:20:22 Starting - Starting the training job...
2021-09-16 20:20:46 Starting - Launching requested ML instances
ProfilerReport-1631823622: InProgress
.........
2021-09-16 20:22:06 Starting - Preparing the instances for training.........
2021-09-16 20:23:49 Downloading - Downloading input data
2021-09-16 20:23:49 Training - Downloading the training image........
[34m2021-09-16 20:25:05,511 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training[0m
[34m2021-09-16 20:25:05,862 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[34m2021-09-16 20:25:05,863 sagemaker_tensorflow_container.training INFO Launching parameter server process[0m
[34m2021-09-16 20:25:05,863 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[34m2021-09-16 20:25:06,767 sagemaker_tensorflow_container.training INFO Launching worker process[0m
[34m2021-09-16 20:25:07,227 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {
"sagemaker_parameter_server_enabled": true
},
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-1",
"framework_module": "sagemaker_tensorflow_container.training:main",
"hosts": [
"algo-1",
"algo-2"
],
"hyperparameters": {
"model_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "tensorflow-training-2021-09-16-20-20-22-213",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/source/sourcedir.tar.gz",
"module_name": "mnist-2",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1",
"algo-2"
],
"network_interface_name": "eth0"
},
"user_entry_point": "mnist-2.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1","algo-2"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"}[0m
[34mSM_USER_ENTRY_POINT=mnist-2.py[0m
[34mSM_FRAMEWORK_PARAMS={"sagemaker_parameter_server_enabled":true}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=["training"][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=mnist-2[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=8[0m
[34mSM_NUM_GPUS=1[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_parameter_server_enabled":true},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1","algo-2"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":true,"job_name":"tensorflow-training-2021-09-16-20-20-22-213","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/source/sourcedir.tar.gz","module_name":"mnist-2","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"},"user_entry_point":"mnist-2.py"}[0m
[34mSM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[34mSM_HP_MODEL_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model[0m
[34mTF_CONFIG={"cluster": {"master": ["algo-1:2222"], "ps": ["algo-1:2223", "algo-2:2223"], "worker": ["algo-2:2222"]}, "environment": "cloud", "task": {"index": 0, "type": "master"}}[0m
[34mPYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python3 mnist-2.py --model_dir s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model
[0m
[35m2021-09-16 20:25:04,650 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training[0m
[35m2021-09-16 20:25:04,915 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[35m2021-09-16 20:25:04,916 sagemaker_tensorflow_container.training INFO Launching parameter server process[0m
[35m2021-09-16 20:25:04,916 sagemaker_tensorflow_container.training INFO Running distributed training job with parameter servers[0m
[35m2021-09-16 20:25:05,821 sagemaker_tensorflow_container.training INFO Launching worker process[0m
[35m2021-09-16 20:25:06,084 sagemaker-containers INFO Invoking user script
[0m
[35mTraining Env:
[0m
[35m{
"additional_framework_parameters": {
"sagemaker_parameter_server_enabled": true
},
"channel_input_dirs": {
"training": "/opt/ml/input/data/training"
},
"current_host": "algo-2",
"framework_module": "sagemaker_tensorflow_container.training:main",
"hosts": [
"algo-1",
"algo-2"
],
"hyperparameters": {
"model_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {
"training": {
"TrainingInputMode": "File",
"S3DistributionType": "FullyReplicated",
"RecordWrapperType": "None"
}
},
"input_dir": "/opt/ml/input",
"is_master": false,
"job_name": "tensorflow-training-2021-09-16-20-20-22-213",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/source/sourcedir.tar.gz",
"module_name": "mnist-2",
"network_interface_name": "eth0",
"num_cpus": 8,
"num_gpus": 1,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-2",
"hosts": [
"algo-1",
"algo-2"
],
"network_interface_name": "eth0"
},
"user_entry_point": "mnist-2.py"[0m
[35m}
[0m
[35mEnvironment variables:
[0m
[35mSM_HOSTS=["algo-1","algo-2"][0m
[35mSM_NETWORK_INTERFACE_NAME=eth0[0m
[35mSM_HPS={"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"}[0m
[35mSM_USER_ENTRY_POINT=mnist-2.py[0m
[35mSM_FRAMEWORK_PARAMS={"sagemaker_parameter_server_enabled":true}[0m
[35mSM_RESOURCE_CONFIG={"current_host":"algo-2","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"}[0m
[35mSM_INPUT_DATA_CONFIG={"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}}[0m
[35mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[35mSM_CHANNELS=["training"][0m
[35mSM_CURRENT_HOST=algo-2[0m
[35mSM_MODULE_NAME=mnist-2[0m
[35mSM_LOG_LEVEL=20[0m
[35mSM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main[0m
[35mSM_INPUT_DIR=/opt/ml/input[0m
[35mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[35mSM_OUTPUT_DIR=/opt/ml/output[0m
[35mSM_NUM_CPUS=8[0m
[35mSM_NUM_GPUS=1[0m
[35mSM_MODEL_DIR=/opt/ml/model[0m
[35mSM_MODULE_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/source/sourcedir.tar.gz[0m
[35mSM_TRAINING_ENV={"additional_framework_parameters":{"sagemaker_parameter_server_enabled":true},"channel_input_dirs":{"training":"/opt/ml/input/data/training"},"current_host":"algo-2","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1","algo-2"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"},"input_config_dir":"/opt/ml/input/config","input_data_config":{"training":{"RecordWrapperType":"None","S3DistributionType":"FullyReplicated","TrainingInputMode":"File"}},"input_dir":"/opt/ml/input","is_master":false,"job_name":"tensorflow-training-2021-09-16-20-20-22-213","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/source/sourcedir.tar.gz","module_name":"mnist-2","network_interface_name":"eth0","num_cpus":8,"num_gpus":1,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-2","hosts":["algo-1","algo-2"],"network_interface_name":"eth0"},"user_entry_point":"mnist-2.py"}[0m
[35mSM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model"][0m
[35mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[35mSM_CHANNEL_TRAINING=/opt/ml/input/data/training[0m
[35mSM_HP_MODEL_DIR=s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model[0m
[35mTF_CONFIG={"cluster": {"master": ["algo-1:2222"], "ps": ["algo-1:2223", "algo-2:2223"], "worker": ["algo-2:2222"]}, "environment": "cloud", "task": {"index": 0, "type": "worker"}}[0m
[35mPYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages
[0m
[35mInvoking script with the following command:
[0m
[35m/usr/bin/python3 mnist-2.py --model_dir s3://sagemaker-us-east-1-051018513262/tensorflow-training-2021-09-16-20-20-22-213/model
[0m
[35mTrain on 55000 samples[0m
[34mTrain on 55000 samples[0m
2021-09-16 20:25:27 Uploading - Uploading generated training model
[35m55000/55000 [==============================] - 5s 89us/sample - loss: 0.2345 - accuracy: 0.9293[0m
55000/55000 [==============================] - 5s 91us/sample - loss: 0.2343 - accuracy: 0.9297
10000/10000 [==============================] - 1s 61us/sample - loss: 0.1111 - accuracy: 0.9661
2021-09-16 20:25:17,107 sagemaker_tensorflow_container.training INFO     master algo-1 is down, stopping parameter server
2021-09-16 20:25:17,108 sagemaker_tensorflow_container.training WARNING  No model artifact is saved under path /opt/ml/model. Your training job will not save any model files to S3.
For details of how to construct your training script see:
https://sagemaker.readthedocs.io/en/stable/using_tf.html#adapting-your-local-tensorflow-script
2021-09-16 20:25:17,108 sagemaker-containers INFO     Reporting training SUCCESS
10000/10000 [==============================] - 1s 61us/sample - loss: 0.1030 - accuracy: 0.9701
2021-09-16 20:25:18.012594: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
INFO:tensorflow:Assets written to: /opt/ml/model/000000001/assets
2021-09-16 20:25:18,986 sagemaker-containers INFO     Reporting training SUCCESS
2021-09-16 20:25:47 Completed - Training job completed
ProfilerReport-1631823622: NoIssuesFound
Training seconds: 224
Billable seconds: 224
###Markdown
Deploy the trained model to an endpointThe `deploy()` method creates a SageMaker model, which is then deployed to an endpoint to serve prediction requests in real time. We will use the TensorFlow Serving container for the endpoint, because we trained with script mode. This serving container runs an implementation of a web server that is compatible with SageMaker hosting protocol. The [Using your own inference code](https://render.githubusercontent.com/view/ipynb?color_mode=auto&commit=a5c9a21e6ed70fd51ab5178f3a35461473f7b379&enc_url=68747470733a2f2f7261772e67697468756275736572636f6e74656e742e636f6d2f6177732f616d617a6f6e2d736167656d616b65722d6578616d706c65732f613563396132316536656437306664353161623531373866336133353436313437336637623337392f736167656d616b65722d707974686f6e2d73646b2f74656e736f72666c6f775f7363726970745f6d6f64655f747261696e696e675f616e645f73657276696e672f74656e736f72666c6f775f7363726970745f6d6f64655f747261696e696e675f616e645f73657276696e672e6970796e62&nwo=aws%2Famazon-sagemaker-examples&path=sagemaker-python-sdk%2Ftensorflow_script_mode_training_and_serving%2Ftensorflow_script_mode_training_and_serving.ipynb&repository_id=107937815&repository_type=Repository) document explains how SageMaker runs inference containers.
###Code
# cell 08
predictor = mnist_estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
update_endpoint is a no-op in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
###Markdown
Deploy the trained TensorFlow 2.1 model to an endpoint.
###Code
# cell 09
predictor2 = mnist_estimator2.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
update_endpoint is a no-op in sagemaker>=2.
See: https://sagemaker.readthedocs.io/en/stable/v2.html for details.
###Markdown
Invoke the endpointLet's download the training data and use that as input for inference.
###Code
# cell 10
import numpy as np
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_data.npy train_data.npy
!aws --region {region} s3 cp s3://sagemaker-sample-data-{region}/tensorflow/mnist/train_labels.npy train_labels.npy
train_data = np.load('train_data.npy')
train_labels = np.load('train_labels.npy')
###Output
download: s3://sagemaker-sample-data-us-east-1/tensorflow/mnist/train_data.npy to ./train_data.npy
download: s3://sagemaker-sample-data-us-east-1/tensorflow/mnist/train_labels.npy to ./train_labels.npy
###Markdown
The formats of the input and the output data correspond directly to the request and response formats of the Predict method in the [TensorFlow Serving REST API](https://www.tensorflow.org/serving/api_rest). SageMaker's TensorFlow Serving endpoints can also accept additional input formats that are not part of the TensorFlow REST API, including the simplified JSON format, line-delimited JSON objects ("jsons" or "jsonlines"), and CSV data. In this example we are using a numpy array as input, which will be serialized into the simplified JSON format. In addition, TensorFlow Serving can also process multiple items at once as you can see in the following code. You can find the complete documentation on how to make predictions against a TensorFlow Serving SageMaker endpoint [here](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst#making-predictions-against-a-sagemaker-endpoint).
###Code
# cell 11
predictions = predictor.predict(train_data[:50])
for i in range(0, 50):
prediction = predictions['predictions'][i]['classes']
label = train_labels[i]
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
###Output
prediction is 7, label is 7, matched: True
prediction is 3, label is 3, matched: True
prediction is 4, label is 4, matched: True
prediction is 6, label is 6, matched: True
prediction is 1, label is 1, matched: True
prediction is 8, label is 8, matched: True
prediction is 1, label is 1, matched: True
prediction is 0, label is 0, matched: True
prediction is 9, label is 9, matched: True
prediction is 8, label is 8, matched: True
prediction is 0, label is 0, matched: True
prediction is 3, label is 3, matched: True
prediction is 1, label is 1, matched: True
prediction is 2, label is 2, matched: True
prediction is 7, label is 7, matched: True
prediction is 0, label is 0, matched: True
prediction is 2, label is 2, matched: True
prediction is 9, label is 9, matched: True
prediction is 6, label is 6, matched: True
prediction is 0, label is 0, matched: True
prediction is 1, label is 1, matched: True
prediction is 6, label is 6, matched: True
prediction is 7, label is 7, matched: True
prediction is 1, label is 1, matched: True
prediction is 9, label is 9, matched: True
prediction is 7, label is 7, matched: True
prediction is 6, label is 6, matched: True
prediction is 5, label is 5, matched: True
prediction is 5, label is 5, matched: True
prediction is 8, label is 8, matched: True
prediction is 8, label is 8, matched: True
prediction is 3, label is 3, matched: True
prediction is 4, label is 4, matched: True
prediction is 4, label is 4, matched: True
prediction is 8, label is 8, matched: True
prediction is 7, label is 7, matched: True
prediction is 3, label is 3, matched: True
prediction is 6, label is 6, matched: True
prediction is 4, label is 4, matched: True
prediction is 6, label is 6, matched: True
prediction is 6, label is 6, matched: True
prediction is 3, label is 3, matched: True
prediction is 1, label is 8, matched: False
prediction is 8, label is 8, matched: True
prediction is 9, label is 9, matched: True
prediction is 9, label is 9, matched: True
prediction is 4, label is 4, matched: True
prediction is 4, label is 4, matched: True
prediction is 0, label is 0, matched: True
prediction is 7, label is 7, matched: True
###Markdown
Examine the prediction result from the TensorFlow 2.1 model.
###Code
# cell 12
predictions2 = predictor2.predict(train_data[:50])
for i in range(0, 50):
prediction = np.argmax(predictions2['predictions'][i])
label = train_labels[i]
print('prediction is {}, label is {}, matched: {}'.format(prediction, label, prediction == label))
###Output
prediction is 3, label is 7, matched: False
prediction is 3, label is 3, matched: True
prediction is 9, label is 4, matched: False
prediction is 6, label is 6, matched: True
prediction is 1, label is 1, matched: True
prediction is 8, label is 8, matched: True
prediction is 1, label is 1, matched: True
prediction is 0, label is 0, matched: True
prediction is 9, label is 9, matched: True
prediction is 8, label is 8, matched: True
prediction is 0, label is 0, matched: True
prediction is 3, label is 3, matched: True
prediction is 1, label is 1, matched: True
prediction is 3, label is 2, matched: False
prediction is 7, label is 7, matched: True
prediction is 0, label is 0, matched: True
prediction is 2, label is 2, matched: True
prediction is 9, label is 9, matched: True
prediction is 6, label is 6, matched: True
prediction is 0, label is 0, matched: True
prediction is 1, label is 1, matched: True
prediction is 6, label is 6, matched: True
prediction is 7, label is 7, matched: True
prediction is 1, label is 1, matched: True
prediction is 9, label is 9, matched: True
prediction is 7, label is 7, matched: True
prediction is 6, label is 6, matched: True
prediction is 5, label is 5, matched: True
prediction is 5, label is 5, matched: True
prediction is 8, label is 8, matched: True
prediction is 8, label is 8, matched: True
prediction is 3, label is 3, matched: True
prediction is 4, label is 4, matched: True
prediction is 4, label is 4, matched: True
prediction is 8, label is 8, matched: True
prediction is 7, label is 7, matched: True
prediction is 3, label is 3, matched: True
prediction is 6, label is 6, matched: True
prediction is 4, label is 4, matched: True
prediction is 6, label is 6, matched: True
prediction is 6, label is 6, matched: True
prediction is 3, label is 3, matched: True
prediction is 8, label is 8, matched: True
prediction is 8, label is 8, matched: True
prediction is 9, label is 9, matched: True
prediction is 9, label is 9, matched: True
prediction is 4, label is 4, matched: True
prediction is 4, label is 4, matched: True
prediction is 0, label is 0, matched: True
prediction is 7, label is 7, matched: True
###Markdown
Delete the endpointLet's delete the endpoint we just created to prevent incurring any extra costs and then [verify](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html) that it has been removed.
###Code
# cell 13
predictor.delete_endpoint()
# cell 14
predictor2.delete_endpoint()
###Output
_____no_output_____ |
in-class-exercises/week-4 in-class exercises functions.ipynb | ###Markdown
Week 3 - Functions The real power in any programming language is the **Function**.A function is:* a little block of script (one line or many) that performs specific task or a series of tasks.* reusable and helps us make our code DRY.* triggered when something "invokes" or "calls" it.* ideally modular – it performs a narrow task and you call several functions to perform more complex tasks.
###Code
def myFunction(number1, number2):
print(f"My first input is {number1} and the second number is {number2}.")
total = number1 + number2
print(f"The total is {total}!")
## Call myFunction using 4 and 5 as the arguments
myFunction(4, 5)
## Call myFunction using 10 and 2 as the arguments
## you might forget what arguments are needed for the function to work.
## you can add notes that appear on shift-tab as you call the function.
## call the function using 3 and 4 as arguments
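# A possible sketch (not from the original exercise): the same function with a docstring,
# which Jupyter shows on Shift-Tab when you call it
def myFunction(number1, number2):
    """Take two numbers, print them, and print their total."""
    print(f"My first input is {number1} and the second number is {number2}.")
    total = number1 + number2
    print(f"The total is {total}!")

myFunction(10, 2)
myFunction(3, 4)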
###Output
_____no_output_____
###Markdown
To use or not use functions?Let's compare the two options with a simple example:
###Code
## You have a list of numbers.
mylist1 = [1, -5, 22, -44.2, 33, -45]
## Turn each number into an absolute number.
## a for loop works perfectly fine here.
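# A minimal sketch (assumed solution, not from the original exercise):
# loop over the list and print each item's absolute value
for number in mylist1:
    print(abs(number))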
## The problem is that your project keeps generating more lists.
## Each list of numbers has to be turned into absolute numbers
mylist2 = [-56, -34, -75, -111, -22]
mylist3 = [-100, -200, 100, -300, -100]
mylist4 = [-23, -89, -11, -45, -27]
mylist5 = [0, 1, 2, 3, 4, 5]
###Output
_____no_output_____
###Markdown
DRY Do you keep writing for loops for each list? No, that's a lot of repetition! DRY stands for "Don't Repeat Yourself"
###Code
## Instead we write a function that takes a list,
## converts each list item to an absolute number,
## and prints out the number
## Try swapping out different lists into the function:
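# One possible implementation (sketch); the name print_absolutes is illustrative
def print_absolutes(numbers):
    for number in numbers:
        print(abs(number))

# Swap in any of the lists defined above
print_absolutes(mylist2)
print_absolutes(mylist3)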
###Output
_____no_output_____
###Markdown
Timesaver Imagine for a moment that your editor tells you that the calculation needs to be updated. Instead of needing the absolute number, you need the absolute number minus 5. Having used multiple for loops, you'd have to change each one. What if you miss one or two? Either way, it's a chore. With functions, you just revise the function and the update runs everywhere.
###Code
## You scrape a site and each datapoint is stored in different lists
firstName = ["Irene", "Ursula", "Elon", "Tim"]
lastName = ["Rosenfeld", "Burns", "Musk", "Cook"]
title = ["Chairman and CEO", "Chairman and CEO", "CEO", "CEO"]
company = ["Kraft Foods", "Xerox", "Tesla", "Apple"]
industry = ["Food and Beverage", "Process and Document Management", "Auto Manufacturing", "Consumer Technology"]
## Zip all the lists into a dictionary using a for loop
bio_list = []
for (fname, lname, rank, field) in zip(firstName, lastName, title, industry ):
bio_dict = {"first_name": fname, "last_name": lname, "title": rank, "industry": field}
bio_list.append(bio_dict)
print(bio_list)
## Convert it into a function:
## Call the function
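# A possible function version of the loop above (sketch); the name make_bio_list is illustrative
def make_bio_list(first_names, last_names, titles, industries):
    bios = []
    for (fname, lname, rank, field) in zip(first_names, last_names, titles, industries):
        bios.append({"first_name": fname, "last_name": lname, "title": rank, "industry": field})
    print(bios)

make_bio_list(firstName, lastName, title, industry)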
###Output
_____no_output_____
###Markdown
Return Statements So far we have only printed out values processed by a function. But we really want to retain the value the function creates. We can then pass that value to other parts of our calculations and code.
###Code
## Simple example
## A function that adds two numbers together and prints the value:
## call the function with the numbers 2 and 4
## let's try to save it in a variable called myCalc
## Print myCalc. What does it hold?
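# Sketch of the print-only version described above; add_numbers is an assumed name
def add_numbers(number1, number2):
    print(number1 + number2)

# The call prints 6, but the function returns nothing, so myCalc holds None
myCalc = add_numbers(2, 4)
print(myCalc)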
###Output
_____no_output_____
###Markdown
The return Statement
###Code
## Tweak our function by adding return statement
## instead of printing a value we want to return a value(or values).
## call the function add_numbers_ret
## and store in variable called myCalc
## print myCalc
## What type is myCalc?
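# Sketch of the return-based version; the name add_numbers_ret comes from the comments above
def add_numbers_ret(number1, number2):
    return number1 + number2

myCalc = add_numbers_ret(2, 4)
print(myCalc)        # 6
print(type(myCalc))  # <class 'int'>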
###Output
_____no_output_____
###Markdown
Let's revise our earlier absolute values converter with a return statement Here is the earlier version:
###Code
## revised with for loop
## Let's test it by storing the return value in variable x
## What type of data object is it?
## Let's actually make that a list comprehension version of the function:
## Let's run the function on a list and store the absolute values in variable y
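# Sketch (assumed solution): the converter revised to return a list instead of printing
def return_absolutes(numbers):
    absolutes = []
    for number in numbers:
        absolutes.append(abs(number))
    return absolutes

x = return_absolutes(mylist1)
print(type(x))  # <class 'list'>

# List comprehension version; the name return_absolutes_lc is referenced later in this notebook
def return_absolutes_lc(numbers):
    return [abs(number) for number in numbers]

y = return_absolutes_lc(mylist2)
print(y)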
###Output
_____no_output_____
###Markdown
Functions that call other functions
###Code
## Two lists of values
## Our goal here is to convert these to absolute numbers and then sum each list.
## We'll do this in steps
someNumbers = [0,1,2,3,4,-5] # total added up is 5; absolute value total 15
negNumbers = [0,-1,-2,-3,-4, 5, -20] # total added up is -25; absolute value total 35
###Output
_____no_output_____
###Markdown
We already have a function called return_absolutes_lc that returns the absolute values in a list
###Code
## Let's write a function that returns the total of the items in a list
## Actually let's tweak that function to be more efficient
## test it on our two basic lists
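# Sketch of addAllNumbers (the name comes from the markdown below); sum() keeps it simple
def addAllNumbers(numbers):
    return sum(numbers)

print(addAllNumbers(someNumbers))  # 5
print(addAllNumbers(negNumbers))   # -25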
###Output
_____no_output_____
###Markdown
Each function works as expected. addAllNumbers - Returns the sum of a list. return_absolutes_lc - Returns the absolute values in a list. We can have a function **call** another function:
###Code
## Let's have addAllNumbers call return_absolutes_lc on the someNumbers list
## Let's have addAllNumbers call return_absolutes_lc on the negNumbers list
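# One possible sketch: redefine addAllNumbers so it calls return_absolutes_lc before summing
def addAllNumbers(a_list):
    absolutes = return_absolutes_lc(a_list)
    return sum(absolutes)

print(addAllNumbers(someNumbers))  # absolute value total: 15
print(addAllNumbers(negNumbers))   # absolute value total: 35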
###Output
_____no_output_____ |
.ipynb_checkpoints/Tarea1-checkpoint.ipynb | ###Markdown
Load the databases
###Code
import pandas as pd

pilotos = pd.read_csv('basepilotos.txt', sep = '\t', index_col = 0)
pilotos.head()
vuelos = pd.read_csv('basevuelos.txt', sep = '\t', index_col=0)
vuelos.head()
###Output
_____no_output_____
###Markdown
Create the model
###Code
# Assuming the Gurobi Python API here, since Model("name") matches the gurobipy constructor
from gurobipy import Model

modeloA = Model("ModeloA")
###Output
_____no_output_____ |
HeroesOfPymoli/main.ipynb | ###Markdown
Note* If charts are not rendered properly, please follow this link to an alternative notebook viewer. https://nbviewer.jupyter.org/github/loganbonsignore/pandas-challenge/blob/master/HeroesOfPymoli/main.ipynb
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# calculate total players
total_count = len(purchase_data["SN"].unique())
print(f"Total Players: {total_count}")
###Output
Total Players: 576
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# calculations
unique_items = len(purchase_data["Item Name"].unique())
avg_price = purchase_data["Price"].mean()
purchases = len(purchase_data["Purchase ID"])
revenue = purchase_data["Price"].sum()
# create dataframe with new data
summary_df = pd.DataFrame({
"Number of Items":[unique_items],
"Average Purchase Price":["${:,.2f}".format(avg_price)],
"Number of Purchases":[purchases],
"Total Revenue":["${:,.2f}".format(revenue)]
})
summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# create new dataframes based on Gender
male_df = purchase_data.loc[purchase_data["Gender"] == "Male",:]
female_df = purchase_data.loc[purchase_data["Gender"] == "Female",:]
other_df = purchase_data.loc[(purchase_data["Gender"] != "Male") & (purchase_data["Gender"] != "Female"),:]
# calculations
male_count = len(male_df["SN"].unique())
female_count = len(female_df["SN"].unique())
other_count = len(other_df["SN"].unique())
# format outputs
males_pct = "{:.2%}".format((male_count / total_count))
female_pct = "{:.2%}".format((female_count / total_count))
other_pct = "{:.2%}".format((other_count / total_count))
# create dataframe with new data
data = {
"Total Count":[male_count, female_count, other_count],
"Percent of User":[males_pct, female_pct, other_pct]
}
df = pd.DataFrame(data, index=["Males","Females","Other/Non-Disclosed"],columns=["Total Count","Percent of User"])
df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# calculate avg purchase price per gender
avg_purchase_male = male_df["Price"].mean()
avg_purchase_female = female_df["Price"].mean()
avg_purchase_other = other_df["Price"].mean()
# calculate total purchase value per gender
total_male = male_df["Price"].sum()
total_female = female_df["Price"].sum()
total_other = other_df["Price"].sum()
# calculate avg total purchase per person per gender
avg_total_male = total_male / len(male_df["SN"].unique())
avg_total_female = total_female / len(female_df["SN"].unique())
avg_total_other = total_other / len(other_df["SN"].unique())
#create dataframe with new data
data = {
"Purchase Count":[len(male_df), len(female_df), len(other_df)],
"Average Purchase Price":["${:,.2f}".format(avg_purchase_male), "${:,.2f}".format(avg_purchase_female), "${:,.2f}".format(avg_purchase_other)],
"Total Purchase Value":["${:,.2f}".format(total_male), "${:,.2f}".format(total_female), "${:,.2f}".format(total_other)],
"Avg Total Purchase Per Person":["${:,.2f}".format(avg_total_male),"${:,.2f}".format(avg_total_female),"${:,.2f}".format(avg_total_other)]
}
gender_summary_df = pd.DataFrame(data, index=["Males","Females","Other/Non-Disclosed"])
gender_summary_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# create bins and labels
bins = [0,9,14,19,24,29,34,39,200]
labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
# slice data into bins, group by new bins
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"],bins=bins,labels=labels)
duplicates_dropped_df = purchase_data.drop_duplicates("SN")
pd_gb = duplicates_dropped_df.groupby("Age Ranges")
# calculations
count = pd_gb["Age Ranges"].count()
pct = pd_gb["SN"].count() / pd_gb["SN"].count().sum()
pct = pct.map("{:.2%}".format)
# create dataframe with new data
data = {
"Total Count":count,
"Percentage of Players":pct
}
age_demo_summary = pd.DataFrame(data)
age_demo_summary
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# create bins and labels
bins = [0,9,14,19,24,29,34,39,200]
labels = ["<10","10-14","15-19","20-24","25-29","30-34","35-39","40+"]
# slice data into bins, group by Age Ranges
purchase_data["Age Ranges"] = pd.cut(purchase_data["Age"],bins=bins,labels=labels)
age_gb = purchase_data.groupby("Age Ranges")
# calculations
pur_count = age_gb["Age Ranges"].count()
avg_pur = age_gb["Price"].mean()
total_pur = age_gb["Price"].sum()
avg_purchase_person = total_pur / pd_gb["Age Ranges"].count()
# format outputs
avg_pur = avg_pur.map("${:,.2f}".format)
total_pur = total_pur.map("${:,.2f}".format)
avg_purchase_person = avg_purchase_person.map("${:,.2f}".format)
# create dataframe with new data
data = {
"Purchase Count":pur_count,
"Average Purchase Price":avg_pur,
"Total Purchase Value":total_pur,
"Average Total Purchase Per Person":avg_purchase_person
}
age_demo_df = pd.DataFrame(data)
age_demo_df
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# create groupby on "SN"
names_gb = purchase_data.groupby("SN")
# calculations
purchase_counts = purchase_data["SN"].value_counts()
avg_purchase_price = names_gb["Price"].mean()
total_purchase_price = names_gb["Price"].sum()
# create dataframe with new data
data = {
"Purchase Counts":purchase_counts,
"Average Purchase Price":avg_purchase_price,
"Total Purchase Price":total_purchase_price
}
df = pd.DataFrame(data)
# sort and format data
df = df.sort_values("Total Purchase Price",ascending=False)
df["Average Purchase Price"] = df["Average Purchase Price"].map("${:,.2f}".format)
df["Total Purchase Price"] = df["Total Purchase Price"].map("${:,.2f}".format)
df.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# retrieve needed columns, create groupby
pop_df = purchase_data.loc[:,["Item ID","Item Name","Price"]]
pop_gb = pop_df.groupby(["Item ID","Item Name"])
# calculate and sort variables
total_count = pop_gb.count().sort_values("Price",ascending=False)
item_price = pop_gb.mean()
total_purchase_value = pop_gb.sum().sort_values("Price",ascending=False)
#rename columns
total_count = total_count.rename(columns={"Price":"Purchase Count"})
item_price = item_price.rename(columns={"Price":"Item Price"})
total_purchase_value = total_purchase_value.rename(columns={"Price":"Total Purchase Value"})
# format currency values
item_price = item_price["Item Price"].map("${:.2f}".format)
total_purchase_value = total_purchase_value["Total Purchase Value"].map("${:.2f}".format)
# create dataframe with new data
summary_table = pd.concat([total_count,item_price,total_purchase_value],axis=1)
popular_items = summary_table.sort_values("Purchase Count",ascending=False)
popular_items.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# retrieve needed columns, create groupby
pop_df = purchase_data.loc[:,["Item ID","Item Name","Price"]]
pop_gb = pop_df.groupby(["Item ID","Item Name"])
# calculate variables
total_count = pop_gb.count().sort_values("Price",ascending=False)
total_purchase_value = pop_gb.sum().sort_values("Price",ascending=False)
item_price = pop_gb.mean()
# rename columns, format "item price" series
total_count = total_count.rename(columns={"Price":"Purchase Count"})
total_purchase_value = total_purchase_value.rename(columns={"Price":"Total Purchase Value"})
item_price = item_price.rename(columns={"Price":"Item Price"})
item_price = item_price["Item Price"].map("${:.2f}".format)
# create dataframe sorted by Total Purchase Value
summary_table = pd.concat([total_count,item_price,total_purchase_value],axis=1)
popular_items = summary_table.sort_values("Total Purchase Value",ascending=False)
popular_items["Total Purchase Value"] = popular_items["Total Purchase Value"].map("${:.2f}".format)
popular_items.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
import os
# File to Load (Remember to Change These)
file_to_load = os.path.join("..","Resources","purchase_data.csv")
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#Use nunique() to count the number of unique players (returns an integer count)
group_number_of_players = purchase_data["SN"].nunique()
#Create data frame to display data
df_player = pd.DataFrame({
"Total Players": [group_number_of_players]
})
df_player
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Unique Items
unique_items = purchase_data["Item ID"].value_counts()
unique_items = unique_items.count()
#Average Price
total_avg_price = purchase_data["Price"].mean()
#Number of Purchases
total_num_of_pur = purchase_data["Item Name"].count()
#Total Revenue
total_rev = purchase_data["Price"].sum()
#creating a summary dataframe
summary_df = pd.DataFrame({
'Number of Unique Items': [unique_items],
'Average Price': "${:.2f}".format(total_avg_price),
'Number of Purchases': [total_num_of_pur],
'Total Revenue': "${:,.2f}".format(total_rev)
})
summary_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#creating a copy of the data frame
gender = purchase_data[["SN", "Gender"]].copy()
#drop duplicate SN to have the true amount of gender
gender.drop_duplicates("SN", keep = "first", inplace = True)
#The count of Gender
gender_count_df = gender["Gender"].value_counts()
#Find the percentage of Genders within the DF
gender_per_df = gender_count_df/gender["Gender"].count()
#Creating a data frame summary
demographics_summary = pd.DataFrame ({
"Total Count":gender_count_df, 'Percentage of Players': gender_per_df.map("{:.2%}".format)
})
demographics_summary
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#A copy of purchase data to drop duplicates from column SN
purchase_date_copy = purchase_data.copy()
#Drop duplicate SNs so each player is counted only once per gender
purchase_date_copy.drop_duplicates("SN", keep = "first", inplace = True)
#group copy by gender
grouped_gender_df_copy = purchase_date_copy.groupby(["Gender"])
#group by gender
grouped_gender_df = purchase_data.groupby(["Gender"])
#Purchase count
gender_purchase_count = grouped_gender_df["Price"].count()
#Average purchase
gender_avg_purchase = grouped_gender_df["Price"].mean()
#Total purchase price
gender_total_purchase_value = grouped_gender_df["Price"].sum()
#Average total purchase per person
gender_avg_total = grouped_gender_df["Price"].sum()/grouped_gender_df_copy["Gender"].count()
#DF for Purchase Analysis
purchasing_analysis = pd.DataFrame ({
"Purchase Count":gender_purchase_count,
'Average Purchase Price': gender_avg_purchase.map("${:,.2f}".format),
'Total Purchase Value': gender_total_purchase_value.map("${:,.2f}".format),
'Avg Total Purchase per Person': gender_avg_total.map("${:,.2f}".format)
})
purchasing_analysis
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#Defining my bin
bins = [0, 9, 14, 19, 24, 29, 34, 39, 99]
#Defining the labels that go with the bins; the label list always has one fewer entry than the bin list
age_range_label = ["< 10","10-14","15-19","20-24","25-29","30-34","35-39","40 +"]
#Use the cut function to create an Age Range column that assigns each age to its corresponding bin
purchase_date_copy["Age Range"] = pd.cut(purchase_date_copy["Age"], bins, labels = age_range_label)
#Count the bins that the ages are stored in
demographics_total_count = purchase_date_copy["Age Range"].value_counts()
#Divide each bin's count by the total number of players to get the percentage
demographics_percentage = demographics_total_count/purchase_date_copy["Age Range"].count()
#Add the data into a dataframe and format the percentages to two decimal places
demographics_table = pd.DataFrame ({
"Total Count": demographics_total_count, "Percentage of Players": demographics_percentage.map("{:.2%}".format)
})
#Sort the index in ascending order
demographics_table = demographics_table.sort_index()
demographics_table
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Create a copy of purchase data that keeps duplicate SNs, so it contains every purchase record
purchase_analysis_data = purchase_data.copy()
#Use the cut function to create an Age Range column that assigns each age to its corresponding bin
purchase_analysis_data["Age Range"] = pd.cut(purchase_analysis_data["Age"], bins, labels = age_range_label)
#Group the data frame by column Age Range
grouped_analysis_data = purchase_analysis_data.groupby(["Age Range"])
#Find the purchase count per age range
age_purchasing_count = grouped_analysis_data["Price"].count()
#Find the average purchase price
age_avg_purchasing_price = grouped_analysis_data["Price"].mean()
#Find the total value for each age range
age_total_purchase_value = grouped_analysis_data["Price"].sum()
#Use the de-duplicated copy from earlier (duplicate SNs removed) to count players per age range
per_person_count = purchase_date_copy.groupby(["Age Range"])["Price"].count()
#Find the age total per person by dividing sum with count per person
age_total_purchase_per_person = age_total_purchase_value/per_person_count
#Create a data frame
age_purchas_analysis = pd.DataFrame ({
"Purchase Count": age_purchasing_count,
"Average Purchase Price": age_avg_purchasing_price.map("${:,.2f}".format),
"Total Purchase Value": age_total_purchase_value.map("${:,.2f}".format),
"Avg Total Purchase per Person": age_total_purchase_per_person.map("${:,.2f}".format)
})
age_purchas_analysis
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Group by SN
top_spenders = purchase_data.groupby(["SN"])
#Count the number of purchases made by each user
spender_count = top_spenders["Price"].count()
#Average Purchase Price
spender_avg_purchase_price = top_spenders["Price"].mean()
#Total Purchase Value
spender_tot_purchase_value = top_spenders["Price"].sum()
#Created a summary data frame
top_spenders_summary = pd.DataFrame({
"Purchase Count" : spender_count,
"Average Purchase Price": spender_avg_purchase_price.map("${:,.2f}".format),
"Total Purchase Value": spender_tot_purchase_value
})
#Sort the Top Spender Summary in descending order
sort_top_spenders_summary = top_spenders_summary.sort_values("Total Purchase Value", ascending = False)
#After sorting format the column Total Purchase Value. If you format before, it changes the descending order
sort_top_spenders_summary["Total Purchase Value"] = sort_top_spenders_summary["Total Purchase Value"].map("${:,.2f}".format)
#print head
sort_top_spenders_summary.head()
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#Retrieve the Item ID, Item Name, and Item Price columns
popular_items = purchase_data[["Item ID", "Item Name", "Price"]]
#Group by Item ID and Item Name
group_popular_items = popular_items.groupby(["Item ID","Item Name"])
#Count of purchases per item (kept as a DataFrame so the summary can be built without a scalar-values error)
popular_purchas_count = group_popular_items.count()
#Average price per item (kept as a DataFrame for the same reason)
popular_item_price = group_popular_items.mean()
#Total purchase value per item (kept as a DataFrame for the same reason)
popular_item_sum = group_popular_items.sum()
#Create a Data frame
most_popular_items = pd.DataFrame({
"Purchase Count":popular_purchas_count["Price"],
"Item Price": popular_item_price["Price"],
"Total Purchase Value": popular_item_sum["Price"]
})
#Formatting the item price and total purchase value
most_popular_items["Item Price"] = most_popular_items["Item Price"].map("${:,.2f}".format)
most_popular_items["Total Purchase Value"] = most_popular_items["Total Purchase Value"].map("${:,.2f}".format)
#Sort values from descending purchase count
sort_most_popular_items = most_popular_items.sort_values("Purchase Count", ascending = False)
sort_most_popular_items.head()
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#Create a function to convert string currency to float
#remove $, commas, and convert to float
def convert_cur(val):
if type(val) == str:
new_val = val.replace(',','').replace('$', '')
else:
return float(val)
return float(new_val)
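# e.g. convert_cur("$3,250.50") -> 3250.5 and convert_cur(12) -> 12.0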
#Apply the function to the Total Purchase Value
most_popular_items["Total Purchase Value"] = most_popular_items["Total Purchase Value"].apply(convert_cur)
#Sort column Total Purchase Value
sort_total_purchase_items = most_popular_items.sort_values("Total Purchase Value", ascending = False)
#Apply currency formatting to Total Purchase Value
sort_total_purchase_items["Total Purchase Value"] = sort_total_purchase_items["Total Purchase Value"].map("${:,.2f}".format)
sort_total_purchase_items.head()
###Output
_____no_output_____ |
endsem/.ipynb_checkpoints/genetic_algorithms-checkpoint.ipynb | ###Markdown
Using Elitism. Average results: the minimum fitness value is between -8.7 and -9.50, and most of the time it is near -8.7. It also depends on the population size; with a larger population (e.g. >1000) it sometimes reached a value of -9.7.
###Code
# imports used throughout this notebook
import random
import numpy as np
import matplotlib.pyplot as plt

function_val_epoch_elitism = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = 1000
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
CROSSOVER_PROB = 0.1
h = 1e-7
X_SIZE = 5
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
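# Combined objective being minimised:
#   g(X) = sum_i x_i^2  +  sum_i floor(x_i)  +  sum_i i*x_i^4  +  N(0, 1) noise,   i = 0..len(X)-1
# The Gaussian term in f3 makes every fitness evaluation slightly noisy.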
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
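# Illustrative examples of the digit encoding used by getNum (the first gene acts as a sign bit):
#   getNum(["1", "0", "7", "6"]) -> -0.76
#   getNum(["0", "1", "2", "5"]) ->  1.25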
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
global CROSSOVER_PROB
child_chromosome = []
for gp1 , gp2 in zip(self.chromosome , par2.chromosome):
child_part_chromosome = []
# print(gp1)
for i in range(len(gp1)):
probability_of_crossover = random.random()
if (probability_of_crossover > CROSSOVER_PROB):
# do crossover
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
else:
# do mutation
child_part_chromosome.append(self.mutate(i))
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
generation = 1
count = 1000
population = []
for _ in range(POPULATION_SIZE):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*POPULATION_SIZE)
new_generation.extend(population[:s])
s = int(0.90*POPULATION_SIZE)
for _ in range(s):
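        # parents are drawn only from the fitter (lower-fitness) half of the sorted population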
parent1 = random.choice(population[:POPULATION_SIZE//2])
parent2 = random.choice(population[:POPULATION_SIZE//2])
child = parent1.mate(parent2)
new_generation.append(child)
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_elitism.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_elitism)) , function_val_epoch_elitism , "k--")
###Output
Gen: 5 X: [-0.8, -0.76, -0.09, -0.06, -0.4] Fit: -5.1163667428183
Gen: 10 X: [-0.8, -0.76, -0.09, -0.06, -0.4] Fit: -5.1163667428183
Gen: 15 X: [-0.8, -0.76, -0.09, -0.06, -0.4] Fit: -5.1163667428183
Gen: 20 X: [-0.8, -0.76, -0.09, -0.06, -0.4] Fit: -5.1163667428183
Gen: 25 X: [-0.7, -0.11, -0.61, -0.17, -0.19] Fit: -5.879661700246153
Gen: 30 X: [-0.7, -0.11, -0.61, -0.17, -0.19] Fit: -5.879661700246153
Gen: 35 X: [-0.44, -0.02, -0.16, -0.34, -0.02] Fit: -6.570418404404528
Gen: 40 X: [-0.05, -0.16, -0.23, -0.39, -0.09] Fit: -6.822674138367253
Gen: 45 X: [-0.05, -0.16, -0.23, -0.39, -0.09] Fit: -6.822674138367253
Gen: 50 X: [-0.09, -0.76, -0.05, -0.11, -0.04] Fit: -6.891377926732752
Gen: 55 X: [-0.09, -0.76, -0.05, -0.11, -0.04] Fit: -6.891377926732752
Gen: 60 X: [-0.06, -0.57, -0.39, -0.31, -0.17] Fit: -7.043979757113863
Gen: 65 X: [-1.02, -0.14, -0.23, -0.31, -0.01] Fit: -7.286555813344302
Gen: 70 X: [-0.25, -0.05, -0.34, -0.21, -0.2] Fit: -7.361184462392697
Gen: 75 X: [-0.23, -0.13, -0.06, -0.02, -0.1] Fit: -7.444378431826655
Gen: 80 X: [-0.23, -0.13, -0.06, -0.02, -0.1] Fit: -7.444378431826655
Gen: 85 X: [-0.23, -0.13, -0.06, -0.02, -0.1] Fit: -7.444378431826655
Gen: 90 X: [-0.23, -0.13, -0.06, -0.02, -0.1] Fit: -7.444378431826655
Gen: 95 X: [-0.22, -0.14, -0.17, -0.14, -0.13] Fit: -7.9914511773812755
Gen: 100 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 105 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 110 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 115 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 120 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 125 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 130 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 135 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 140 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 145 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 150 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 155 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 160 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 165 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 170 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 175 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 180 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 185 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 190 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 195 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 200 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 205 X: [-0.09, -0.37, -0.15, -0.06, -0.03] Fit: -8.514878086370487
Gen: 210 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 215 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 220 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 225 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 230 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 235 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 240 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 245 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 250 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 255 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 260 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 265 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 270 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 275 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 280 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 285 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 290 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 295 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 300 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 305 X: [-1.05, -0.15, -0.35, -0.23, -0.14] Fit: -8.78976048757957
Gen: 310 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 315 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 320 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 325 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 330 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 335 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 340 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 345 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 350 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 355 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 360 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 365 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 370 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 375 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 380 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 385 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 390 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 395 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 400 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 405 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 410 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 415 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 420 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 425 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 430 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 435 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 440 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 445 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 450 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 455 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 460 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 465 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 470 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 475 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 480 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 485 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 490 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 495 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 500 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 505 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 510 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 515 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 520 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 525 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 530 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 535 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 540 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 545 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 550 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 555 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 560 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 565 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 570 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 575 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 580 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 585 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 590 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 595 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 600 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 605 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 610 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 615 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 620 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 625 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 630 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 635 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 640 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 645 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 650 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 655 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 660 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 665 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 670 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 675 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 680 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 685 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 690 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 695 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 700 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 705 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 710 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 715 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 720 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 725 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 730 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 735 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 740 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 745 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 750 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 755 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 760 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 765 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 770 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 775 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 780 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 785 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 790 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 795 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 800 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 805 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 810 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 815 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 820 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 825 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 830 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 835 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 840 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 845 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 850 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 855 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 860 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 865 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 870 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 875 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 880 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 885 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 890 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 895 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 900 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 905 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 910 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 915 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 920 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 925 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 930 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 935 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 940 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 945 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 950 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 955 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 960 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 965 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 970 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 975 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 980 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 985 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 990 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 995 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 1000 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Fit: -9.023994382165274
Gen: 1001 X: [-0.16, -0.17, -0.05, -0.21, -0.08] Minimum Value: -9.023994382165274
###Markdown
Using the Basic Genetic Algorithm. The minimum fitness value achieved is between -8.4 and -9.0.
###Code
function_val_epoch_basic_genetic = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = 1000
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
CROSSOVER_PROB = 0.1
h = 1e-7
X_SIZE = 5
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
child_chromosome = []
global CROSSOVER_PROB
for gp1 , gp2 in zip(self.chromosome , par2.chromosome):
child_part_chromosome = []
# print(gp1)
for i in range(len(gp1)):
probability_of_crossover = random.random()
if (probability_of_crossover > CROSSOVER_PROB):
# do crossover
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
else:
# do mutation
child_part_chromosome.append(self.mutate(i))
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
generation = 1
count = 1000
population = []
for _ in range(POPULATION_SIZE):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*POPULATION_SIZE)
new_generation.extend(population[:s])
s = int(0.90*POPULATION_SIZE)
for _ in range(s):
        # parents are chosen from the entire population, not just the fittest half
parent1 = random.choice(population[:POPULATION_SIZE])
parent2 = random.choice(population[:POPULATION_SIZE])
child = parent1.mate(parent2)
new_generation.append(child)
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_basic_genetic.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_basic_genetic)) , function_val_epoch_basic_genetic , "k--")
###Output
Gen: 5 X: [-0.43, 0.09, -0.06, -0.04, 0.05] Fit: -3.7865567921665204
Gen: 10 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 15 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 20 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 25 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 30 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 35 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 40 X: [-0.48, 0.64, -0.35, -0.11, -0.38] Fit: -5.0429321164847725
Gen: 45 X: [-0.3, -0.23, 0.57, -0.05, -0.36] Fit: -5.482656471541309
Gen: 50 X: [-0.3, -0.23, 0.57, -0.05, -0.36] Fit: -5.482656471541309
Gen: 55 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 60 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 65 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 70 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 75 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 80 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 85 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 90 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 95 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 100 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 105 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 110 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 115 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 120 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 125 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 130 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 135 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 140 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 145 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 150 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 155 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 160 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 165 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 170 X: [-0.48, -0.31, -0.33, -0.15, -0.13] Fit: -6.622905122564039
Gen: 175 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 180 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 185 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 190 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 195 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 200 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 205 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 210 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 215 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 220 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 225 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 230 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 235 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 240 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 245 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 250 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 255 X: [-0.44, -0.19, 0.01, -0.5, -0.28] Fit: -6.986517360260253
Gen: 260 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 265 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 270 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 275 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 280 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 285 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 290 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 295 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 300 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 305 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 310 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 315 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 320 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 325 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 330 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 335 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 340 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 345 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 350 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 355 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 360 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 365 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 370 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 375 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 380 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 385 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 390 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 395 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 400 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 405 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 410 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 415 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 420 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 425 X: [-0.4, -0.59, -0.3, -0.01, -0.23] Fit: -7.843382617785142
Gen: 430 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 435 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 440 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 445 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 450 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 455 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 460 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 465 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 470 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 475 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 480 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 485 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 490 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 495 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 500 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 505 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 510 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 515 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 520 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 525 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 530 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 535 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 540 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 545 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 550 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 555 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 560 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 565 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 570 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 575 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 580 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 585 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 590 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 595 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 600 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 605 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 610 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 615 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 620 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 625 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 630 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 635 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 640 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 645 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 650 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 655 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 660 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 665 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 670 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 675 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 680 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 685 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 690 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 695 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 700 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 705 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 710 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 715 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 720 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 725 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 730 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 735 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 740 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 745 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 750 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 755 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 760 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 765 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 770 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 775 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 780 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 785 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 790 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 795 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 800 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 805 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 810 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 815 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 820 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 825 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 830 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 835 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 840 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 845 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 850 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 855 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 860 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 865 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 870 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 875 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 880 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 885 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 890 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 895 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 900 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 905 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 910 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 915 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 920 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 925 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 930 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 935 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 940 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 945 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 950 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 955 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 960 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 965 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 970 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 975 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 980 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 985 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 990 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 995 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 1000 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Fit: -8.121080937869879
Gen: 1001 X: [-0.23, -0.06, -0.03, -0.07, -0.27] Minimum Value: -8.121080937869879
###Markdown
Using Diversity:
###Code
function_val_epoch_diversity = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = 1000
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
h = 1e-7
X_SIZE = 5
DIVERSITY_PERCENT = 50
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
global DIVERSITY_PERCENT
tot = len(self.chromosome)
diversity_idx_arr = np.random.choice(range(tot) ,
replace=False ,
size=int(DIVERSITY_PERCENT*tot / 100))
child_chromosome = []
for j , gp1 , gp2 in zip(range(tot) , self.chromosome , par2.chromosome):
child_part_chromosome = []
for i in range(len(gp1)):
if (j*tot+i) in diversity_idx_arr:
child_part_chromosome.append(self.mutate(i))
else:
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
generation = 1
count = 1000
population = []
for _ in range(POPULATION_SIZE):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*POPULATION_SIZE)
new_generation.extend(population[:s])
s = int(0.90*POPULATION_SIZE)
for _ in range(s):
        # parents are chosen from the entire population, not just the fittest half
parent1 = random.choice(population[:POPULATION_SIZE])
parent2 = random.choice(population[:POPULATION_SIZE])
child = parent1.mate(parent2)
new_generation.append(child)
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_diversity.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_diversity)) , function_val_epoch_diversity , "k--")
###Output
Gen: 5 X: [-0.02, -0.1, -0.02, -0.24, -0.61] Fit: -3.937674329374512
Gen: 10 X: [-1.54, -0.15, -0.4, -0.29, -0.27] Fit: -4.312514106136908
Gen: 15 X: [-1.54, -0.15, -0.4, -0.29, -0.27] Fit: -4.312514106136908
Gen: 20 X: [-0.26, -0.82, -0.22, -0.57, -0.37] Fit: -4.964899514920933
Gen: 25 X: [-0.26, -0.82, -0.22, -0.57, -0.37] Fit: -4.964899514920933
Gen: 30 X: [0.68, -0.07, -0.12, -0.54, -0.27] Fit: -5.289523224350354
Gen: 35 X: [0.68, -0.07, -0.12, -0.54, -0.27] Fit: -5.289523224350354
Gen: 40 X: [-0.42, -0.1, -0.02, -0.36, -0.14] Fit: -5.527993746677225
Gen: 45 X: [-0.42, -0.1, -0.02, -0.36, -0.14] Fit: -5.527993746677225
Gen: 50 X: [-0.42, -0.1, -0.02, -0.36, -0.14] Fit: -5.527993746677225
Gen: 55 X: [-1.12, -0.41, -0.35, -0.37, -0.55] Fit: -6.05226898572454
Gen: 60 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 65 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 70 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 75 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 80 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 85 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 90 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 95 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 100 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 105 X: [0.12, -0.11, -0.41, -0.27, -0.1] Fit: -6.453007986388743
Gen: 110 X: [-1.31, -0.12, -0.27, -0.17, -0.15] Fit: -6.581034658487475
Gen: 115 X: [0.14, -0.11, -0.02, -0.22, -0.18] Fit: -7.0339500201100496
Gen: 120 X: [0.14, -0.11, -0.02, -0.22, -0.18] Fit: -7.0339500201100496
Gen: 125 X: [0.14, -0.11, -0.02, -0.22, -0.18] Fit: -7.0339500201100496
Gen: 130 X: [-0.14, -0.17, -0.23, -0.35, -0.11] Fit: -7.168568732811705
Gen: 135 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 140 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 145 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 150 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 155 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 160 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 165 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 170 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 175 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 180 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 185 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 190 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 195 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 200 X: [-0.31, -0.51, -0.09, -0.3, -0.17] Fit: -7.755133476096957
Gen: 205 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 210 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 215 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 220 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 225 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 230 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 235 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 240 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 245 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 250 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 255 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 260 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 265 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 270 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 275 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 280 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 285 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 290 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 295 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 300 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 305 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 310 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 315 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 320 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 325 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 330 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 335 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 340 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 345 X: [-1.03, -0.18, -0.01, -0.06, -0.08] Fit: -7.8096676130693465
Gen: 350 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 355 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 360 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 365 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 370 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 375 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 380 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 385 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 390 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 395 X: [-0.41, -0.2, -0.01, -0.03, -0.19] Fit: -7.87288705220368
Gen: 400 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 405 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 410 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 415 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 420 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 425 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 430 X: [-1.27, -0.11, -0.21, -0.14, -0.29] Fit: -8.23279646265678
Gen: 435 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 440 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 445 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 450 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 455 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 460 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 465 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 470 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 475 X: [-0.08, -0.07, -0.09, -0.11, -0.15] Fit: -8.368935459960822
Gen: 480 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 485 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 490 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 495 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 500 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 505 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 510 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 515 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 520 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 525 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 530 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 535 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 540 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 545 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 550 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 555 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 560 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 565 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 570 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 575 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 580 X: [-1.13, -0.17, -0.22, -0.02, -0.06] Fit: -8.522227663008454
Gen: 585 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 590 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 595 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 600 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 605 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 610 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 615 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 620 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 625 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 630 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 635 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 640 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 645 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 650 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 655 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 660 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 665 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 670 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 675 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 680 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 685 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 690 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 695 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 700 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 705 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 710 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 715 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 720 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 725 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 730 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 735 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 740 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 745 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 750 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 755 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 760 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 765 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 770 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 775 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 780 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 785 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 790 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 795 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 800 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 805 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 810 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 815 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 820 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 825 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 830 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 835 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 840 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 845 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 850 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 855 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 860 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 865 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 870 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 875 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 880 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 885 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 890 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 895 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 900 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 905 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 910 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 915 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 920 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 925 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 930 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 935 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 940 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 945 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 950 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 955 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 960 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 965 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 970 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 975 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 980 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 985 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 990 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 995 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 1000 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Fit: -8.535805733231442
Gen: 1001 X: [-0.4, -0.14, -0.09, -0.17, -0.03] Minimum Value: -8.535805733231442
###Markdown
Using Random Search:
###Code
function_val_epoch_random_search = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = 1000
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
h = 1e-7
X_SIZE = 5
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
child_chromosome = []
for gp1 , gp2 in zip(self.chromosome , par2.chromosome):
child_part_chromosome = []
# print(gp1)
for i in range(len(gp1)):
probability_of_crossover = random.random()
if (probability_of_crossover > 0.1):
# do crossover
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
else:
# do mutation
child_part_chromosome.append(self.mutate(i))
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
generation = 1
count = 1000
population = []
for _ in range(POPULATION_SIZE):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*POPULATION_SIZE)
new_generation.extend(population[:s])
s = int(0.90*POPULATION_SIZE)
for _ in range(s):
# Random Search
gnome = Individual.create_gnome()
new_generation.append(Individual(gnome))
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_random_search.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Gen: {} X: {}\tMinimimum Value: {}".format(generation, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_random_search)) , function_val_epoch_random_search , "k--")
###Output
Gen: 5 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 10 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 15 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 20 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 25 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 30 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 35 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 40 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 45 X: [0.4, -0.44, -0.3, 0.49, -0.06] Fit: -4.566818798489114
Gen: 50 X: [-1.02, -0.16, -0.26, -0.03, -0.37] Fit: -4.972204275695288
Gen: 55 X: [-1.02, -0.16, -0.26, -0.03, -0.37] Fit: -4.972204275695288
Gen: 60 X: [-1.02, -0.16, -0.26, -0.03, -0.37] Fit: -4.972204275695288
Gen: 65 X: [-1.02, -0.16, -0.26, -0.03, -0.37] Fit: -4.972204275695288
Gen: 70 X: [-1.02, -0.16, -0.26, -0.03, -0.37] Fit: -4.972204275695288
Gen: 75 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 80 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 85 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 90 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 95 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 100 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 105 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 110 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 115 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 120 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 125 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 130 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 135 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 140 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 145 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 150 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 155 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 160 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 165 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 170 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 175 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 180 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 185 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 190 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 195 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 200 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 205 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 210 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 215 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 220 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 225 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 230 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 235 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 240 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 245 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 250 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 255 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 260 X: [0.2, -0.32, -0.34, -0.01, -0.06] Fit: -5.003549236136931
Gen: 265 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 270 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 275 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 280 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 285 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 290 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 295 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 300 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 305 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 310 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 315 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 320 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 325 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 330 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 335 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 340 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 345 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 350 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 355 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 360 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 365 X: [-0.77, -0.26, -0.29, -0.31, 0.25] Fit: -5.064618362666242
Gen: 370 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 375 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 380 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 385 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 390 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 395 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 400 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 405 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 410 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 415 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 420 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 425 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 430 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 435 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 440 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 445 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 450 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 455 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 460 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 465 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 470 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 475 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 480 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 485 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 490 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 495 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 500 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 505 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 510 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 515 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 520 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 525 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 530 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 535 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 540 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 545 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 550 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 555 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 560 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 565 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 570 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 575 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 580 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 585 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 590 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 595 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 600 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 605 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 610 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 615 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 620 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 625 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 630 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 635 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 640 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 645 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 650 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 655 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 660 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 665 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 670 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 675 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 680 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 685 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 690 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 695 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 700 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 705 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 710 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 715 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 720 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 725 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 730 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 735 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 740 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 745 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 750 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 755 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 760 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 765 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 770 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 775 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 780 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 785 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 790 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 795 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 800 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 805 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 810 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 815 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 820 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 825 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 830 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 835 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 840 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 845 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 850 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 855 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 860 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 865 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 870 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 875 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 880 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 885 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 890 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 895 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 900 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 905 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 910 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 915 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 920 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 925 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 930 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 935 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 940 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 945 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 950 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 955 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 960 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 965 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 970 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 975 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 980 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 985 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 990 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 995 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 1000 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Fit: -6.075811704644703
Gen: 1001 X: [-1.73, -0.5, -0.23, -0.44, -0.25] Minimum Value: -6.075811704644703
###Markdown
Comparison:
###Code
plt.plot(range(len(function_val_epoch_elitism)) , function_val_epoch_elitism , "k--")
plt.plot(range(len(function_val_epoch_basic_genetic)) , function_val_epoch_basic_genetic , "b--")
plt.plot(range(len(function_val_epoch_diversity)) , function_val_epoch_diversity , "r--")
plt.plot(range(len(function_val_epoch_random_search)) , function_val_epoch_random_search , "g--")
plt.legend(["with elitism" , "basic-genetic" , "with diversity" , "random search"])
###Output
_____no_output_____
###Markdown
Clearly, Random Search is the worst approach for this kind of problem; it comes down to pure luck. Elitism: with different sample counts
###Code
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = [50 ,100 , 500 , 1000]
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
CROSSOVER_PROB = 0.1
h = 1e-7
X_SIZE = 5
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
global CROSSOVER_PROB
child_chromosome = []
for gp1 , gp2 in zip(self.chromosome , par2.chromosome):
child_part_chromosome = []
# print(gp1)
for i in range(len(gp1)):
probability_of_crossover = random.random()
if (probability_of_crossover > CROSSOVER_PROB):
# do crossover
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
else:
# do mutation
child_part_chromosome.append(self.mutate(i))
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
color = ["k--" , "b--" , "r--" , "g--" , "y--"]
legend = []
for N, c in zip(POPULATION_SIZE , color):
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
function_val_epoch_elitism = []
generation = 1
count = 1000
population = []
for _ in range(N):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*N)
new_generation.extend(population[:s])
s = int(0.90*N)
for _ in range(s):
parent1 = random.choice(population[:N//2])
parent2 = random.choice(population[:N//2])
child = parent1.mate(parent2)
new_generation.append(child)
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
#print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_elitism.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_elitism)) , function_val_epoch_elitism , c)
legend.append("N = {}".format(N))
plt.legend(legend)
###Output
Population: 50 X: [-0.71, -0.03, -0.05, -0.24, -0.05] Minimum Value: -7.423175616939729
Population: 100 X: [-0.15, -0.36, -0.03, -0.23, -0.04] Minimum Value: -8.358071308792296
Population: 500 X: [-0.08, -0.02, -0.08, -0.01, -0.13] Minimum Value: -8.776798931437748
Population: 1000 X: [-0.01, -0.11, -0.18, -0.23, -0.27] Minimum Value: -8.948559792465993
###Markdown
The result above is as expected for a genetic algorithm with elitism: the larger the population, the better diversity and fitness are maintained at the same time, which in principle yields (most of the time) a better result as the population size increases, given a suitable number of epochs. Basic Genetic Algorithm: with different sample counts
###Code
function_val_epoch_basic_genetic = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = [50, 100, 500, 1000]
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
CROSSOVER_PROB = 0.1
h = 1e-7
X_SIZE = 5
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
child_chromosome = []
global CROSSOVER_PROB
for gp1 , gp2 in zip(self.chromosome , par2.chromosome):
child_part_chromosome = []
# print(gp1)
for i in range(len(gp1)):
probability_of_crossover = random.random()
if (probability_of_crossover > CROSSOVER_PROB):
# do crossover
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
else:
# do mutation
child_part_chromosome.append(self.mutate(i))
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
color = ["k--" , "b--" , "r--" , "g--" , "y--"]
legend = []
for N, c in zip(POPULATION_SIZE , color):
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
function_val_epoch_basic_genetic = []
generation = 1
count = 1000
population = []
for _ in range(N):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*N)
new_generation.extend(population[:s])
s = int(0.90*N)
for _ in range(s):
# parents are drawn from the whole population (the elite copies above are still kept)
parent1 = random.choice(population[:N])
parent2 = random.choice(population[:N])
child = parent1.mate(parent2)
new_generation.append(child)
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
#print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_basic_genetic.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_basic_genetic)) , function_val_epoch_basic_genetic , c)
legend.append("N = {}".format(N))
plt.legend(legend)
###Output
Population: 50 X: [-0.32, -0.25, -0.26, -0.24, -0.03] Minimum Value: -7.314303773893771
Population: 100 X: [0.14, -0.04, -0.23, -0.35, -0.21] Minimum Value: -6.909335612272095
Population: 500 X: [-0.26, -0.56, -0.04, -0.24, -0.08] Minimum Value: -8.875460202474713
Population: 1000 X: [-0.08, -0.46, -0.16, -0.19, -0.05] Minimum Value: -7.921433360804886
###Markdown
Unlike the genetic algorithm with elitism, in the basic genetic algorithm the complete population gets the chance to mate (crossover and mutation), which may or may not improve the results as the population size increases, because a larger population also exposes us to the risk that elite members will not get a chance to mate (see the short calculation below).
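A quick sanity check of that last claim (a hypothetical back-of-the-envelope calculation, not part of the original experiments): if both parents of each of the `0.9*N` children are drawn uniformly from all `N` individuals, the probability that a given elite individual is never picked as a parent in one generation is roughly $(1-1/N)^{1.8N}\approx e^{-1.8}\approx 0.17$, essentially independent of $N$; enlarging the population therefore does not make it any less likely that an elite member sits a generation out.

```python
# Hypothetical illustration (not from the original notebook): chance that a given
# elite individual is never chosen as a parent in one generation, assuming both
# parents of each of the 0.9*N children are drawn uniformly from all N individuals.
for N in [50, 100, 500, 1000]:
    draws = 2 * int(0.9 * N)              # total parent selections per generation
    p_never_parent = (1 - 1 / N) ** draws
    print(N, round(p_never_parent, 3))    # stays around 0.16-0.17 for every N
```

Diversity: with different sample counts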
###Code
function_val_epoch_diversity = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = [50 , 100 , 500 , 1000]
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
h = 1e-7
X_SIZE = 5
DIVERSITY_PERCENT = 50
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
global DIVERSITY_PERCENT
tot = len(self.chromosome)
diversity_idx_arr = np.random.choice(range(tot) ,
replace=False ,
size=int(DIVERSITY_PERCENT*tot / 100))
child_chromosome = []
for j , gp1 , gp2 in zip(range(tot) , self.chromosome , par2.chromosome):
child_part_chromosome = []
for i in range(len(gp1)):
if (j*tot+i) in diversity_idx_arr:
child_part_chromosome.append(self.mutate(i))
else:
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
color = ["k--" , "b--" , "r--" , "g--" , "y--"]
legend = []
for N, c in zip(POPULATION_SIZE , color):
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
function_val_epoch_diversity = []
generation = 1
count = 1000
population = []
for _ in range(N):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*N)
new_generation.extend(population[:s])
s = int(0.90*N)
for _ in range(s):
# parents are drawn from the whole population (the elite copies above are still kept)
parent1 = random.choice(population[:N])
parent2 = random.choice(population[:N])
child = parent1.mate(parent2)
new_generation.append(child)
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
#print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_diversity.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_diversity)) , function_val_epoch_diversity , c)
legend.append("N = {}".format(N))
plt.legend(legend)
###Output
Population: 50 X: [-0.39, -0.2, -0.13, -0.01, -0.3] Minimum Value: -8.002973851495726
Population: 100 X: [-0.07, -0.16, -0.05, -0.22, -0.34] Minimum Value: -7.915400920312548
Population: 500 X: [-1.08, -0.1, -0.35, -0.02, -0.18] Minimum Value: -9.177451964813589
Population: 1000 X: [-0.05, -0.04, -0.11, -0.24, -0.04] Minimum Value: -9.030058205773974
###Markdown
These are by far the best results we have obtained. Just like elitism, this approach is also affected by the population size, and the overall trend is that performance (on average) improves as the population grows. Random Search: with different sample counts
###Code
function_val_epoch_random_search = []
RANGE_OF_X = [-2.04 , 2.04]
POPULATION_SIZE = [50, 100, 500, 1000]
GENES = ["01" , "012" , "0123456789", "0123456789"]
TARGET_LENGTH = 4
h = 1e-7
X_SIZE = 5
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
def f1(X):
return np.sum(np.square(X))
def f2(X):
return np.sum(np.floor(X))
def f3(X):
return np.sum(np.multiply(np.arange(len(X)) , np.power(X , 4) ) ) + np.random.standard_normal(1)[0]
def g(X):
return f1(X) + f2(X) + f3(X)
def determine_target_length(range_of_x):
n = max(range_of_x)
return int(np.ceil(np.log(n)/np.log(2)))
def getNum(l):
num_str=""
if l[0] == "1":
num_str+="-"
num_str += "{}.{}{}".format(l[1] , l[2], l[3])
return float(num_str)
def inRange(l , range_of_x):
num = getNum(l)
return min(range_of_x)<= num <= max(range_of_x)
class Individual(object):
def __init__(self,chromosome):
self.chromosome = chromosome
self.fitness = self.calculate_fitness()
@classmethod
def mutate(self , digit_num:int):
global GENES
return random.choice(GENES[digit_num])
@classmethod
def create_gnome(self):
global TARGET_LENGTH
global RANGE_OF_X
global X_SIZE
gnome = []
for i in range(X_SIZE):
while True:
l = [self.mutate(i) for i in range(TARGET_LENGTH)]
if (inRange(l , RANGE_OF_X)):
gnome.append(l)
break
return gnome
def mate(self , par2):
child_chromosome = []
for gp1 , gp2 in zip(self.chromosome , par2.chromosome):
child_part_chromosome = []
# print(gp1)
for i in range(len(gp1)):
probability_of_crossover = random.random()
if (probability_of_crossover > 0.1):
# do crossover
probability_of_p1_gene = random.random()
if probability_of_p1_gene > 0.5:
child_part_chromosome.append(gp1[i])
else:
child_part_chromosome.append(gp2[i])
else:
# do mutation
child_part_chromosome.append(self.mutate(i))
child_chromosome.append(child_part_chromosome)
return Individual(child_chromosome)
def calculate_fitness(self):
global TARGET_LENGTH
X = []
for s in self.chromosome:
#print(s)
#s = ''.join(map(str, self.chromosome))
x = getNum(s)
X.append(x)
return g(X)
global POPULATION_SIZE
global TARGET_LENGTH
global RANGE_OF_X
# TARGET_LENGTH = determine_target_length(RANGE_OF_X)
color = ["k--" , "b--" , "r--" , "g--" , "y--"]
legend = []
for N, c in zip(POPULATION_SIZE , color):
np.random.seed(np.random.randint(low=0 , high=100))
random.seed(np.random.randint(low=0 , high=100))
function_val_epoch_random_search = []
generation = 1
count = 1000
population = []
for _ in range(N):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while count!=0:
count-=1
population = sorted(population , key = lambda x:x.fitness)
# performing elitism
new_generation = []
s = int(0.10*N)
new_generation.extend(population[:s])
s = int(0.90*N)
for _ in range(s):
# Random Search
gnome = Individual.create_gnome()
new_generation.append(Individual(gnome))
if generation % 5 ==0:
population = new_generation
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
# print("Gen: {} X: {} Fit: {}".format(generation, get_num_arr, population[0].fitness))
function_val_epoch_random_search.append(population[0].fitness)
generation += 1
get_num_arr = []
for l in population[0].chromosome:
get_num_arr.append(getNum(l))
print("Population: {} X: {}\tMinimimum Value: {}".format(N, get_num_arr, population[0].fitness))
plt.plot(range(len(function_val_epoch_random_search)) , function_val_epoch_random_search , c)
legend.append("N = {}".format(N))
plt.legend(legend)
###Output
Population: 50 X: [-1.07, -0.59, -0.18, -0.09, 0.61] Minimum Value: -4.150216119472115
Population: 100 X: [-1.34, -0.38, -0.14, -0.11, 0.34] Minimum Value: -5.247213375510198
Population: 500 X: [-0.11, -0.57, -0.12, -0.1, 0.18] Minimum Value: -6.707476516282099
Population: 1000 X: [-1.32, -0.42, -0.01, -0.27, -0.19] Minimum Value: -5.635770854849166
|
examples/Tutorial 4.ipynb | ###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Tutorial 4: Bond Portfolio Optimization and Immunization If you want to know more about the mathematics behind this model, you can check the following posts: __[Valorización de Bonos con Python parte II](https://financioneroncios.wordpress.com/2018/05/23/valorizacion-de-bonos-con-python-parte-ii/)__, __[Fixed Income Portfolio Optimization with Python](https://financioneroncios.wordpress.com/2020/01/09/fixed-income-portfolio-optimization-with-python/)__ 1. Uploading the data:
###Code
########################################################################
# Uploading Data
########################################################################
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings("ignore")
# Interest Rates Data
kr = pd.read_excel('KeyRates.xlsx', engine='openpyxl', index_col=0, header=0)/100
# Prices Data
assets = pd.read_excel('Assets.xlsx', engine='openpyxl', index_col=0, header=0)
# Find common dates
a = pd.merge(left=assets, right=kr, how='inner', on='Date')
dates = a.index
# Calculate interest rates returns
kr_returns = kr.loc[dates,:].sort_index().diff().dropna()
kr_returns.sort_index(ascending=False, inplace=True)
# List of instruments
equity = ['APA','CMCSA','CNP','HPQ','PSA','SEE','ZION']
bonds = ['PEP11900D031', 'PEP13000D012', 'PEP13000M088',
'PEP23900M103','PEP70101M530','PEP70101M571',
'PEP70310M156']
# Calculate assets returns
assets_returns = assets.loc[dates, equity + bonds]
assets_returns = assets_returns.sort_index().pct_change().dropna()
assets_returns.sort_index(ascending=False, inplace=True)
# Show tables
display(kr_returns.head().style.format("{:.4%}"))
display(assets_returns.head().style.format("{:.4%}"))
########################################################################
# Uploading Duration and Convexity Matrixes
########################################################################
durations = pd.read_excel('durations.xlsx', index_col=0, header=0)
convexity = pd.read_excel('convexity.xlsx', index_col=0, header=0)
print('Durations Matrix')
display(durations.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
print('')
print('Convexity Matrix')
display(convexity.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
Durations Matrix
###Markdown
2. Estimating Mean Variance Portfolio 2.1 Building the loadings matrix and risk factors returns. This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
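As background (this is standard fixed-income math, stated here rather than taken from the tutorial text), the loadings `[-durations, 0.5*convexity]` paired with the factors (the key-rate changes and their squares) come from the second-order approximation of a bond's return with respect to key-rate moves:

$$
\frac{\Delta P_i}{P_i} \;\approx\; \sum_{k}\Big(-D_{i,k}\,\Delta y_k \;+\; \tfrac{1}{2}\,C_{i,k}\,(\Delta y_k)^2\Big),
$$

where $D_{i,k}$ and $C_{i,k}$ are the key-rate duration and convexity of bond $i$ with respect to key rate $k$, and $\Delta y_k$ is the change in that key rate. Stacking these coefficients over all bonds gives the loadings matrix $B$ built in the next cell, which the factor model then uses to estimate the assets' expected returns and covariance.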
###Code
########################################################################
# Building The Loadings Matrix
########################################################################
loadings = pd.concat([-1.0 * durations, 0.5 * convexity], axis = 1)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
########################################################################
# Building the risk factors returns matrix
########################################################################
kr_returns_2 = kr_returns ** 2
cols = loadings.columns
X = pd.concat([kr_returns, kr_returns_2], axis=1)
X.columns = cols
display(X.head().style.format("{:.4%}"))
########################################################################
# Building the asset returns matrix
########################################################################
Y = assets_returns[loadings.index]
display(Y.head())
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
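For reference, the problem solved below with `obj='Sharpe'` is the classical maximum Sharpe ratio optimization (standard mean-variance formulation; the long-only, fully invested constraint set shown here is an assumption based on the library defaults, not spelled out in the text):

$$
\max_{w}\;\frac{\mu^{T}w - r_f}{\sqrt{w^{T}\Sigma\, w}}
\qquad \text{s.t.} \qquad \sum_{i} w_i = 1,\; w_i \ge 0,
$$

where $\mu$ and $\Sigma$ are the expected returns and covariance estimated through the factor model above and $r_f$ is the risk-free rate set below.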
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
import riskfolio as rp
# Building the portfolio object
port = rp.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3. Optimization with Key Rate Durations Constraints This part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 3.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:9,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:9].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Key Rate Durations In this example we are going to put a limit on the maximum duration that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be lower than -2, -2 and -3.
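Under the sign convention used here (each bond's loading on a key-rate factor is minus its key-rate duration), every row of this table translates into one linear inequality on the portfolio weights:

$$
\sum_{i} w_i\,\big(-D_{i,k}\big) \;\le\; b_k,
\qquad (k,\,b_k) \in \{(1800,-2),\ (3600,-2),\ (7200,-3)\},
$$

and `rp.factors_constraints` turns this DataFrame together with the loadings matrix into the matrices passed to the optimizer through `ainequality`/`binequality` in the next step (the helper handles the inequality signs internally).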
###Code
########################################################################
# Creating durations constraints
########################################################################
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],
'Relative Factor': ['', '', '']}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
3.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate duration constraints
########################################################################
C, D = rp.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
We can see that with these constraints the portfolio weights are spread more evenly across all assets. To show that the portfolio fulfills all the constraints, we will calculate the portfolio's sensitivities.
###Code
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
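# Added note: this is loadings transposed times the weight vector, i.e. the portfolio-level
# exposure to each risk factor; np.matrix is legacy NumPy, and loadings.T @ w gives the same result.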
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4. Estimating Mean Variance Portfolio 4.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building the risk factors returns matrix
########################################################################
# Removing bond returns from factors matrix
cols = assets_returns.columns
cols = ~cols.isin(loadings.index)
cols = assets_returns.columns[cols]
# Other approach for removing bond returns from factors matrix
cols = [col for col in assets_returns.columns if col not in loadings.index]
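# Added note: both approaches above select the asset columns that are not in the loadings index
# (the non-bond assets); the list comprehension simply overwrites the mask-based result.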
X = pd.concat([assets_returns[cols], X], axis=1)
display(X.head())
########################################################################
# Building the asset returns matrix
########################################################################
Y = pd.concat([assets_returns[cols], Y], axis=1)
display(Y.head())
########################################################################
# Building The Loadings Matrix
########################################################################
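# Added note: each non-bond asset gets an identity loading of 1 on itself (it acts as its own
# risk factor), while the bonds keep their key rate duration/convexity loadings; fillna(0)
# zeroes out all remaining cross entries.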
a = np.identity(len(cols))
a = pd.DataFrame(a, index=cols, columns=cols)
loadings = pd.concat([a, loadings], axis = 1)
loadings.fillna(0, inplace=True)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
port = rp.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5. Optimization of Equity and Bond Portfolio with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 5.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:16,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:16].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating key rate durations constraints
########################################################################
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],
'Relative Factor': ['', '', '']}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
5.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate durations constraints
########################################################################
C, D = rp.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Tutorial 4: Bond Portfolio Optimization and ImmunizationIf you want to know more about the mathematics behind this model you can check the following posts: __[Valorización de Bonos con Python parte II](https://financioneroncios.wordpress.com/2018/05/23/valorizacion-de-bonos-con-python-parte-ii/)__, __[Fixed Income Portfolio Optimization with Python](https://financioneroncios.wordpress.com/2020/01/09/fixed-income-portfolio-optimization-with-python/)__ 1. Uploading the data:
###Code
########################################################################
# Uploading Data
########################################################################
import pandas as pd
import numpy as np
# Interest Rates Data
kr = pd.read_excel('KeyRates.xlsx', engine='openpyxl', index_col=0, header=0)/100
# Prices Data
assets = pd.read_excel('Assets.xlsx', engine='openpyxl', index_col=0, header=0)
# Find common dates
a = pd.merge(left=assets, right=kr, how='inner', on='Date')
dates = a.index
# Calculate interest rates returns
kr_returns = kr.loc[dates,:].sort_index().diff().dropna()
kr_returns.sort_index(ascending=False, inplace=True)
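# Added note: the key rate factor 'returns' are absolute rate changes (first differences),
# whereas the asset returns below are percentage price changes.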
# List of instruments
equity = ['APA','CMCSA','CNP','HPQ','PSA','SEE','ZION']
bonds = ['PEP11900D031', 'PEP13000D012', 'PEP13000M088',
'PEP23900M103','PEP70101M530','PEP70101M571',
'PEP70310M156']
# Calculate assets returns
assets_returns = assets.loc[dates, equity + bonds]
assets_returns = assets_returns.sort_index().pct_change().dropna()
assets_returns.sort_index(ascending=False, inplace=True)
# Show tables
display(kr_returns.head().style.format("{:.4%}"))
display(assets_returns.head().style.format("{:.4%}"))
########################################################################
# Uploading Duration and Convexity Matrixes
########################################################################
durations = pd.read_excel('durations.xlsx', index_col=0, header=0)
convexity = pd.read_excel('convexity.xlsx', index_col=0, header=0)
print('Durations Matrix')
display(durations.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
print('')
print('Convexity Matrix')
display(convexity.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
Durations Matrix
###Markdown
2. Estimating Mean Variance Portfolio 2.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
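A short aside added for clarity (the symbols $D_i$, $C_i$ and $\Delta r_i$ are introduced here and are not part of the original text): the construction below follows the second-order approximation of a bond's return with respect to the key rates, $$\frac{\Delta P}{P} \approx -\sum_i D_i\,\Delta r_i + \frac{1}{2}\sum_i C_i\,(\Delta r_i)^2,$$ which is why the loadings concatenate $-1.0\times$ durations with $0.5\times$ convexities, and the factor returns concatenate the key rate changes with their squares.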
###Code
########################################################################
# Building The Loadings Matrix
########################################################################
loadings = pd.concat([-1.0 * durations, 0.5 * convexity], axis = 1)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
########################################################################
# Building the risk factors returns matrix
########################################################################
kr_returns_2 = kr_returns ** 2
cols = loadings.columns
X = pd.concat([kr_returns, kr_returns_2], axis=1)
X.columns = cols
display(X.head().style.format("{:.4%}"))
########################################################################
# Building the asset returns matrix
########################################################################
Y = assets_returns[loadings.index]
display(Y.head())
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3. Optimization with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 3.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:9,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:9].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating durations constraints
########################################################################
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],
'Relative Factor': ['', '', '']}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
3.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate duration constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
We can see that with these constraints the portfolio weights are spread more evenly across all assets. To show that the portfolio fulfills all the constraints, we will calculate the portfolio's sensitivities.
###Code
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4. Estimating Mean Variance Portfolio 4.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building the risk factors returns matrix
########################################################################
# Removing bond returns from factors matrix
cols = assets_returns.columns
cols = ~cols.isin(loadings.index)
cols = assets_returns.columns[cols]
# Other approach for removing bond returns from factors matrix
cols = [col for col in assets_returns.columns if col not in loadings.index]
X = pd.concat([assets_returns[cols], X], axis=1)
display(X.head())
########################################################################
# Building the asset returns matrix
########################################################################
Y = pd.concat([assets_returns[cols], Y], axis=1)
display(Y.head())
########################################################################
# Building The Loadings Matrix
########################################################################
a = np.identity(len(cols))
a = pd.DataFrame(a, index=cols, columns=cols)
loadings = pd.concat([a, loadings], axis = 1)
loadings.fillna(0, inplace=True)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5. Optimization of Equity and Bond Portfolio with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 5.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:16,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:16].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating key rate durations constraints
########################################################################
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],
'Relative Factor': ['', '', '']}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
5.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate durations constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part IV: Bond Portfolio Optimization and ImmunizationIf you want to know more about the mathematics behind this model you can check the following posts: __[Valorización de Bonos con Python parte II](https://financioneroncios.wordpress.com/2018/05/23/valorizacion-de-bonos-con-python-parte-ii/)__, __[Fixed Income Portfolio Optimization with Python](https://financioneroncios.wordpress.com/2020/01/09/fixed-income-portfolio-optimization-with-python/)__ 1. Uploading the data:
###Code
########################################################################
# Uploading Data
########################################################################
import pandas as pd
import numpy as np
# Interest Rates Data
kr = pd.read_excel('KeyRates.xlsx', index_col=0, header=0)/100
# Prices Data
assets = pd.read_excel('Assets.xlsx', index_col=0, header=0)
# Find common dates
a = pd.merge(left=assets, right=kr, how='inner', on='Date')
dates = a.index
# Calculate interest rates returns
kr_returns = kr.loc[dates,:].sort_index().diff().dropna()
kr_returns.sort_index(ascending=False, inplace=True)
# Calculate assets returns
assets_returns = assets.loc[dates,:].sort_index().pct_change().dropna()
assets_returns.sort_index(ascending=False, inplace=True)
# Show tables
display(kr_returns.head().style.format("{:.4%}"))
display(assets_returns.head().style.format("{:.4%}"))
########################################################################
# Uploading Duration and Convexity Matrixes
########################################################################
durations = pd.read_excel('durations.xlsx', index_col=0, header=0)
convexity = pd.read_excel('convexity.xlsx', index_col=0, header=0)
print('Durations Matrix')
display(durations.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
print('')
print('Convexity Matrix')
display(convexity.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
Durations Matrix
###Markdown
2. Estimating Mean Variance Portfolio 2.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building The Loadings Matrix
########################################################################
loadings = pd.concat([-1.0 * durations, 0.5 * convexity], axis = 1)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
########################################################################
# Building the risk factors returns matrix
########################################################################
kr_returns_2 = kr_returns ** 2
cols = loadings.columns
X = pd.concat([kr_returns, kr_returns_2], axis=1)
X.columns = cols
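# Added note: renaming aligns X's columns with the loadings columns, so each rate change sits
# under its duration loading and each squared rate change under the corresponding convexity loading.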
display(X.head().style.format("{:.4%}"))
########################################################################
# Building the asset returns matrix
########################################################################
Y = assets_returns[loadings.index]
display(Y.head())
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3. Optimization with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 3.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:9,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:9].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating durations constraints
########################################################################
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
3.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate duration constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
We can see that with these constraints the portfolio weights are spread more evenly across all assets. To show that the portfolio fulfills all the constraints, we will calculate the portfolio's sensitivities.
###Code
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4. Estimating Mean Variance Portfolio 4.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building the risk factors returns matrix
########################################################################
# Removing bond returns from factors matrix
cols = assets_returns.columns
cols = ~cols.isin(loadings.index)
cols = assets_returns.columns[cols]
# Other approach for removing bond returns from factors matrix
cols = [col for col in assets_returns.columns if col not in loadings.index]
X = pd.concat([assets_returns[cols], X], axis=1)
display(X.head())
########################################################################
# Building the asset returns matrix
########################################################################
Y = pd.concat([assets_returns[cols], Y], axis=1)
display(Y.head())
########################################################################
# Building The Loadings Matrix
########################################################################
a = np.identity(len(cols))
a = pd.DataFrame(a, index=cols, columns=cols)
loadings = pd.concat([a, loadings], axis = 1)
loadings.fillna(0, inplace=True)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5. Optimization of Equity and Bond Portfolio with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 5.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:16,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:16].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating key rate durations constraints
########################################################################
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
5.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate durations constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.net)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part IV: Bond Portfolio Optimization and ImmunizationIf you want to know more about the mathematics behind this model you can check the following posts: __[Valorización de Bonos con Python parte II](https://financioneroncios.wordpress.com/2018/05/23/valorizacion-de-bonos-con-python-parte-ii/)__, __[Fixed Income Portfolio Optimization with Python](https://financioneroncios.wordpress.com/2020/01/09/fixed-income-portfolio-optimization-with-python/)__ 1. Uploading the data:
###Code
########################################################################
# Uploading Data
########################################################################
import pandas as pd
import numpy as np
# Interest Rates Data
kr = pd.read_excel('KeyRates.xlsx', index_col=0, header=0)/100
# Prices Data
assets = pd.read_excel('Assets.xlsx', index_col=0, header=0)
# Find common dates
a = pd.merge(left=assets, right=kr, how='inner', on='Date')
dates = a.index
# Calculate interest rates returns
kr_returns = kr.loc[dates,:].sort_index().diff().dropna()
kr_returns.sort_index(ascending=False, inplace=True)
# Calculate assets returns
assets_returns = assets.loc[dates,:].sort_index().pct_change().dropna()
assets_returns.sort_index(ascending=False, inplace=True)
# Show tables
display(kr_returns.head().style.format("{:.4%}"))
display(assets_returns.head().style.format("{:.4%}"))
########################################################################
# Uploading Duration and Convexity Matrixes
########################################################################
durations = pd.read_excel('durations.xlsx', index_col=0, header=0)
convexity = pd.read_excel('convexity.xlsx', index_col=0, header=0)
print('Durations Matrix')
display(durations.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
print('')
print('Convexity Matrix')
display(convexity.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
Durations Matrix
###Markdown
2. Estimating Mean Variance Portfolio 2.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building The Loadings Matrix
########################################################################
loadings = pd.concat([-1.0 * durations, 0.5 * convexity], axis = 1)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
########################################################################
# Building the risk factors returns matrix
########################################################################
kr_returns_2 = kr_returns ** 2
cols = loadings.columns
X = pd.concat([kr_returns, kr_returns_2], axis=1)
X.columns = cols
display(X.head().style.format("{:.4%}"))
########################################################################
# Building the asset returns matrix
########################################################################
Y = assets_returns[loadings.index]
display(Y.head())
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3. Optimization with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 3.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:9,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:9].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating durations constraints
########################################################################
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
3.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate duration constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
We can see that with these constraints the portfolio weights are spread more evenly across all assets. To show that the portfolio fulfills all the constraints, we will calculate the portfolio's sensitivities.
###Code
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4. Estimating Mean Variance Portfolio 4.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building the risk factors returns matrix
########################################################################
# Removing bond returns from factors matrix
cols = assets_returns.columns
cols = ~cols.isin(loadings.index)
cols = assets_returns.columns[cols]
# Other approach for removing bond returns from factors matrix
cols = [col for col in assets_returns.columns if col not in loadings.index]
X = pd.concat([assets_returns[cols], X], axis=1)
display(X.head())
########################################################################
# Building the asset returns matrix
########################################################################
Y = pd.concat([assets_returns[cols], Y], axis=1)
display(Y.head())
########################################################################
# Building The Loadings Matrix
########################################################################
a = np.identity(len(cols))
a = pd.DataFrame(a, index=cols, columns=cols)
loadings = pd.concat([a, loadings], axis = 1)
loadings.fillna(0, inplace=True)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5. Optimization of Equity and Bond Portfolio with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 5.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:16,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:16].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating key rate durations constraints
########################################################################
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
5.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate durations constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
Riskfolio-Lib Tutorial: __[Financionerioncios](https://financioneroncios.wordpress.com)____[Orenji](https://www.orenj-i.com)____[Riskfolio-Lib](https://riskfolio-lib.readthedocs.io/en/latest/)____[Dany Cajas](https://www.linkedin.com/in/dany-cajas/)__ Part IV: Bond Portfolio Optimization and ImmunizationIf you want to know more about the mathematics behind this model you can check the following posts: __[Valorización de Bonos con Python parte II](https://financioneroncios.wordpress.com/2018/05/23/valorizacion-de-bonos-con-python-parte-ii/)__, __[Fixed Income Portfolio Optimization with Python](https://financioneroncios.wordpress.com/2020/01/09/fixed-income-portfolio-optimization-with-python/)__ 1. Uploading the data:
###Code
########################################################################
# Uploading Data
########################################################################
import pandas as pd
import numpy as np
# Interest Rates Data
kr = pd.read_excel('KeyRates.xlsx', index_col=0, header=0)/100
# Prices Data
assets = pd.read_excel('Assets.xlsx', index_col=0, header=0)
# Find common dates
a = pd.merge(left=assets, right=kr, how='inner', on='Date')
dates = a.index
# Calculate interest rates returns
kr_returns = kr.loc[dates,:].sort_index().diff().dropna()
kr_returns.sort_index(ascending=False, inplace=True)
# Calculate assets returns
assets_returns = assets.loc[dates,:].sort_index().pct_change().dropna()
assets_returns.sort_index(ascending=False, inplace=True)
# Show tables
display(kr_returns.head().style.format("{:.4%}"))
display(assets_returns.head().style.format("{:.4%}"))
########################################################################
# Uploading Duration and Convexity Matrixes
########################################################################
durations = pd.read_excel('durations.xlsx', index_col=0, header=0)
convexity = pd.read_excel('convexity.xlsx', index_col=0, header=0)
print('Durations Matrix')
display(durations.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
print('')
print('Convexity Matrix')
display(convexity.head().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
Durations Matrix
###Markdown
2. Estimating Mean Variance Portfolio 2.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building The Loadings Matrix
########################################################################
loadings = pd.concat([-1.0 * durations, 0.5 * convexity], axis = 1)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
########################################################################
# Building the risk factors returns matrix
########################################################################
kr_returns_2 = kr_returns ** 2
cols = loadings.columns
X = pd.concat([kr_returns, kr_returns_2], axis=1)
X.columns = cols
display(X.head().style.format("{:.4%}"))
########################################################################
# Building the asset returns matrix
########################################################################
Y = assets_returns[loadings.index]
display(Y.head())
###Output
_____no_output_____
###Markdown
2.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
import riskfolio.Portfolio as pf
# Building the portfolio object
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3. Optimization with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 3.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:9,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:9].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
3.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating durations constraints
########################################################################
import riskfolio.ConstraintsFunctions as cf
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
3.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate duration constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
We can see that with these constraints the portfolio weights are spread more evenly across all assets. To show that the portfolio fulfills all the constraints, we will calculate the portfolio's sensitivities.
###Code
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4. Estimating Mean Variance Portfolio 4.1 Building the loadings matrix and risk factors returns.This part shows how to build a personalized loadings matrix that will be used by __Riskfolio-Lib__ to calculate the expected returns and covariance matrix.
###Code
########################################################################
# Building the risk factors returns matrix
########################################################################
# Removing bond returns from factors matrix
cols = assets_returns.columns
cols = ~cols.isin(loadings.index)
cols = assets_returns.columns[cols]
# Other approach for removing bond returns from factors matrix
cols = [col for col in assets_returns.columns if col not in loadings.index]
X = pd.concat([assets_returns[cols], X], axis=1)
display(X.head())
########################################################################
# Building the asset returns matrix
########################################################################
Y = pd.concat([assets_returns[cols], Y], axis=1)
display(Y.head())
########################################################################
# Building The Loadings Matrix
########################################################################
a = np.identity(len(cols))
a = pd.DataFrame(a, index=cols, columns=cols)
loadings = pd.concat([a, loadings], axis = 1)
loadings.fillna(0, inplace=True)
display(loadings.style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
4.2 Calculating the portfolio that maximizes Sharpe ratio.
###Code
########################################################################
# Calculating optimum portfolio
########################################################################
port = pf.Portfolio(returns=Y)
# Select method and estimate input parameters:
method_mu='hist' # Method to estimate expected returns based on historical data.
method_cov='hist' # Method to estimate covariance matrix based on historical data.
port.assets_stats(method_mu=method_mu, method_cov=method_cov, d=0.94)
port.factors = X
port.factors_stats(method_mu=method_mu, method_cov=method_cov, d=0.94, B=loadings)
# Estimate optimal portfolio:
model='FM' # Factor Model
rm = 'MV' # Risk measure used, this time will be variance
obj = 'Sharpe' # Objective function, could be MinRisk, MaxRet, Utility or Sharpe
hist = False # Use historical scenarios for risk measures that depend on scenarios
rf = 0 # Risk free rate
l = 0 # Risk aversion factor, only useful when obj is 'Utility'
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5. Optimization of Equity and Bond Portfolio with Key Rate Durations ConstraintsThis part shows how __Riskfolio-Lib__ can be used to build immunized portfolios using __duration matching__ and __convexity matching__; however, the example only uses duration matching. More information about immunization theory can be found in this __[link](https://www.investopedia.com/terms/i/immunization.asp)__. 5.1 Statistics of Risk Factors
###Code
########################################################################
# Displaying factors statistics
########################################################################
table = pd.concat([loadings.min(), loadings.max()], axis=1)
table.columns = ['min', 'max']
display(table.iloc[:16,:].style.format("{:.4f}").background_gradient(cmap='YlGn'))
display(X.iloc[:,:16].corr().style.format("{:.4f}").background_gradient(cmap='YlGn'))
###Output
_____no_output_____
###Markdown
5.2 Creating Constraints on Key Rate DurationsIn this example we are going to put a limit on the maximum duration exposure that the portfolio can reach. The key rate durations of the portfolio for 1800, 3600 and 7200 days will be kept at or below -2, -2 and -3, respectively.
###Code
########################################################################
# Creating key rate durations constraints
########################################################################
constraints = {'Disabled': [False, False, False],
'Factor': ['R 1800', 'R 3600', 'R 7200'],
'Sign': ['<=', '<=', '<='],
'Value': [-2, -2, -3],}
constraints = pd.DataFrame(constraints)
display(constraints)
###Output
_____no_output_____
###Markdown
5.3 Estimating Optimum Portfolio with Key Rate Durations Constraints
###Code
########################################################################
# Estimating optimum portfolio with key rate durations constraints
########################################################################
C, D = cf.factors_constraints(constraints, loadings)
port.ainequality = C
port.binequality = D
w = port.optimization(model=model, rm=rm, obj=obj, rf=rf, l=l, hist=hist)
display(w.style.format("{:.4%}").background_gradient(cmap='YlGn'))
########################################################################
# Calculating portfolio sensitivities for each risk factor
########################################################################
d_ = np.matrix(loadings).T * np.matrix(w)
d_ = pd.DataFrame(d_, index=loadings.columns, columns=['Values'])
display(d_.style.format("{:.4f}").background_gradient(cmap='YlGn'))
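########################################################################
# Added sketch (not in the original notebook): quick check that the
# optimized weights respect the key rate duration limits defined above.
# Assumes 'R 1800', 'R 3600' and 'R 7200' are columns of the loadings
# matrix, as referenced in the constraints DataFrame.
########################################################################
for factor, limit in zip(['R 1800', 'R 3600', 'R 7200'], [-2, -2, -3]):
    exposure = d_.loc[factor, 'Values']
    print("{}: exposure = {:.4f} (limit <= {})".format(factor, exposure, limit))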
###Output
_____no_output_____ |
notebooks/text-autoencoders_aae_train.ipynb | ###Markdown
DATA
###Code
!bash download_data.sh
###Output
--2021-07-25 06:11:06-- http://people.csail.mit.edu/tianxiao/data/yelp.zip
Resolving people.csail.mit.edu (people.csail.mit.edu)... 128.30.2.133
Connecting to people.csail.mit.edu (people.csail.mit.edu)|128.30.2.133|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3676642 (3.5M) [application/zip]
Saving to: ‘yelp.zip’
yelp.zip 100%[===================>] 3.51M --.-KB/s in 0.1s
2021-07-25 06:11:06 (33.5 MB/s) - ‘yelp.zip’ saved [3676642/3676642]
Archive: yelp.zip
creating: yelp/
creating: yelp/tense/
inflating: yelp/tense/valid.past
inflating: yelp/tense/valid.present
inflating: yelp/tense/test.past
inflating: yelp/tense/test.present
creating: yelp/sentiment/
inflating: yelp/sentiment/100.neg
inflating: yelp/sentiment/100.pos
inflating: yelp/sentiment/1000.neg
inflating: yelp/sentiment/1000.pos
inflating: yelp/test.txt
inflating: yelp/train.txt
inflating: yelp/valid.txt
creating: yelp/interpolate/
inflating: yelp/interpolate/example.long
inflating: yelp/interpolate/example.short
--2021-07-25 06:11:07-- http://people.csail.mit.edu/tianxiao/data/yahoo.zip
Resolving people.csail.mit.edu (people.csail.mit.edu)... 128.30.2.133
Connecting to people.csail.mit.edu (people.csail.mit.edu)|128.30.2.133|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11962156 (11M) [application/zip]
Saving to: ‘yahoo.zip’
yahoo.zip 100%[===================>] 11.41M 56.7MB/s in 0.2s
2021-07-25 06:11:07 (56.7 MB/s) - ‘yahoo.zip’ saved [11962156/11962156]
Archive: yahoo.zip
creating: yahoo/
inflating: yahoo/test.txt
inflating: yahoo/train.txt
inflating: yahoo/valid.txt
###Markdown
Training the AAE model for 30 epochs
###Code
NUM_EPOCHS = 30
!python train.py --epochs $NUM_EPOCHS --train data/yelp/train.txt --valid data/yelp/valid.txt --model_type aae --lambda_adv 10 --noise 0.3,0,0,0 --save-dir checkpoints/yelp/daae
!zip -r /content/text-autoencoders/checkpoints.zip /content/text-autoencoders/checkpoints/
!cp /content/text-autoencoders/checkpoints.zip /content/drive/MyDrive/checkpoints
###Output
_____no_output_____ |
sessions/05_weather/exploration.ipynb | ###Markdown
Temperature in Würzburg
###Code
library(tidyverse)
library(lubridate)
theme_set(theme_light())
data <- read_csv("data/data_OBS_DEU_PT1H_T2M.csv")
head(data)
station <- read_csv("data/sdo_OBS_DEU_PT1H_T2M.csv")
station
data <- data %>% mutate(SDO_ID = if_else(SDO_ID==2600, "Kitzingen", "Würzburg"))
###Output
_____no_output_____
###Markdown
**This time we will only look at data from Würzburg**
###Code
data <- data %>% filter(SDO_ID=="Würzburg")
###Output
_____no_output_____
###Markdown
Questions- What was the warmest/coldest temperature ever measured in Würzburg, and when?- When was the warmest/coldest day/week/month/year in Würzburg?- What was the most extreme temperature difference within 24h?- Is there a long-term trend in the temperature data over time?- Is there any indication that the seasons are shifting? Warmest/coldest temperature
###Code
data %>% arrange(-Wert) %>% head
###Output
_____no_output_____
###Markdown
The hottest temperature ever measured in Würzburg was 39.3°C, on 7 August 2015
###Code
data %>% arrange(Wert) %>% head
###Output
_____no_output_____
###Markdown
The coldest temperature ever measured in Würzburg was -23.4°C, on 10 February 1956 Warmest/coldest day/week/month/year Day
###Code
data %>%
mutate(tag = floor_date(Zeitstempel, unit="day")) %>%
group_by(tag) %>%
summarize(Wert=mean(Wert)) %>%
arrange(-Wert) %>%
head
###Output
_____no_output_____
###Markdown
The hottest day was 7 August 2015, with a daily mean temperature of 30.3°C.
###Code
data %>%
mutate(tag = floor_date(Zeitstempel, unit="day")) %>%
group_by(tag) %>%
summarize(Wert=mean(Wert)) %>%
arrange(Wert) %>%
head
###Output
_____no_output_____
###Markdown
The coldest day was 1 February 1956, with a daily mean temperature of -18.2°C. Largest temperature difference within 24h First within a single calendar day (i.e. from 0:00 to 24:00)
###Code
data %>%
mutate(tag = floor_date(Zeitstempel, unit="day")) %>%
group_by(tag) %>%
summarize(span=diff(range(Wert))) %>%
arrange(-span) %>%
head
###Output
_____no_output_____
###Markdown
On 14 January 1968 there was a temperature difference of 24.9°C within a single day, wow!
###Code
data %>% filter(year(Zeitstempel)==1968, month(Zeitstempel)==1, day(Zeitstempel)==14) %>% ggplot(aes(Zeitstempel, Wert)) + geom_line()
###Output
_____no_output_____
###Markdown
Now properly, using a 24h sliding window
###Code
library(slider)
data %>%
mutate(span_last_24h = slide_index_dbl(Wert, Zeitstempel, ~diff(range(.x)), .before = lubridate::hours(23), .complete=TRUE)) %>%
arrange(-span_last_24h)
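# Added sketch: the same 24h sliding-window spans sorted ascending, to inspect the
# suspiciously small differences discussed below (e.g. the 10 September 2019 artifact).
data %>%
    mutate(span_last_24h = slide_index_dbl(Wert, Zeitstempel, ~diff(range(.x)), .before = lubridate::hours(23), .complete=TRUE)) %>%
    arrange(span_last_24h)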
data %>% filter(Zeitstempel<ymd("1979-01-02"), Zeitstempel>=ymd("1978-12-31")) %>% ggplot(aes(Zeitstempel, Wert)) + geom_line()
###Output
_____no_output_____
###Markdown
The largest difference occurred on New Year 1979, when the temperature dropped overnight by 26.1°C, from almost 10°C down to -16.8°C. In contrast, the supposedly smallest difference within a day, on 10 September 2019 (0.0°C variation), is an artifact caused by missing data.
###Code
data %>% filter(Zeitstempel<ymd("2019-09-11"), Zeitstempel>=ymd("2019-09-08")) %>% ggplot(aes(Zeitstempel, Wert)) + geom_line()
###Output
_____no_output_____ |
notebooks/CSC2018 - Pandas.ipynb | ###Markdown
Exploring Stack ExchangeWhile everyone *loves* a fun dataset to explore, good data is expensive. It costs a significant amount of resources to generate, accurately curate, securely store, and provide robust access to. For instance, our cold-storage tape archive, [Ranch](https://www.tacc.utexas.edu/systems/ranch), grows at a rate of 8.5PB (~5.3%) per year. Despite these costs, data is often invaluable to both users and administrators.Today, we will be exploring data from Stack Exchange. While this is probably not the kind of data you interact with on a daily basis, everyone at this camp should have some familiarity from interacting with *at least* one [Stack Exchange Community](https://stackexchange.com/sites):- StackOverflow- Super User- TeX - LaTeX- ...and moreToday, you will be using Python to explore question and answer history from the Stack Exchange site of your choice. This data will be accessed over their public API. This is their **actual** data, and these methods can be extended to a variety of other datasets and websites. Objectives- Use Python [requests](http://docs.python-requests.org/en/master/) to download data- Import data into [Pandas](http://pandas.pydata.org/)- Explore data - Inspect and summarize data - Group records - Select and subset records - Visualize selection - Join two datasets together DependenciesWe will be using the following non-standard Python libraries:- [**requests** library](http://docs.python-requests.org/en/master/) *\(Already Installed\)*- [**pandas** library](http://pandas.pydata.org/) *\(Already Installed\)*
###Code
# Import necessary Libraries
import requests, json
import pandas as pd
# Render matplotlib in the notebook
%matplotlib inline
###Output
_____no_output_____
###Markdown
Stack Exchange QuestionsStack Exchange has a [well-documented API](https://api.stackexchange.com/), which contains endpoints for **each site**. You can perform any graphical interaction through the API while authenticated, but general information can also be retrieved anonymously. Just make sure you do not make more than 10,000 requests per day. (*I did while developing this notebook*)Beginning with the initial questions submitted by users, take a look at the [Questions API](https://api.stackexchange.com/docs/questions) webapp on the Stack Exchange site, and build a query that you would like to use with Python. Goals- Choose a site (default is StackOverflow)- Choose Start and/or End Date- Sort by creation- Limit the number of questions to 10 (`pagesize`) Make API Request
###Code
# API URL
url = 'https://api.stackexchange.com/2.2/questions'
params = dict(
site='stackoverflow', # stackoverflow (coding) questions
pagesize='10', # Number of questions to return
fromdate='1500163200',# Get epoch time from webapp
order='desc',
sort='creation'
)
resp = requests.get(url=url, params=params)
data = json.loads(resp.text)
print(json.dumps(data, indent=3))
###Output
_____no_output_____
###Markdown
Great! If you kept `pagesize` at 10, you should have a JSON response of 10 questions. If you decided to crank up your response size, you might have to scroll a bit. JSON StructureThis JSON response probably looks familiar if you have ever worked with Python dictionaries in the past. At the most basic level, a JSON is a collection of key and value pairs.```json{ "key1": value1, "key2": value2}```Instead of using a numerical index, you refer to each value with the corresponding key.- key1- key2This makes both the data structure and programmatic access human-readable. However, the lack of indices makes traditional access through looping somewhat difficult.
###Code
# Print first question title
############################
# Pull "items" json
# > Pull first record
# > Pull title
print(data['items'][0]['title'])
# You need to know the exact key names to traverse it
for item in data['items']:
# Print the question title
print("TAGS - %s"%(item['title']))
# Print the question tags
print(" [%s]\n"%(", ".join(item['tags'])))
###Output
_____no_output_____
###Markdown
Explore- Try pulling out the `answer_count` for each question- Try pulling out the `view_count` for each question- Try pulling out the submission date.- **Extra Credit** - [Convert the epoch time to human readable](https://stackoverflow.com/a/12400584) Converting to PandasInstead of testing you on your ability to traverse a JSON tree, the goal for today is to explore data using Pandas, so let's convert the JSON to a DataFrame.
###Code
questionsDF = pd.io.json.json_normalize(data['items'])
questionsDF
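# Added sketch addressing the Explore prompts above, assuming the API response
# includes the standard 'answer_count', 'view_count' and 'creation_date' fields.
print(questionsDF[['answer_count', 'view_count']].head())
print(pd.to_datetime(questionsDF['creation_date'], unit='s').head())  # epoch -> human readable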
###Output
_____no_output_____
###Markdown
[`json_normalize`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.io.json.json_normalize.html) takes a nested JSON and flattens it into a table. In our case, it flattened each return question in the `items` list. Child JSONs like owner, which described the original submitter, now have owner as a prefix in the column name. JSON```"owner": { "reputation": 1, "user_id": 6140730, "user_type": "registered", "profile_image": "https://www.gravatar.com/avatar/efa02138df0bc1f59618c365872caed6?s=128&d=identicon&r=PG&f=1", "display_name": "John", "link": "https://stackoverflow.com/users/6140730/john" }``` Table| Column Name | Value ||--|--|| owner.reputation | 1 || owner.user_id | 6140730 || owner.user_type | registered || owner.profile_image | https://www... || owner.display_name | John || owner.link | https://stackoverflow... | Exploring the DataWhen we transform the JSON data into a table, using `json_normalize`, the resulting table is actually a [Pandas DataFrame.](https://pandas.pydata.org/pandas-docs/stable/dsintro.htmldataframe)A DataFrame is a 2-dimensional data structure that can store data of different types(characters, integers, floating point values, factors, and more)in columns. It is similar to a spreadsheet or an SQL table or the data.frame in R. A DataFrame always has an index (0-based). An index refers to the row of an element in the data structure.You can see the **bold** index column on the left of our example. Viewing DataFrame AttributesBesides having text column headers, DataFrames come with some nice attributes and methods to view specific parts of the data. ColumnsYou often need to iterate over the columns of your table, and DataFrames expose those names
###Code
print(questionsDF.columns)
###Output
_____no_output_____
###Markdown
ShapeYou can also see how many rows and columns (rows, columns) are in your DataFrame by accessing the shape attribute.
###Code
print(questionsDF.shape)
###Output
_____no_output_____
###Markdown
HeadIf you have ever used the `head` command on a terminal to view the first N lines of a file, the head function of a DataFrame will look familiar to you. This is great for just peeking at the data and not overflowing your window.
###Code
questionsDF.head()
#questionsDF.head(2)
###Output
_____no_output_____
###Markdown
TailThere is also a tail command for looking at the last N rows of a DataFrame.
###Code
questionsDF.tail()
#questionsDF.tail(2)
###Output
_____no_output_____
###Markdown
Grouping RecordsMany of the columns in this data, like `owner.link`, may not be immediately useful to us. With a DataFrame, you can select and group specific columns for use in a downstream analysis without losing the original.For example, we could be interested in the `view_count` of each question. An analysis of this column could show how many people also encounter a similar problem and needed to seek help on Stack Exchange. Column GroupsWe can pull out this single column using two methods.
###Code
# Dot
print(questionsDF.view_count.head())
# Bracket
print(questionsDF['view_count'].head())
###Output
_____no_output_____
###Markdown
We can also produce similar statistics provided by the `summary()` function in R with the `describe()` function. This can be applied directly to our column selection as so.
###Code
questionsDF['view_count'].describe()
###Output
_____no_output_____
###Markdown
My example data only used the first 10 questions, and they're all very new, so they have very few views. Let's instead work on the latest 1,000 questions and generate the same description. Stack Exchange [limits](https://api.stackexchange.com/docs/throttle) the `pagesize` of the response to 100, so we will be pulling the first 10 pages.
###Code
# Latest 1000 questions
# Params pull 100 questsions per query
params = dict(
site='stackoverflow',
pagesize='100',
page='1',
order='desc',
sort='creation'
)
nPages = 10 #How many pages you want
data = []
import sys
print("Reading Page:")
for page in map(str, range(1,nPages+1)):
params['page']=page # Change page number
if int(page) > 1: sys.stdout.write(", ")
sys.stdout.write("%s"%(page))
data += json.loads(requests.get(url=url, params=params).text)['items']
questionsDF = pd.io.json.json_normalize(data)
# Drop the "migrated_from" columns
questionsDF = questionsDF[list(filter(lambda x: 'migrated' not in x, questionsDF.columns))]
questionsDF['view_count'].describe()
###Output
_____no_output_____
###Markdown
Now that we have a larger pool of data, you should check out other statistics that can be generated per column. Feel free to use another numerical column as well. ExploreThere are a bunch of [built in](https://pandas.pydata.org/pandas-docs/stable/api.htmlcomputations-descriptive-stats) descriptive functions, but these are good to check out.- describe()- nuniqe()- value_counts()
###Code
# How many unique users?
questionsDF['owner.user_id'].head()
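# Added: nunique() (mentioned above) actually answers the question in the comment,
# while head() only previews the column.
questionsDF['owner.user_id'].nunique()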
###Output
_____no_output_____
###Markdown
Two-Way GroupsIf you ever want to summarize by one or more variables, you can use the `groupby` method. In our case, it would be interesting to look at `view_count` statistics of answered and unanswered questions.
###Code
questionsDF.groupby('is_answered')['view_count'].describe()
###Output
_____no_output_____
###Markdown
We can see that while there are fewer answered questions, their view count (in my test) is almost 100% higher. Neat! ExploreTake some time using the `groupby` method to explore other cool trends.- Owner reputation - Is the submitter a bot?- Score - Is the question real?
###Code
#questionsDF.groupby('is_answered')['owner.reputation'].describe()
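# Added sketch for the Explore prompts above (uncomment and adapt as needed):
# questionsDF.groupby('is_answered')['owner.reputation'].describe()  # do high-reputation users get answered more often?
# questionsDF.groupby('is_answered')['score'].describe()             # are answered questions scored differently?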
###Output
_____no_output_____
###Markdown
Selecting and Subsetting RecordsYou can also select a subset of the data using criteria. For example, we can select all rows that have a `view_count` greater than 5.
###Code
questionsDF[questionsDF.view_count > 5]
###Output
_____no_output_____
###Markdown
ExploreExperiment with the- `>`, `<`- `==`, `!=`- `>=`, `<=`operators on numerical data. If you have extra time, look for questions that contain tags that you know. The tags are actually a list, so you can search for tags using the `in` operator.
###Code
# Need to use the map operation on tags
questionsDF[questionsDF.tags.map(lambda x: 'python' in x)]
###Output
_____no_output_____
###Markdown
Visualizing the ResultsWhile the tables we have been generating are nice, they still contain thousands of rows. A single figure could help visualize the data as a whole. Instead of crafting specific matplotlib calls, Pandas built a universal [`plot()` function](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html) into the DataFrame object to simplify figure generation.By stating that we want to generate a histogram with `kind='hist'`, we can look at the `view_count` frequency.
###Code
questionsDF['view_count'].plot(kind='hist')
# Try increasing the resolution with the "bins" parameter
# Try a square root transform of the view count
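# Added sketch for the two prompts above:
# questionsDF['view_count'].plot(kind='hist', bins=50)           # higher resolution
# questionsDF['view_count'].pow(0.5).plot(kind='hist', bins=50)  # square root transform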
###Output
_____no_output_____
###Markdown
We can also plot our two-way tables.
###Code
questionsDF.groupby('is_answered')['view_count'].plot(kind='hist', legend=True)
###Output
_____no_output_____
###Markdown
ExploreTry generating a few figures on your own. Joining TablesYou can even join two datasets. Let's grab some answers so we can try joining them to their corresponding questions.
###Code
url = 'https://api.stackexchange.com/2.2/answers'
params = dict(
site='stackoverflow',
pagesize='100',
page='1',
order='desc',
sort='creation'
)
nPages = 10 #How many pages you want
data = []
import sys
print("Reading Page:")
for page in map(str, range(1,nPages+1)):
params['page']=page # Change page number
if int(page) > 1: sys.stdout.write(", ")
sys.stdout.write("%s"%(page))
data += json.loads(requests.get(url=url, params=params).text)['items']
answersDF = pd.io.json.json_normalize(data)
answersDF.head()
###Output
_____no_output_____
###Markdown
Inner JoinWe can return the intersection of all questions that also map to an answer by using an inner join. Assuming we had the following example data:```Questions---------------------QuestionID 0 1 2 3ViewCount 2 4 10 7AnswerID NA 1 2 3Answers---------------------QuestionID 5 1 2 3Score 3 5 3 1AnswerID 0 1 2 3```An inner join would yield```Questions X Answers---------------------QuestionID 1 2 3ViewCount 4 10 7Score 5 3 1AnswerID 1 2 3```We join both `questionsDF` and `answersDF` on the `question_id` column that they both share.
###Code
merged = pd.merge(left=questionsDF, right=answersDF[['answer_id','question_id']], left_on="question_id", right_on="question_id")
print(merged.shape)
merged.head()
print(questionsDF.columns)
###Output
_____no_output_____
###Markdown
Left JoinLeft joins return all items from the first set, and any items from the second set that overlap with the first. This is useful if we want ALL questions returned, and any questions that also match.Using the table from the first example, a left join would yield```Questions LJ Answers---------------------QuestionID 0 1 2 3ViewCount 2 4 10 7Score NA 5 3 1AnswerID NA 1 2 3```Notice that whenever there is no match on the right, fields are filled in as NA.
###Code
merged = pd.merge(left=questionsDF, right=answersDF, left_on="question_id", right_on="question_id", how="left")
print(merged.shape)
merged.head()
###Output
_____no_output_____
###Markdown
ExploreThere are also Right and Outer joins to explore. Take a look at [the documentation](https://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging) and see if you can discover anything fun.
###Code
# Try joining some data
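# Added sketch: an outer join keeps all rows from both tables and fills
# non-matching rows with NaN.
merged_outer = pd.merge(left=questionsDF, right=answersDF, left_on="question_id", right_on="question_id", how="outer")
print(merged_outer.shape)
merged_outer.head()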
###Output
_____no_output_____ |
notebooks/5.0-lm-optimization-tsne.ipynb | ###Markdown
Parameter optimization for t-SNE
###Code
# Load the "autoreload" extension
%load_ext autoreload
# always reload modules marked with "%aimport"
%autoreload 1
import os
import sys
from dotenv import load_dotenv, find_dotenv
import numpy as np
import pandas as pd
import hdbscan
import scipy
#Visualisation Libraries
%matplotlib inline
# Uncomment if you want interactive 3D plots --> does not work in the github rendering
#%matplotlib notebook
from copy import deepcopy
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
matplotlib.style.use('ggplot')
import seaborn as sns
# add the 'src' directory as one where we can import modules
src_dir = os.path.join(os.getcwd(), os.pardir, 'src')
sys.path.append(src_dir)
%aimport visualization.visualize
from visualization.visualize import get_color_encoding
from visualization.visualize import plot_timeseries_clustering
from visualization.visualize import get_plot_timeseries_clustering_variables
%aimport data.preprocessing
from data.preprocessing import Preprocessor
%aimport data.download
from data.download import DatasetDownloader
%aimport utils.utilities
from utils.utilities import get_cluster_labels
%aimport models.cluster
from models.cluster import get_clustering_performance
%aimport models.dimensionality_reduction
from models.dimensionality_reduction.TSNEModel import TSNEModel
from models.dimensionality_reduction.BayesianTSNEOptimizer import BayesianTSNEOptimizer
###Output
_____no_output_____
###Markdown
Load data from disk.
###Code
# Load data from disk.
data_dir = os.path.join(os.path.abspath(DatasetDownloader.get_data_dir()))
file_path = os.path.join(data_dir, "preprocessed","preprocessed_data.dat")
dfs = Preprocessor.restore_preprocessed_data_from_disk(file_path)
###Output
_____no_output_____
###Markdown
Calculate distances.
###Code
trips_cut_per_30_sec = Preprocessor.get_cut_trip_snippets_for_total(dfs)
euclidean_distances = Preprocessor.calculate_distance_for_n2(trips_cut_per_30_sec, metric="euclidean")
###Output
_____no_output_____
###Markdown
Prepare distance data for fitting of t-SNE model.
###Code
categorical_columns = ["mode", "notes", "scripted", "token", "trip_id"]
segment_distance_matrix = euclidean_distances.drop(categorical_columns,axis=1)
###Output
_____no_output_____
###Markdown
Next steps: Integrate BayesianTSNEOptimizer, start optimization (record results and ingest at next start as initialization values).
###Code
# Define parameter ranges, fix static variables.
param_ranges = deepcopy(TSNEModel.PARAMETER_RANGES)
param_ranges["metric"] = (TSNEModel.CATEGORICAL_VALUES["metric"].index("precomputed"),)
param_ranges["init_method"] = (TSNEModel.CATEGORICAL_VALUES["init_method"].index("random"),)
param_ranges["random_state"] = (42,)
param_ranges["n_components"] = (3,)
param_ranges["n_iter"] = (5000,)
#param_ranges["min_grad_norm"] = (0.0000001,)
# Initialize new BO object.
boOpt = BayesianTSNEOptimizer(
high_dim_data=segment_distance_matrix,
cluster_memberships=euclidean_distances["mode"].values,
parameters=param_ranges
)
# Load existing results.
history = BayesianTSNEOptimizer.load_result_dict("tsne_results")
if history is not None:
print("Number of models generated so far: ", len(history["values"]))
# Execute optimization; initialize with existing results.
# Use higher init_fraction if not many initialization datapoints are available.
results = boOpt.run(num_iterations=30, init_fraction=0.1, init_values=history, kappa=6.0)
# Save merged result set (new results and existing ones).
all_results = BayesianTSNEOptimizer.merge_result_dictionaries(results, history)
BayesianTSNEOptimizer.persist_result_dict(
results=all_results,
filename="tsne_results"
)
###Output
Number of models generated so far: 111
[31mInitialization[0m
[94m-------------------------------------------------------------------------------------------------------------------[0m
Step | Time | Value | angle | early_exaggeration | learning_rate | min_grad_norm | perplexity |
1 | 00m19s | [35m 0.36532[0m | [32m 0.7946[0m | [32m 1.9608[0m | [32m 410.2173[0m | [32m 0.0769[0m | [32m 20.2949[0m |
2 | 00m56s | 0.31999 | 0.4394 | 13.8880 | 811.6344 | 0.0136 | 15.7405 |
3 | 00m43s | 0.30678 | 0.3427 | 35.5272 | 886.9313 | 0.0915 | 37.6761 |
4 | 00m00s | 0.34443 | 0.3128 | 1.0688 | 654.4796 | 0.0936 | 49.7076 |
5 | 00m00s | 0.34637 | 0.3437 | 1.0738 | 562.3034 | 0.0588 | 49.4633 |
6 | 00m00s | 0.35604 | 0.8542 | 1.5960 | 464.4792 | 0.0063 | 47.0979 |
7 | 00m00s | [35m 0.37111[0m | [32m 0.8249[0m | [32m 26.5449[0m | [32m 10.8491[0m | [32m 0.0521[0m | [32m 11.4050[0m |
8 | 00m00s | 0.29904 | 0.9107 | 49.8520 | 1889.6890 | 0.0635 | 99.7347 |
9 | 00m00s | 0.30641 | 0.6615 | 49.1872 | 797.1386 | 0.0623 | 95.6741 |
10 | 00m00s | 0.30444 | 0.8378 | 49.9533 | 922.9950 | 0.0466 | 7.8946 |
11 | 00m00s | 0.30098 | 0.1146 | 49.2372 | 1828.4363 | 0.0918 | 43.8103 |
12 | 00m00s | 0.30241 | 0.8151 | 49.5745 | 1640.4534 | 0.0368 | 56.4864 |
13 | 00m00s | 0.29483 | 0.6326 | 49.1647 | 1999.3934 | 0.0765 | 26.0882 |
14 | 00m00s | 0.30045 | 0.6520 | 48.9578 | 203.7880 | 0.0290 | 1.1384 |
15 | 00m00s | 0.30075 | 0.1469 | 49.9815 | 468.0764 | 0.0314 | 2.1522 |
16 | 00m00s | 0.34794 | 0.5456 | 1.6581 | 69.3128 | 0.0522 | 1.9923 |
17 | 00m00s | 0.35501 | 0.3051 | 1.1870 | 159.7261 | 0.0688 | 53.0406 |
18 | 00m00s | 0.34476 | 0.6475 | 1.7978 | 1108.4676 | 0.0875 | 99.8143 |
19 | 00m00s | 0.35662 | 0.1651 | 2.5197 | 506.5098 | 0.0751 | 46.2949 |
20 | 00m00s | 0.30560 | 0.9940 | 49.9472 | 697.8204 | 0.0136 | 45.0828 |
21 | 00m00s | 0.29398 | 0.7469 | 49.3801 | 1168.1650 | 0.0964 | 2.6211 |
22 | 00m00s | 0.33016 | 0.3555 | 1.8607 | 1909.4509 | 0.0838 | 99.9934 |
23 | 00m00s | 0.34518 | 0.9032 | 3.0345 | 899.9167 | 0.0966 | 98.7900 |
24 | 00m00s | 0.33371 | 0.1347 | 1.3138 | 1721.9947 | 0.0087 | 99.7203 |
25 | 00m00s | 0.31057 | 0.9181 | 27.7648 | 858.8472 | 0.0714 | 50.4817 |
26 | 00m00s | 0.33083 | 0.9182 | 2.6319 | 955.3677 | 0.0232 | 1.2861 |
27 | 00m00s | 0.34105 | 0.3955 | 1.3353 | 1456.6966 | 0.0605 | 62.5465 |
28 | 00m00s | 0.34912 | 0.6455 | 1.0282 | 31.0171 | 0.0379 | 72.6535 |
29 | 00m00s | 0.34681 | 0.8226 | 16.3877 | 137.1769 | 0.0881 | 99.0196 |
30 | 00m00s | 0.30812 | 0.8855 | 49.3717 | 546.4269 | 0.0199 | 84.8115 |
31 | 00m00s | 0.30767 | 0.5217 | 20.2927 | 385.7861 | 0.0647 | 1.0769 |
32 | 00m00s | 0.34216 | 0.4065 | 1.8400 | 1334.4834 | 0.0685 | 99.8095 |
33 | 00m00s | 0.35441 | 0.3233 | 1.4131 | 721.8197 | 0.0576 | 49.9436 |
34 | 00m00s | 0.35194 | 0.1446 | 1.2145 | 342.6239 | 0.0862 | 49.3531 |
35 | 00m00s | 0.30448 | 0.6427 | 49.4127 | 1039.2726 | 0.0585 | 98.2775 |
36 | 00m00s | 0.36680 | 0.1808 | 38.4465 | 1.0348 | 0.0989 | 50.2728 |
37 | 00m00s | 0.30684 | 0.4325 | 1.0943 | 1727.3216 | 0.0480 | 1.3386 |
38 | 00m00s | 0.33220 | 0.2187 | 1.6714 | 1999.4261 | 0.0514 | 66.2029 |
39 | 00m00s | 0.33149 | 0.6941 | 1.4036 | 1204.4865 | 0.0586 | 2.2114 |
40 | 00m00s | 0.34877 | 0.9680 | 2.1708 | 1600.8236 | 0.0581 | 47.5393 |
41 | 00m00s | 0.33929 | 0.6078 | 1.5280 | 1668.0727 | 0.0880 | 59.4251 |
42 | 00m00s | 0.33594 | 0.6934 | 1.1454 | 745.6466 | 0.0139 | 1.1292 |
43 | 00m00s | 0.35239 | 0.2164 | 2.3584 | 289.3523 | 0.0565 | 99.4950 |
44 | 00m00s | 0.32384 | 0.5627 | 1.4237 | 1437.2627 | 0.0655 | 2.4658 |
45 | 00m00s | 0.33303 | 0.2944 | 1.2474 | 689.8908 | 0.0997 | 1.0673 |
46 | 00m00s | 0.33537 | 0.2037 | 1.1818 | 1408.7275 | 0.0400 | 98.8258 |
47 | 00m00s | 0.30400 | 0.3412 | 48.6239 | 937.0568 | 0.0600 | 97.1822 |
48 | 00m00s | 0.34244 | 0.3232 | 1.1471 | 1321.4812 | 0.0962 | 56.2816 |
49 | 00m00s | 0.34460 | 0.1017 | 1.1956 | 427.3550 | 0.0920 | 98.5182 |
50 | 00m00s | 0.30011 | 0.5919 | 49.6391 | 1513.7555 | 0.0173 | 98.6602 |
51 | 00m00s | 0.34692 | 0.1978 | 1.0616 | 199.4631 | 0.0308 | 98.9227 |
52 | 00m00s | 0.34990 | 0.1464 | 1.6259 | 924.6001 | 0.0921 | 63.1840 |
53 | 00m00s | 0.31698 | 0.8764 | 49.9315 | 194.2295 | 0.0463 | 98.5747 |
54 | 00m00s | 0.35381 | 0.1353 | 2.0348 | 231.6353 | 0.0288 | 3.0429 |
55 | 00m00s | 0.35411 | 0.1714 | 1.5656 | 605.3854 | 0.0720 | 42.2867 |
56 | 00m00s | 0.33129 | 0.3957 | 1.0208 | 1733.5334 | 0.0501 | 56.7839 |
57 | 00m00s | 0.33962 | 0.1494 | 1.0910 | 1531.5253 | 0.0613 | 52.3546 |
58 | 00m00s | 0.36701 | 0.1738 | 3.6652 | 4.8300 | 0.0229 | 99.7852 |
59 | 00m00s | 0.30277 | 0.7129 | 49.7902 | 1113.4985 | 0.0393 | 99.5399 |
60 | 00m00s | 0.29240 | 0.9910 | 47.7636 | 1573.5244 | 0.0476 | 1.3962 |
61 | 00m00s | 0.29126 | 0.5331 | 49.8683 | 1041.5724 | 0.0156 | 1.7631 |
62 | 00m00s | 0.31017 | 0.6784 | 49.3533 | 82.9671 | 0.0823 | 1.2403 |
63 | 00m00s | 0.30405 | 0.2130 | 47.6998 | 1344.2020 | 0.0397 | 99.9642 |
64 | 00m00s | 0.29982 | 0.8467 | 43.7637 | 1999.2604 | 0.0353 | 99.7493 |
65 | 00m00s | 0.30329 | 0.3335 | 49.2427 | 1220.2798 | 0.0299 | 51.7465 |
66 | 00m00s | 0.29671 | 0.3971 | 49.6265 | 1924.1592 | 0.0606 | 2.2726 |
67 | 00m00s | 0.34636 | 0.2359 | 1.0215 | 790.2905 | 0.0826 | 32.2752 |
68 | 00m00s | 0.34772 | 0.9095 | 1.0312 | 267.1386 | 0.0888 | 59.3099 |
69 | 00m00s | 0.34760 | 0.9509 | 1.1635 | 551.8756 | 0.0449 | 3.5937 |
70 | 00m00s | 0.30741 | 0.5151 | 49.6511 | 316.7186 | 0.0942 | 99.3118 |
71 | 00m00s | 0.34510 | 0.8333 | 1.2514 | 1066.0585 | 0.0737 | 60.9584 |
72 | 00m00s | 0.34068 | 0.7097 | 1.4955 | 1511.2548 | 0.0826 | 99.0362 |
73 | 00m00s | 0.35735 | 0.2917 | 27.4145 | 27.0000 | 0.0716 | 57.0335 |
74 | 00m00s | 0.30869 | 0.5594 | 1.7658 | 1547.5589 | 0.0310 | 1.9237 |
75 | 00m00s | 0.32844 | 0.2313 | 1.0073 | 1930.2638 | 0.0373 | 36.1783 |
76 | 00m00s | 0.34574 | 0.5603 | 1.4974 | 751.9135 | 0.0526 | 99.1311 |
77 | 00m00s | 0.31363 | 0.6952 | 2.7045 | 1347.9938 | 0.0974 | 1.0621 |
78 | 00m00s | 0.30159 | 0.6477 | 49.9835 | 1432.0237 | 0.0185 | 35.4033 |
79 | 00m00s | 0.34933 | 0.7493 | 2.2691 | 591.5771 | 0.0335 | 99.5395 |
80 | 00m00s | 0.35306 | 0.6603 | 48.9842 | 7.2234 | 0.0789 | 1.1004 |
81 | 00m00s | 0.30994 | 0.7053 | 49.7156 | 455.6846 | 0.0331 | 98.1077 |
82 | 00m00s | 0.29578 | 0.9953 | 7.0567 | 1997.4346 | 0.0863 | 1.2954 |
83 | 00m00s | 0.35466 | 0.4304 | 2.5876 | 14.8857 | 0.0312 | 2.4159 |
84 | 00m00s | 0.33256 | 0.5681 | 2.1619 | 1985.3270 | 0.0778 | 99.2251 |
85 | 00m00s | 0.34764 | 0.5218 | 1.9850 | 682.9564 | 0.0562 | 99.8516 |
86 | 00m00s | 0.34407 | 0.6687 | 1.4990 | 1003.4461 | 0.0445 | 97.3465 |
87 | 00m00s | 0.34689 | 0.7987 | 1.0057 | 341.4508 | 0.0887 | 99.4471 |
88 | 00m00s | 0.34247 | 0.7645 | 1.3523 | 450.4442 | 0.0068 | 2.3155 |
89 | 00m00s | 0.33318 | 0.7533 | 1.2512 | 1828.1274 | 0.0516 | 99.1697 |
90 | 00m00s | 0.33421 | 0.1134 | 1.7416 | 885.9026 | 0.0521 | 1.2815 |
91 | 00m00s | 0.37075 | 0.2868 | 5.6124 | 6.7732 | 0.0687 | 97.4081 |
92 | 00m00s | 0.34846 | 0.7601 | 1.2738 | 1252.3122 | 0.0052 | 3.2058 |
93 | 00m00s | 0.35549 | 0.1693 | 48.6268 | 3.1480 | 0.0359 | 99.2178 |
94 | 00m00s | 0.34638 | 0.3104 | 1.3220 | 532.0899 | 0.0411 | 98.6401 |
95 | 00m00s | 0.31043 | 0.2376 | 2.9981 | 1823.4807 | 0.0326 | 1.5688 |
96 | 00m00s | 0.34752 | 0.7635 | 1.9882 | 823.4975 | 0.0419 | 98.3100 |
97 | 00m00s | 0.33245 | 0.3216 | 1.2128 | 1625.1680 | 0.0609 | 99.9586 |
98 | 00m00s | 0.30432 | 0.1789 | 49.5573 | 321.2464 | 0.0671 | 4.4460 |
99 | 00m00s | 0.34112 | 0.5811 | 1.1928 | 1165.1755 | 0.0392 | 71.0937 |
100 | 00m00s | 0.35545 | 0.8400 | 1.3813 | 1.4476 | 0.0179 | 44.1270 |
101 | 00m00s | 0.30167 | 0.1996 | 49.0180 | 1725.6710 | 0.0778 | 99.7398 |
102 | 00m00s | 0.34870 | 0.8821 | 3.7921 | 136.8910 | 0.0272 | 1.9920 |
103 | 00m00s | 0.30269 | 0.4553 | 49.4879 | 577.3544 | 0.0774 | 3.3832 |
104 | 00m00s | 0.29275 | 0.1786 | 48.2983 | 771.1327 | 0.0988 | 1.6598 |
105 | 00m00s | 0.32891 | 0.1539 | 1.1631 | 1654.2860 | 0.0264 | 3.6028 |
106 | 00m00s | 0.30976 | 0.4031 | 48.5392 | 637.3870 | 0.0953 | 99.8240 |
107 | 00m00s | 0.33732 | 0.5278 | 1.3803 | 1024.5311 | 0.0631 | 2.3021 |
108 | 00m00s | 0.35084 | 0.1584 | 1.2213 | 326.3545 | 0.0895 | 3.8243 |
109 | 00m00s | 0.29470 | 0.1645 | 48.7770 | 1314.0140 | 0.0867 | 1.9793 |
110 | 00m00s | 0.34556 | 0.7453 | 1.0318 | 72.9521 | 0.0190 | 93.4363 |
111 | 00m00s | 0.29109 | 0.5575 | 49.9870 | 1738.7781 | 0.0266 | 3.0750 |
112 | 00m00s | 0.33106 | 0.3663 | 1.1014 | 1133.2438 | 0.0922 | 1.5781 |
113 | 00m00s | 0.35343 | 0.1119 | 1.0332 | 412.4994 | 0.0121 | 55.0950 |
114 | 00m00s | 0.34265 | 0.7031 | 2.4904 | 1257.8046 | 0.0630 | 99.6657 |
###Markdown
Sort results by score, pick highest.
###Code
all_results_sorted_idx = np.argsort(all_results["values"])
max_score_index = all_results_sorted_idx[-1]
best_param_set = all_results["params"][max_score_index]
print(best_param_set)
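# Added: also report the best objective value reached during the optimization.
print("Best score:", all_results["values"][max_score_index])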
###Output
{'perplexity': 6.2975330305384913, 'early_exaggeration': 26.398296478695748, 'learning_rate': 11.256126673690892, 'angle': 0.15925222040151887, 'min_grad_norm': 0.069598686192291315}
###Markdown
(Re-)Generate model with given parameter set, since we didn't store the results for each run.
###Code
tsne = TSNEModel(num_dimensions=3,
perplexity=best_param_set["perplexity"],
early_exaggeration=best_param_set["early_exaggeration"],
learning_rate=best_param_set["learning_rate"],
num_iterations=5000,
min_grad_norm=best_param_set["min_grad_norm"],
random_state=42,
angle=best_param_set["angle"],
metric='precomputed',
init_method='random')
# Fit t-SNE model.
tsne_results = tsne.run(segment_distance_matrix.values)
transport_modes = {
'WALK': 'blue',
'METRO': 'red',
'TRAM': 'green'
}
tokens = {
'355007075245007': 'x',
'358568053229914': 'o',
'868049020858898': 'v'
}
fig, ax = plt.subplots(2, 3, figsize=(20, 10))
for transport_mode, transport_mode_color in transport_modes.items():
transport_mode_scripted = euclidean_distances[
(euclidean_distances["mode"] == transport_mode) &
(euclidean_distances["notes"].str.contains('scripted'))
]
transport_mode_unscripted = euclidean_distances[
(euclidean_distances["mode"] == transport_mode) &
(~(euclidean_distances["notes"].str.contains('scripted', na=False)))
]
for token, token_symbol in tokens.items():
transport_mode_scripted_for_token = transport_mode_scripted[
transport_mode_scripted["token"] == token
].index.values
transport_mode_unscripted_for_token = transport_mode_unscripted[
transport_mode_unscripted["token"] == token
].index.values
ax[0, 0].scatter(
tsne_results[transport_mode_scripted_for_token, 0],
tsne_results[transport_mode_scripted_for_token, 1],
c=transport_mode_color,
marker=token_symbol,
alpha=0.5
)
ax[0, 1].scatter(
tsne_results[transport_mode_scripted_for_token, 0],
tsne_results[transport_mode_scripted_for_token, 2],
c=transport_mode_color,
marker=token_symbol,
alpha=0.5
)
ax[0, 2].scatter(
tsne_results[transport_mode_scripted_for_token, 1],
tsne_results[transport_mode_scripted_for_token, 2],
c=transport_mode_color,
marker=token_symbol,
alpha=0.5
)
ax[1, 0].scatter(
tsne_results[transport_mode_unscripted_for_token, 0],
tsne_results[transport_mode_unscripted_for_token, 1],
c=transport_mode_color,
marker=token_symbol,
alpha=0.5
)
ax[1, 1].scatter(
tsne_results[transport_mode_unscripted_for_token, 0],
tsne_results[transport_mode_unscripted_for_token, 2],
c=transport_mode_color,
marker=token_symbol,
alpha=0.5
)
ax[1, 2].scatter(
tsne_results[transport_mode_unscripted_for_token, 1],
tsne_results[transport_mode_unscripted_for_token, 2],
c=transport_mode_color,
marker=token_symbol,
alpha=0.5
)
ax[0, 0].set_title('Scripted')
ax[0, 1].set_title('Scripted')
ax[0, 2].set_title('Scripted')
ax[1, 0].set_title('Unscripted')
ax[1, 1].set_title('Unscripted')
ax[1, 2].set_title('Unscripted')
#ax[0].legend(loc='upper center', bbox_to_anchor=(1, 0.5))
#ax[1].legend(loc='upper center', bbox_to_anchor=(1, 0.5))
###Output
_____no_output_____ |
Functional Programming in Python/1_Functional Programming in Python.ipynb | ###Markdown
Functional Programming in Python[Tutorial playlist](https://www.youtube.com/playlist?list=PLP8GkvaIxJP1z5bu4NX_bFrEInBkAgTMr) [Chinese documentation](https://docs.python.org/zh-cn/3/howto/functional.html) Immutable Data StructuresImmutable data structures cannot be modified in-place, which can help reduce bugs
###Code
import collections
Scientist = collections.namedtuple('Scientist',[
'name',
'field',
'born',
'nobel',
])
scientists = (
Scientist(name=' Ada Lovelace', field='math', born=1815, nobel=False),
Scientist(name=' Emmy Noether', field='math', born=1882, nobel=False),
Scientist(name='Marie Curie', field='physics', born=1867, nobel=True),
Scientist(name=' Tu-Youyou', field='chemistry', born=1930, nobel=True),
Scientist(name=' Ada-Yonath', field='chemistry', born=1939, nobel=True),
Scientist(name=' Vera Rubin', field='astronomy',born=1928, nobel=False),
Scientist(name='Sally Ride', field='physics', born=1951, nobel=False),
)
scientists[0].name
from pprint import pprint
pprint(scientists)
###Output
(Scientist(name=' Ada Lovelace', field='math', born=1815, nobel=False),
Scientist(name=' Emmy Noether', field='math', born=1882, nobel=False),
Scientist(name='Marie Curie', field='physics', born=1867, nobel=True),
Scientist(name=' Tu-Youyou', field='chemistry', born=1930, nobel=True),
Scientist(name=' Ada-Yonath', field=' chemistry', born=1939, nobel=True),
Scientist(name=' Vera Rubin', field='astronomy', born=1928, nobel=False),
Scientist(name='Sally Ride', field='physics', born=1951, nobel=False))
###Markdown
The `filter()` Function
###Code
filter(lambda x: x.nobel is True, scientists)
fs = filter(lambda x: x.nobel is True, scientists)
next(fs)
next(fs)
next(fs)
next(fs)
fs = tuple(filter(lambda x: x.nobel is True, scientists))
fs
pprint(tuple(filter(lambda x: True, scientists)))
pprint(tuple(filter(lambda x: x.field == 'physics', scientists)))
pprint(tuple(filter(lambda x: x.field == 'physics' and x.nobel, scientists)))
for x in scientists:
if x.nobel is True:
print(x)
def nobel_filter(x):
return x.nobel is True
pprint(tuple(filter(nobel_filter, scientists)))
# list comprehension
[x for x in scientists if x.nobel is True]
pprint([x for x in scientists if x.nobel is True])
pprint(tuple([x for x in scientists if x.nobel is True]))
# not need to use list as imtermediate
pprint(tuple(x for x in scientists if x.nobel is True))
tuple([1,2,3])
tuple(1,2,3)
###Output
_____no_output_____
###Markdown
The `map()` Function
###Code
names_and_ages = tuple(map(lambda x: {'name': x.name, 'age': 2017 - x.born}, scientists))
names_and_ages
pprint(names_and_ages)
# list comprehension
[{'name': x.name, 'age': 2017 - x.born} for x in scientists]
# generator
tuple({'name': x.name, 'age': 2017 - x.born} for x in scientists)
tuple({'name': x.name.upper(), 'age': 2017 - x.born} for x in scientists)
###Output
_____no_output_____
###Markdown
The `reduce()` Function
###Code
from functools import reduce
names_and_ages = tuple({'name': x.name.upper(), 'age': 2017 - x.born} for x in scientists)
pprint(names_and_ages)
total_age = reduce(lambda acc, val: acc + val['age'], names_and_ages, 0)
total_age
sum(x['age'] for x in names_and_ages)
def reducer(acc, val):
acc[val.field].append(val.name)
return acc
scientists_by_field = reduce(reducer, scientists, {'math': [], 'physics': [], 'chemistry': [], 'astronomy': []})
pprint(scientists_by_field)
import collections
scientists_by_field = reduce(reducer, scientists, collections.defaultdict(list))
pprint(scientists_by_field)
###Output
defaultdict(<class 'list'>,
{'astronomy': [' Vera Rubin'],
'chemistry': [' Tu-Youyou', ' Ada-Yonath'],
'math': [' Ada Lovelace', ' Emmy Noether'],
'physics': ['Marie Curie', 'Sally Ride']})
###Markdown
defaultdict
###Code
dd = collections.defaultdict(list)
dd
dd['doesnetexist']
dd
dd['doesnetexist---2']
dd
dd['xyz'].append(1)
dd['xyz'].append(2)
dd['xyz'].append(3)
dd
import itertools
scientists_by_field5 = {item[0]: list(item[1]) for item in itertools.groupby(scientists, lambda x: x.field)}
scientists_by_field5
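# Note (added): itertools.groupby only groups *consecutive* items that share a key,
# so the input should be sorted by field first to get exactly one group per field.
scientists_by_field5_sorted = {
    field: [x.name for x in group]
    for field, group in itertools.groupby(sorted(scientists, key=lambda s: s.field), lambda s: s.field)
}
scientists_by_field5_sorted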
# lambda function for fun
import functools
scientists_by_field = functools.reduce(lambda acc, val:{**acc, **{val.field: acc[val.field] + [val.name]}}, scientists, {'math': [], 'physics': [], 'chemistry': [], 'astronomy': []})
pprint(scientists_by_field)
###Output
{'astronomy': [' Vera Rubin'],
'chemistry': [' Tu-Youyou', ' Ada-Yonath'],
'math': [' Ada Lovelace', ' Emmy Noether'],
'physics': ['Marie Curie', 'Sally Ride']}
|
Week 02 - Data Science Libraries/2- Matplotlib.ipynb | ###Markdown
MatplotlibAs its name suggests, Matplotlib is a library for creating plots, graphs, charts, etc., of data. Its syntax is influenced by Matlab. It is possible to visualize data contained in simple lists and tuples, but Matplotlib can also work effectively with NumPy and Pandas data structures. Below, we will specifically work with the `pyplot` routines.You can view the documentation in more detail here:* [Documentation](https://matplotlib.org/tutorials/introductory/pyplot.html)* [Cheatsheet](https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Matplotlib_Cheat_Sheet.pdf)
###Code
import numpy as np
import matplotlib
import matplotlib.pyplot as plt # pyplot gives a matlab like feel.
# need the below for presenting plots in Jupyiter notebook.
%matplotlib inline
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
result = plt.plot(x, y) # returns a list of objects.
plt.xlabel("X") # x-axis label
plt.ylabel("Y") # y-axis label
plt.title("Example 1") # title of the graph
plt.suptitle("A Simple Plot") # title of the entire graph
plt.show() # show the plot
###Output
_____no_output_____
###Markdown
An even simpler plot can be created by providing only a single list of numbers. These are taken as y values. The corresponding x values are just the sequence 0,1,2,...
###Code
plt.plot([1, 4, 9, 16]) # x-axis is the index of the list.
plt.xlabel("X") # x-axis label
plt.ylabel("Y") # y-axis label
plt.title("A single list is assumed to be y-values") # title of the graph
plt.show()
###Output
_____no_output_____
###Markdown
We can adjust the viewable axes using `plt.axis([xmin,xmax,ymin,ymax])`
###Code
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
plt.plot(x, y)
plt.xlabel("X")
plt.ylabel("Y")
plt.axis(
[4, 8, 10, 70]
) # first two parameters are minimum and maximum x values to show in the plot, the second two are minimum and maximum y values.
plt.show()
###Output
_____no_output_____
###Markdown
Formatting pointsAn optional string can be used to format the plotted data. The format is based on the parameters used in Matlab. E.g., `r*` will make red stars, `b-` will make a solid blue line.
###Code
plt.plot(
x, y, "r*"
) # 'r*' is a matlab-like formatting string; 'r' for red, '*' for stars
plt.xlabel("X") # x-axis label
plt.ylabel("Y") # y-axis label
plt.title("Formatted Data Points") # title of the graph
plt.show()
###Output
_____no_output_____
###Markdown
**Line Styles**| character | description || --- | --- || '-' | solid line style || '--' | dashed line style || '-.' | dash-dot line style || ':' | dotted line style || '.' | point marker || ',' | pixel marker || 'o' | circle marker || 'v' | triangle_down marker || '^' | triangle_up marker || '<' | triangle_left marker || '>' | triangle_right marker || '1' | tri_down marker || '2' | tri_up marker || '3' | tri_left marker || '4' | tri_right marker || 's' | square marker || 'p' | pentagon marker || '*' | star marker || 'h' | hexagon1 marker || 'H' | hexagon2 marker || '+' | plus marker || 'x' | x marker || 'D' | diamond marker || 'd' | thin_diamond marker || '_' | hline marker |**Colors**| character | color || --- | --- || b | blue || g | green || r | red || c | cyan || m | magenta || y | yellow || k | black || w | white | Formatting multiple seriesIn general, `plt.plot()` accepts a sequence of alternating x,y values, each having an optional format string. plot(x1,y1,format1,x2,y2,format2,x3,y3,format3) In this way, multiple series can be plotted.
###Code
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y2 = [1, 8, 27, 64, 125, 216, 343, 512, 729, 1000]
y3 = [10, 18, 37, 74, 135, 226, 353, 522, 739, 1010]
y4 = [20, 28, 47, 84, 145, 236, 363, 532, 749, 1020]
result = plt.plot(
x, y2, "r-", x, y3, "b--", x, y4, "g--"
) # 'r-', 'b--', 'g--' are matlab-like formatting strings.
plt.xlabel("X") # x-axis label
plt.ylabel("Y") # y-axis label
plt.title("Plotting multiple series") # title of the graph
plt.show()
###Output
_____no_output_____
###Markdown
We can also "zoom in" by placing limits on the displayed x and y axes using the `plt.xlim()` and `plt.ylim()` functions
###Code
result = plt.plot(
x, y2, "r-", x, y3, "b--", x, y4, "g--"
) # 'r-', 'b--', 'g--' are matlab-like formatting strings.
plt.xlabel("X") # x-axis label
plt.ylabel("Y") # y-axis label
plt.xlim((2, 4)) # Note the limits on the x and y axes.
plt.ylim((0, 100))
plt.title("Plotting multiple series") # title of the graph
plt.show()
###Output
_____no_output_____
###Markdown
Though it is possible to use simple lists as the input to be plotted, it is more convenient and flexible to use NumPy arrays.
###Code
x = np.arange(
0.0, 10.0, 0.01
)  # np.arange(start, stop, step); returns an ndarray object.
y = np.sin(x)
plt.plot(x, y) # plot(x,y) is a matplotlib function.
plt.show()
###Output
_____no_output_____
###Markdown
The `hist()` function in the pyplot module of the matplotlib library is used to plot a histogram
###Code
x = np.random.randn(
10000
) # generate 10,000 points from the standard normal distribution (sd=1, mean=0)
plt.hist(x, bins=50) # bins=50 is the number of bins to use.
plt.show()
###Output
_____no_output_____
###Markdown
Bar Charts
###Code
groups = [0, 1, 2, 3, 4]
group_titles = ["A", "B", "C", "D", "E"]
grparray = np.array(groups)
values = [75, 60, 80, 77, 90]
plt.bar(
groups, values, align="center"
) # align='center' centers the bars on the x-axis.
plt.xticks(
groups, group_titles
) # groups is the x-axis, group_titles is the labels for the x-axis.
plt.ylabel("Score") # label the y-axis
plt.title("Test Scores by Class (A-E)") # title is a matlab-like formatting string
plt.show()
plt.barh(
groups, values, align="center", color="red"
)  # color='red' sets the bar color via a keyword argument.
plt.yticks(groups, group_titles) # Note the order of the arguments.
plt.ylabel("Class") # Note the y-axis label is on the left side of the plot.
plt.xlabel("Score") # Note the x-axis label is on the bottom of the plot.
plt.title("Test Scores by Class (A-E)") # Note the title is on the top of the plot.
plt.show()
grpA = (77, 58, 84, 62)
grpB = (99, 92, 88, 80)
grpC = (85, 81, 79, 80)
plt.subplots() # creates a figure with a single subplot
index = np.arange(4)
bar_width = 0.25
rects1 = plt.bar(
    index, grpA, bar_width, color="r", label="Group 1"
)  # the third positional argument sets the bar width; 'r' is a matlab-like color code.
rects2 = plt.bar(
index + bar_width, grpB, bar_width, color="b", label="Group 2"
) # color='b' is a matlab-like formatting string.
rects3 = plt.bar(
index + 2 * bar_width, grpC, bar_width, color="g", label="Group 3"
) # color='g' is a matlab-like formatting string.
plt.xlabel("Subject") # add x-axis label
plt.ylabel("Test Score") # add y-axis label
plt.title("Test Scores by Subject") # add title
plt.xticks(index + bar_width, ("A", "B", "C", "D")) # add x-axis tick labels
plt.legend() # add legend
plt.show()
###Output
_____no_output_____
###Markdown
MarkersYou can use the keyword argument marker to emphasize each point with a specified marker:
###Code
ypoints = np.array([3, 8, 1, 10])
plt.plot(ypoints, marker="o") # plot the points
plt.show()
###Output
_____no_output_____
###Markdown
Marker SizeYou can use the keyword argument markersize or the shorter version, ms to set the size of the markers:
###Code
ypoints = np.array([3, 8, 1, 10])
plt.plot(ypoints, marker="o", ms=20) # marker size
plt.show()
###Output
_____no_output_____
###Markdown
Marker ColorYou can use the keyword argument markeredgecolor or the shorter mec to set the color of the edge of the markers:
###Code
ypoints = np.array([3, 8, 1, 10])
plt.plot(ypoints, marker="o", ms=20, mec="r") # marker size, marker edge color
plt.show()
###Output
_____no_output_____
###Markdown
You can use the keyword argument markerfacecolor or the shorter mfc to set the color inside the edge of the markers:
###Code
ypoints = np.array([3, 8, 1, 10])
plt.plot(ypoints, marker="o", ms=20, mfc="r")
plt.show()
###Output
_____no_output_____
###Markdown
Set Font Properties for Title and LabelsYou can use the fontdict parameter in xlabel(), ylabel(), and title() to set font properties for the title and labels.
###Code
x = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120, 125])
y = np.array([240, 250, 260, 270, 280, 290, 300, 310, 320, 330])
font1 = {"family": "serif", "color": "blue", "size": 20}
font2 = {"family": "serif", "color": "darkred", "size": 15}
plt.title("Sports Watch Data", fontdict=font1) # add to the title
plt.xlabel("Average Pulse", fontdict=font2) # add label to the x-xsie
plt.ylabel("Calorie Burnage", fontdict=font2) # add label to the y-xsie
plt.plot(x, y) # plot the data
plt.show()
###Output
_____no_output_____
###Markdown
Display Multiple PlotsMatplotlib provides a convenient method called subplots to do this. Subplots mean a group of smaller axes (where each axis is a plot) that can exist together within a single figure. Think of a figure as a canvas that holds multiple plots.With the `subplot(nrows, ncols, index)` function you can draw multiple plots in one figure:
###Code
# plot 1:
x = np.array([0, 1, 2, 3])
y = np.array([3, 8, 1, 10])
plt.subplot(1, 2, 1)  # select the first subplot in a 1-row, 2-column grid
plt.plot(x, y) # plot the data
# plot 2:
x = np.array([0, 1, 2, 3])
y = np.array([10, 20, 30, 40])
plt.subplot(1, 2, 2)  # select the second subplot in a 1-row, 2-column grid
plt.plot(x, y) # plot the data
plt.show()
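# Added sketch: the same two plots with the object-oriented `plt.subplots` API
# mentioned above, which returns a figure and an array of axes.
fig, axs = plt.subplots(1, 2)
axs[0].plot(np.array([0, 1, 2, 3]), np.array([3, 8, 1, 10]))
axs[1].plot(np.array([0, 1, 2, 3]), np.array([10, 20, 30, 40]))
plt.show()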
###Output
_____no_output_____ |
1c_convolution.ipynb | ###Markdown
Tutorial 1c. ConvolutionThe spatial dimensions of the output image (width and height) depend on the spatial dimensions of the input image, kernel_size, padding, and striding. In order to build efficient convolutional networks, it's important to understand what the sizes are after each convolutional layer.In this exercise you will derive the dependency between input and output image sizes. For the sake of simplicity we assume that the input tensor is _square_, i.e., width = height = image_size.We will use the nn.Conv2d layer here. We have not discussed what a convolutional layer is yet, but if you set the first two parameters (input channels and output channels) to 1, then this defines a basic convolution.If your code is correct, you should see 'OK'.
###Code
# Imports added so the cell runs standalone (torch and Conv2d are used in the test below).
import torch
from torch.nn import Conv2d


def compute_conv_output_size(image_size, kernel_size, padding, stride):
###########################################################################
# Add code that computes the size of the image after a conv layer. #
###########################################################################
return output_size
# Compare the size of the output of nn.Conv2d with compute_convnet_output_size.
for image_size in range(5, 21, 1):
# Shape: batch x channels x height x width.
input_tensor = torch.zeros((1, 1, image_size, image_size))
for kernel_size in 2, 3, 5, 7:
for padding in 0, 5:
for stride in 1, 2, 3, 4:
if kernel_size >= image_size:
continue
output_tensor = Conv2d(1, 1, kernel_size, stride, padding)(input_tensor)
output_size = output_tensor.size(2)
predicted_output_size = compute_conv_output_size(image_size, kernel_size, padding, stride)
assert output_size == predicted_output_size, (
f"ERROR: the real size is {output_size},"
f" but got {predicted_output_size}."
f"\nimage_size={image_size}"
f" kernel_size={kernel_size}"
f" padding={padding}"
f" stride={stride}"
)
print("OK")
###Output
_____no_output_____
###Markdown
You can now use the function you just implemented to compute the size of the output of a convolution.
###Code
compute_conv_output_size(1, 1, 1, 1)
###Output
_____no_output_____
###Markdown
**Question [optional]:** Implement your own convolution operator **without** using any of PyTorch's (or numpy's) pre-defined convolutional functions.
###Code
def conv_naive(x, w, b, conv_param):
"""
A naive Python implementation of a convolution.
The input consists of an image tensor with height H and
width W. We convolve each input with a filter F, where the filter
has height HH and width WW.
Input:
- x: Input data of shape (H, W)
- w: Filter weights of shape (HH, WW)
- b: Bias for filter
- conv_param: A dictionary with the following keys:
- 'stride': The number of pixels between adjacent receptive fields in the
horizontal and vertical directions.
- 'pad': The number of pixels that will be used to zero-pad the input.
During padding, 'pad' zeros should be placed symmetrically (i.e equally on both sides)
along the height and width axes of the input. Be careful not to modfiy the original
input x directly.
Returns an array.
- out: Output data, of shape (H', W') where H' and W' are given by
H' = 1 + (H + 2 * pad - HH) / stride
W' = 1 + (W + 2 * pad - WW) / stride
"""
out = None
H, W = x.shape
filter_height, filter_width = w.shape
stride, pad = conv_param["stride"], conv_param["pad"]
# Check dimensions.
assert (W + 2 * pad - filter_width) % stride == 0, "width does not work"
assert (H + 2 * pad - filter_height) % stride == 0, "height does not work"
###########################################################################
# TODO: Implement the convolutional forward pass. #
# Hint: you can use the function torch.nn.functional.pad for padding. #
    ###########################################################################
    return out
###Output
_____no_output_____
###Markdown
You can test your implementation by running the following:
###Code
# Make convolution module.
w_shape = (4, 4)
w = torch.linspace(-0.2, 0.3, steps=torch.prod(torch.tensor(w_shape))).reshape(w_shape)
b = torch.linspace(-0.1, 0.2, steps=1)
# Compute output of module and compare against reference values.
x_shape = (4, 4)
x = torch.linspace(-0.1, 0.5, steps=torch.prod(torch.tensor(x_shape))).reshape(x_shape)
out = conv_naive(x, w, b, {"stride": 2, "pad": 1})
correct_out = torch.tensor([[0.156, 0.162], [0.036, -0.054]])
# Compare your output to ours; difference should be around e-8
print("Testing conv_forward_naive")
rel_error = ((out - correct_out) / (out + correct_out + 1e-6)).mean()
print("difference: ", rel_error)
if abs(rel_error) < 1e-6:
print("Nice work! Your implementation of a convolution layer works correctly.")
else:
print(
"Something is wrong. The output was expected to be {} but it was {}".format(
correct_out, out
)
)
###Output
_____no_output_____
###Markdown
**Aside: Image processing via convolutions:**As fun way to gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
###Code
# Imports added so this cell is self-contained (they may already be available elsewhere).
import imageio
import numpy
import skimage.transform
import torch
import torch.nn.functional as F
import matplotlib.pyplot

# Load image of a kitten and a puppy.
kitten_uri = "https://upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Persian_Cat_%28kitten%29.jpg/256px-Persian_Cat_%28kitten%29.jpg"
puppy_uri = "https://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Golde33443.jpg/256px-Golde33443.jpg"
kitten, puppy = imageio.imread(kitten_uri), imageio.imread(puppy_uri)
img_size = 200 # Make this smaller if it runs too slow
x = numpy.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = skimage.transform.resize(puppy, (img_size, img_size)).transpose(
(2, 0, 1)
)
x[1, :, :, :] = skimage.transform.resize(kitten, (img_size, img_size)).transpose(
(2, 0, 1)
)
x = torch.FloatTensor(x)
# Set up a convolutional weights holding 2 filters, each 3x3
w = torch.zeros((2, 3, 3, 3), dtype=torch.float)
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = torch.tensor([[0, 0, 0], [0, 0.3, 0], [0, 0, 0]])
w[0, 1, :, :] = torch.tensor([[0, 0, 0], [0, 0.6, 0], [0, 0, 0]])
w[0, 2, :, :] = torch.tensor([[0, 0, 0], [0, 0.1, 0], [0, 0, 0]])
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = torch.tensor([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = torch.tensor([0, 128], dtype=torch.float)
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out = F.conv2d(x, w, b, stride=1, padding=1).numpy()
def imshow_noax(img, normalize=True):
"""Tiny helper to show images as uint8 and remove axis labels."""
if normalize:
img_max, img_min = numpy.max(img), numpy.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
matplotlib.pyplot.imshow(img.astype("uint8"))
matplotlib.pyplot.gca().axis("off")
# Show the original images and the results of the conv operation
matplotlib.pyplot.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
matplotlib.pyplot.title("Original image")
matplotlib.pyplot.subplot(2, 3, 2)
imshow_noax(out[0, 0])
matplotlib.pyplot.title("Grayscale")
matplotlib.pyplot.subplot(2, 3, 3)
imshow_noax(out[0, 1])
matplotlib.pyplot.title("Edges")
matplotlib.pyplot.subplot(2, 3, 4)
imshow_noax(kitten, normalize=False)
matplotlib.pyplot.subplot(2, 3, 5)
imshow_noax(out[1, 0])
matplotlib.pyplot.subplot(2, 3, 6)
imshow_noax(out[1, 1])
matplotlib.pyplot.show()
###Output
_____no_output_____ |
notebooks/lst/real_data/crab_analysis_src_independent.ipynb | ###Markdown
ON/OFF theta2 and alpha plot. This notebook produces both the theta2 plot and the alpha plot for a set of DL2 files. It also automatically extracts the time duration, given a set of DL2 files merged run-wise. Inputs: merged DL2 run files (run-wise); merged ON and merged OFF DL2 files; run numbers; selection cuts.
###Code
__authors__ = 'Ruben Lopez, Luca Foffano' # [email protected], [email protected]
__version__ = '3.08.2020'
# it provides theta2 plot and estimation of run duration
import warnings
warnings.filterwarnings("ignore",category=DeprecationWarning)
warnings.filterwarnings("ignore",category=FutureWarning)
warnings.filterwarnings("ignore",category=RuntimeWarning)
import time
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from lstchain.reco.utils import reco_source_position_sky, radec_to_camera
from lstchain.tests.test_lstchain import dl2_file, dl2_params_lstcam_key
from astropy.coordinates import SkyCoord
import astropy.units as u
from gammapy.stats import WStatCountsStatistic
plt.rcParams['figure.figsize'] = (12, 12)
plt.rcParams['font.size'] = 20
######################################################################################
# SELECTION CUTS
intensity_cut = 200
leakage_cut = 0.2
wl_cut = 0.01
gammaness_cut = 0.8
n_pixels_cut = 1800 # 1800
r_cut = 1
theta2_cut = 0.1
alpha_cut = 8.
# INPUT FILES
runs_on = [1874, 1875, 1876, 1878, 1879, 1880]
runs_off = [1877, 1881]
# path to the DL2 merged files - per run - e.g. merged-dl2-run1880.h5
path_runs = '../data/crab_on_off/'
# ON and OFF data files (each one obtained merging all ON or OFF files)
on_data_file = '../../data/crab_on_off/crab_on/dl2_Run01874_merged.h5'
off_data_file = '../../data/crab_on_off/crab_off/dl2_Run01881_merged.h5'
# reads files - takes some minutes
on_data = pd.read_hdf(on_data_file, key=dl2_params_lstcam_key)
off_data = pd.read_hdf(off_data_file, key=dl2_params_lstcam_key)
# run duration estimation
print("Evaluating run duration...\n")
on_obstime_start = pd.to_datetime(on_data['dragon_time'][0], unit='s')
on_obstime_end = pd.to_datetime(on_data['dragon_time'][len(on_data)-1], unit='s')
print("duration: {:.1f} min".format((on_obstime_end - on_obstime_start).total_seconds()/60) )
total_obs_duration_on = (on_obstime_end - on_obstime_start).total_seconds()
print("ON data total duration: {:.1f} s = {:.1f} min\n".format(total_obs_duration_on,
total_obs_duration_on/60))
#####################################
off_obstime_start = pd.to_datetime(off_data['dragon_time'][0], unit='s')
off_obstime_end = pd.to_datetime(off_data['dragon_time'][len(off_data)-1], unit='s')
print("duration: {:.1f} min".format((off_obstime_end - off_obstime_start).total_seconds()/60) )
total_obs_duration_off = (off_obstime_end - off_obstime_start).total_seconds()
print("OFF data total duration: {:.1f} s = {:.1f} min\n".format(total_obs_duration_off,
total_obs_duration_off/60))
#####################################################################
# ON computation
source_position = [0, 0]  # assuming the source is located at the camera center
m_to_deg = np.rad2deg(np.arctan(1./28))  # conversion from m in the camera plane to deg (28 m focal length)
Tot_Non = np.shape(on_data)[0]
print("Total number of ON events", Tot_Non)
selection_cuts_on_data = np.array([
(on_data['leakage_intensity_width_2'] < leakage_cut)
& (on_data['intensity'] > intensity_cut)
& (on_data['n_pixels'] < n_pixels_cut)
& (on_data['wl'] > wl_cut)
& (on_data['gammaness'] > gammaness_cut)
& (on_data['r'] < r_cut)
])[0]
print('Number of ON events after cuts', np.sum(selection_cuts_on_data))
reco_src_x = on_data['reco_src_x'][selection_cuts_on_data]
reco_src_y = on_data['reco_src_y'][selection_cuts_on_data]
on_data['theta2'] = m_to_deg**2 * ((source_position[0] - reco_src_x)**2 + (source_position[1] - reco_src_y)**2)
theta2 = np.array(on_data['theta2'])
#####################################################################################
# OFF computation
selection_cuts_off_data = np.array([
(off_data['leakage_intensity_width_2'] < leakage_cut)
& (off_data['intensity'] > intensity_cut)
& (off_data['n_pixels'] < n_pixels_cut)
& (off_data['wl'] > wl_cut)
& (off_data['gammaness'] > gammaness_cut)
& (off_data['r'] < r_cut)
])[0]
Tot_Noff = np.shape(off_data)[0]
print("Total number of OFF events", Tot_Noff)
print('Number of OFF events after cuts', np.sum(selection_cuts_off_data))
reco_src_x_off = off_data['reco_src_x'][selection_cuts_off_data]
reco_src_y_off = off_data['reco_src_y'][selection_cuts_off_data]
off_data['theta2'] = m_to_deg**2 * ((reco_src_x_off)**2 + (reco_src_y_off)**2)
theta2_off = np.array(off_data['theta2'])
# normalization theta2
norm_range_th2_min = 0.5
norm_range_th2_max = 2.
Non_norm = np.sum((theta2 > norm_range_th2_min) & (theta2 < norm_range_th2_max))
Noff_norm = np.sum((theta2_off > norm_range_th2_min) & (theta2_off < norm_range_th2_max))
Norm_theta2 = Non_norm / Noff_norm
print("Normalization: {:.2f}".format(Norm_theta2))
Non = np.sum(theta2 < theta2_cut)
Noff = np.sum(theta2_off < theta2_cut)
Nex = Non - Noff * Norm_theta2
print("Non, Noff, Nex = {:.0f}, {:.0f}, {:.0f}".format(Non, Noff,Nex))
S = Nex / np.sqrt(Noff)
stat = WStatCountsStatistic(Non, Noff, Norm_theta2)
lima_significance = stat.sqrt_ts.item()
#print("\nSignificance: {:.2f}".format(S))
print("Significance Li&Ma: {:.2f}".format(lima_significance))
# theta2 plot
nbins = 100
range_max = 2 # deg2
########################################################
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
h_on = ax.hist(theta2, label = 'ON data', bins=nbins, alpha=0.2, color = 'blue', range=[0,range_max]) # color = 'C3',
h_off = ax.hist(theta2_off, weights = Norm_theta2 * np.ones(len(theta2_off)), range=[0,range_max],
histtype='step', label = 'OFF data', bins=nbins, alpha=0.5, color = 'k')
ax.annotate(s=f'Significance Li&Ma = {lima_significance:.2f}' \
f'$\sigma$\nRate = {Nex/total_obs_duration_on * 60:.1f}' \
f'$\gamma$/min \nObstime = {total_obs_duration_on:.1f} s\nNon = {Non} Noff = {Noff} Norm_theta2 = {Norm_theta2:.2f}',
xy=(np.max(h_on[1]/4), np.max(h_on[0]/6*5)), size = 20, color = 'r')
ax.vlines(x = theta2_cut, ymin = 0, ymax = np.max(h_on[0]*1.2), linestyle='--', linewidth = 2, color = 'black', alpha = 0.2)
ax.set_xlabel(r'$\theta^2$ [deg$^2$]')
ax.set_ylabel(r'Number of events')
ax.set_ylim(0,np.max(h_on[0]*1.2))
ax.legend()
###Output
/Users/rlopezcoto/opt/anaconda3/envs/lst-dev/lib/python3.7/site-packages/ipykernel_launcher.py:17: MatplotlibDeprecationWarning: The 's' parameter of annotate() has been renamed 'text' since Matplotlib 3.3; support for the old name will be dropped two minor releases later.
|
examples/Example Data Sets.ipynb | ###Markdown
Chicago set. CSV file available from https://catalog.data.gov/dataset/crimes-one-year-prior-to-present-e171f
###Code
import open_cp.sources.chicago as chicago
points = chicago.default_burglary_data()
points
type(points)
len(points.timestamps), points.time_range
bbox = points.bounding_box
print("X coord range:", bbox.xmin, bbox.xmax)
print("Y coord range:", bbox.ymin, bbox.ymax)
print(bbox.aspect_ratio)
_, ax = plt.subplots(figsize=(10,10 * bbox.aspect_ratio))
ax.scatter(points.coords[0], points.coords[1], alpha=0.1, marker="o", s=1)
###Output
_____no_output_____
###Markdown
As Chicago is an American city, most streets run North-South or East-West. Further, the data is geocoded to the centre of the "block" to anonymise it. (Though this is slightly inconsistent, if one looks closely at the raw CSV file.) In the plot above: - the clump at the upper left is the airport. - We see a large clump of burglary data downtown. - It would be interesting to know what causes the visible lines running north-north-west from downtown.
###Code
mask = ( (points.xcoords >= 355000) & (points.xcoords <= 365000) &
(points.ycoords >= 575000) & (points.ycoords <= 585000) )
downtown = points[mask]
bbox = downtown.bounding_box
print("X coord range:", bbox.xmin, bbox.xmax)
print("Y coord range:", bbox.ymin, bbox.ymax)
_, ax = plt.subplots(figsize=(5, 5 * bbox.aspect_ratio))
ax.scatter(downtown.coords[0], downtown.coords[1], alpha=0.1, marker="o", s=1)
###Output
_____no_output_____
###Markdown
UK Crime data. We use an example of January 2017 from West Yorkshire.
###Code
import open_cp.sources.ukpolice as ukpolice
points = ukpolice.default_burglary_data()
len(points.timestamps)
bbox = points.bounding_box
fig, ax = plt.subplots(figsize=(10, 10 * bbox.aspect_ratio))
ax.scatter(points.xcoords, points.ycoords, s=10, alpha=0.2)
###Output
_____no_output_____
###Markdown
These are longitude / latitude points, which distort distance. Assuming you have `pyproj` installed, you can project. For the UK, we use [British National Grid](http://www.spatialreference.org/ref/epsg/osgb36-british-national-grid-odn-height/)
###Code
import open_cp
projected_points = open_cp.data.points_from_lon_lat(points, epsg=7405)
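# Aside (hypothetical, illustration only): roughly the same projection written with raw
# pyproj, assuming pyproj >= 2.1 and using EPSG:27700, the horizontal part of the
# compound CRS 7405 passed to the helper above.
from pyproj import Transformer
_to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)
_bng_x, _bng_y = _to_bng.transform(points.xcoords, points.ycoords)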
bbox = projected_points.bounding_box
fig, ax = plt.subplots(figsize=(10, 10 * bbox.aspect_ratio))
ax.scatter(projected_points.xcoords, projected_points.ycoords, s=10, alpha=0.2)
###Output
_____no_output_____
###Markdown
Random data
###Code
import open_cp.sources.random as random
import datetime
region = open_cp.RectangularRegion(390000, 450000, 410000, 450000)
points = random.random_uniform(region, datetime.date(2017,1,1), datetime.date(2017,3,1), 1000)
points.time_range
bbox = points.bounding_box
fig, ax = plt.subplots(figsize=(10, 10 * bbox.aspect_ratio))
ax.scatter(*points.coords, s=10, alpha=0.2)
###Output
_____no_output_____
###Markdown
If we have scipy installed, we can quickly use a 2D Gaussian kernel density estimation to get an estimate of the "risk intensity" from the real West Yorkshire data.
###Code
import scipy.stats
kernel = scipy.stats.gaussian_kde(projected_points.coords)
X, Y = np.mgrid[bbox.xmin:bbox.xmax:100j, bbox.ymin:bbox.ymax:100j]
positions = np.vstack([X.ravel(), Y.ravel()])
Z = np.reshape(kernel(positions), X.shape)
np.max(Z)
plt.imshow(np.rot90(Z))
sampler = random.KernelSampler(region, kernel, 4e-9)
points = random.random_spatial(sampler, datetime.date(2017,1,1), datetime.date(2017,3,1), 2350)
fig, ax = plt.subplots(ncols=2, figsize=(16, 6))
ax[0].scatter(*projected_points.coords, s=10, alpha=0.2)
ax[1].scatter(*points.coords, s=10, alpha=0.2)
for i in [0, 1]:
ax[i].set_aspect(bbox.aspect_ratio)
ax[i].set(xlim=[bbox.xmin, bbox.xmax], ylim=[bbox.ymin, bbox.ymax])
ax[0].set_title("Real data, Jan 2017")
_ = ax[1].set_title("Gaussian KDE sample")
###Output
_____no_output_____
###Markdown
The real plot still looks somewhat different to the random test data, suggesting that a simple fixed bandwidth Gaussian KDE is not appropriate (which we already knew...) Using a nearest neighbour variable bandwidth Gaussian KDE
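The rough idea behind a nearest neighbour variable bandwidth KDE (a sketch of the concept, not necessarily the exact open_cp implementation): each point gets its own Gaussian bandwidth set by the distance to its k-th nearest neighbour, so kernels are narrow in dense regions and wide in sparse ones. A minimal helper for those per-point distances might look like this:
###Code
# Hypothetical illustration only; open_cp.kernels provides the real estimator used below.
from scipy.spatial import cKDTree

def knn_bandwidths(coords, k):
    """coords has shape (2, N) as used by open_cp; returns one bandwidth per point."""
    pts = coords.T
    tree = cKDTree(pts)
    # ask for k + 1 neighbours because the nearest "neighbour" of each point is itself
    dists, _ = tree.query(pts, k=k + 1)
    return dists[:, -1]
###Output
_____no_output_____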
###Code
import open_cp.kernels
kernel = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(projected_points.coords, k=10)
sampler = random.KernelSampler(region, kernel, 4e-9)
points10 = random.random_spatial(sampler, datetime.date(2017,1,1), datetime.date(2017,3,1), 2350)
kernel = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(projected_points.coords, k=25)
sampler = random.KernelSampler(region, kernel, 4e-9)
points25 = random.random_spatial(sampler, datetime.date(2017,1,1), datetime.date(2017,3,1), 2350)
kernel = open_cp.kernels.kth_nearest_neighbour_gaussian_kde(projected_points.coords, k=50)
sampler = random.KernelSampler(region, kernel, 4e-9)
points50 = random.random_spatial(sampler, datetime.date(2017,1,1), datetime.date(2017,3,1), 2350)
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(15, 9))
ax[0,0].scatter(*projected_points.coords, s=10, alpha=0.2)
ax[0,1].scatter(*points10.coords, s=10, alpha=0.2)
ax[1,0].scatter(*points25.coords, s=10, alpha=0.2)
ax[1,1].scatter(*points50.coords, s=10, alpha=0.2)
for a in ax.ravel():
a.set_aspect(bbox.aspect_ratio)
a.set(xlim=[bbox.xmin, bbox.xmax], ylim=[bbox.ymin, bbox.ymax])
ax[0,0].set_title("Real data, Jan 2017")
ax[0,1].set_title("k=10 nearest neighbour sample")
ax[1,0].set_title("k=25 nearest neighbour sample")
ax[1,1].set_title("k=50 nearest neighbour sample")
fig.tight_layout()
None
###Output
_____no_output_____
###Markdown
Visually, having a rather narrow bandwidth seems to look better. I suspect that to produce more realistic simulations, the _geography_ of the data needs to be investigated: i.e. locate the points onto buildings and into the real street network. Self-exciting point process sampler Inhomogeneous Poisson process
###Code
import open_cp.sources.sepp as sepp
region = open_cp.RectangularRegion(0,100,0,100)
kernel = sepp.PoissonTimeGaussianSpace(1, [50, 50], [150, 25], 0.8)
sampler = sepp.InhomogeneousPoisson(region, kernel)
points = sampler.sample(0, 100)
fig, ax = plt.subplots(ncols=2, figsize=(16, 6))
ax[0].scatter(points[1], points[2])
ax[0].set_title("Space location")
ax[0].set_aspect(1)
ax[0].set_xlim(0,100)
ax[0].set_ylim(0,100)
ax[1].scatter(points[0], points[1])
ax[1].set_xlabel("time")
ax[1].set_ylabel("x coord")
ax[1].set_title("X location against time")
None
###Output
_____no_output_____
###Markdown
The coordinates in space give samples from a 2D correlated Gaussian distribution, as we expect.If we do this repeatedly, then the time coordinates along should give a poisson process.
###Code
counts = []
window = []
for _ in range(10000):
times = sampler.sample(0,100)[0]
counts.append(len(times))
window.append(np.sum(times <= 20))
fig, ax = plt.subplots(ncols=2, figsize=(16, 4))
ax[0].hist(counts)
ax[0].set_title("Number of points")
ax[1].hist(window)
ax[1].set_title("In window [0,20]")
None
###Output
_____no_output_____
###Markdown
Inhomogeneous Poisson process via factorisation. If the intensity function of the Poisson process has the form $\lambda(t,x,y) = \nu(t)\mu(x,y)$ then we can simulate the time-only Poisson process with density $\nu$, and then sample the space dimension as if it were a "mark" (see the notion of a "marked Poisson process" in the literature). If $\mu$ is a probability density of a standard type, this is much faster, because we can very easily draw samples for the space dimensions.
###Code
time_kernel = sepp.Exponential(exp_rate=1, total_rate=10)
space_sampler = sepp.GaussianSpaceSampler([50, 50], [150, 25], 0.8)
sampler = sepp.InhomogeneousPoissonFactors(time_kernel, space_sampler)
points = sampler.sample(0, 100)
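# Aside: a hand-rolled NumPy sketch of the factorised sampling idea (illustration only,
# not the sepp implementation). Assumptions: total_rate is the expected number of events,
# exp_rate the decay rate of nu(t) = total_rate * exp_rate * exp(-exp_rate * t), and
# [150, 25] / 0.8 above are the variances / correlation of the spatial Gaussian.
demo_T, demo_exp_rate, demo_total = 100, 1.0, 10.0
demo_n = np.random.poisson(demo_total * (1 - np.exp(-demo_exp_rate * demo_T)))  # N ~ Poisson(integral of nu)
# given N, event times are i.i.d. with density proportional to nu, i.e. a truncated
# exponential on [0, T], sampled here by inverting its CDF
demo_u = np.random.random(demo_n)
demo_times = np.sort(-np.log(1 - demo_u * (1 - np.exp(-demo_exp_rate * demo_T))) / demo_exp_rate)
# space "marks": a correlated 2D Gaussian drawn independently of the times
demo_cov = 0.8 * np.sqrt(150 * 25)
demo_xy = np.random.multivariate_normal([50, 50], [[150, demo_cov], [demo_cov, 25]], size=demo_n)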
fig, ax = plt.subplots(ncols=2, figsize=(16, 6))
ax[0].scatter(points[1], points[2])
ax[0].set_title("Space location")
ax[0].set_aspect(1)
ax[0].set_xlim(0,100)
ax[0].set_ylim(0,100)
ax[1].scatter(points[0], points[1])
ax[1].set_xlabel("time")
ax[1].set_ylabel("x coord")
ax[1].set_title("X location against time")
None
###Output
_____no_output_____
###Markdown
Self-excited point process sampler. You need to pass two intensity functions (aka kernels), one for the background events and one for the triggered events. In the following example, the background sampler has as its time component a constant-rate Poisson process and a Gaussian space density centred at (50,50). The trigger kernel has an exponential density in time (so on average each event triggers one further event) and a space kernel which is deliberately biased to jump around 5 units in the x direction. We can hence visualise the cascade of triggered events as a rightward drift on the first graph and an upward drift on the second graph.
###Code
background_sampler = sepp.InhomogeneousPoissonFactors(sepp.HomogeneousPoisson(1),
sepp.GaussianSpaceSampler([50,50], [50,50], 0))
time_kernel = sepp.Exponential(exp_rate=1, total_rate=1)
space_sampler = sepp.GaussianSpaceSampler([5, 0], [1, 1], 0)
trigger_sampler = sepp.InhomogeneousPoissonFactors(time_kernel, space_sampler)
sampler = sepp.SelfExcitingPointProcess(background_sampler, trigger_sampler)
points = sampler.sample(0,10)
fig, ax = plt.subplots(ncols=2, figsize=(16, 6))
ax[0].scatter(points[1], points[2])
ax[0].set_title("Space location")
ax[0].set_aspect(1)
ax[0].set_xlim(0,100)
ax[0].set_ylim(0,100)
ax[1].scatter(points[0], points[1])
ax[1].set_xlabel("time")
ax[1].set_ylabel("x coord")
ax[1].set_title("X location against time")
None
###Output
_____no_output_____ |
Stock_Analysis_multidim_1stock_weekly.ipynb | ###Markdown
Analysis of a single stock - simulation over the course of a year. Goal: this script simulates a year of weekly pred/close determinations and tests, for any given stock, whether it is better to invest a consistent amount or to buy in more/less depending on the current performance of the stock. Take 1 stock and run a trendline through multiple 1-year cycles, creating a linear prediction to be applied weekly. Assess the theoretical performance of adjusting weekly contributions as compared to contributing a consistent amount every week.
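As a hypothetical worked example of the ratio used below: if this week's close is 100 and the one-year trendline predicts 105, then pred/close = 1.05, so the proportional strategy contributes 10 * 1.05 = 10.50 that week (and the squared variant 10 * 1.05^2, about 11.03) instead of a flat 10.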
###Code
import yfinance as yf
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Arguments Scenarios Example value
# period date period to download 1d, 5d, 1mo, 3mo, 6mo, 1y, 2y, 5y, 10y, ytd, max
# interval data interval. If it’s intraday data, the interval needs to be set within 60 days 1m, 2m, 5m, 15m, 30m, 60m, 90m, 1h, 1d, 5d, 1wk, 1mo, 3mo
# start If period is not set- Download start date string (YYYY-MM-DD) or datetime 2020-03-18
# end If period is not set - Download end date string (YYYY-MM-DD) or datetime 2020-03-19
# prepost Boolean value to include Pre and Post market data Default is False
# auto_adjust Boolean value to adjust all OHLC Default is True
# actions Boolean value download stock dividends and stock splits events Default is True
# pull data
# note: you can't choose a stock with less than 2 years of history
# AAPL, AMD, AMZN, CRM, GOOG, INTC, MDB, MSFT, NVDA, QQQ, SBUX, SQ, TSLA, TSM
stock = yf.Ticker("amd")
df = stock.history(period="2y")
#df = stock.history(period="7d", interval = "1m")
df = pd.DataFrame(df['Close'])
df = df.dropna() #in case the first row generates as nulls
df
# add index to df
#df = pd.DataFrame(df['Close'])
add_index = np.arange(1,len(df)+1)
df['Index'] = add_index
df
# create 50 dataframes in a dictionary, each 260 days: dataframes['data0'] - dataframes['data49']
# 0 is the most recent 260 days, 49 is the oldest
# 260 days isn't exactly 1 trading year, but I think it's close enough
dataframes = {}
x = (max(df['Index']))-260
y = max(df['Index'])
for i in range(50):
dataframes['data' + str(i)] = df.iloc[x:y]
x -= 5
y -= 5
# show the newest and oldest dataframes
print(dataframes['data0'])
print(dataframes['data49'])
# plot data with a trendline - most recent 260 days
x = dataframes['data0']['Index']
y = dataframes['data0']['Close']
plt.plot(x, y)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b)
# plot data with a trendline - the oldest 260 days
x = dataframes['data49']['Index']
y = dataframes['data49']['Close']
plt.plot(x, y)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b)
plt.show()
# plot only trendlines, weekly, each line representing 1 year of data
# if the movement is too stable, these graphs won't be useable
for i in range(len(dataframes)):
x = dataframes['data' + str(i)]['Index']
y = dataframes['data' + str(i)]['Close']
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b)
plt.show()
for i in range(len(dataframes)):
x = dataframes['data' + str(i)]['Index']
y = dataframes['data' + str(i)]['Close']
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b)
plt.plot(range(len(df)), df['Close'])
plt.show()
# create pred and pred/close list for each of the 50 dataframes
k = len(dataframes)
for e in range(k):
nlist = []
ylist = []
y = dataframes['data' + str(e)]['Close']
for i in range(1,len(dataframes['data0'])+1): # create pred
x = range(260)
m, b = np.polyfit(x, y, 1)
d = m*i+b
nlist.append(d)
dataframes['data' + str(e)]['pred'] = nlist
for i in range(1,len(dataframes['data0'])+1): # create pred/close
d = (dataframes['data' + str(e)]['pred'].iloc[i-1])/(dataframes['data' + str(e)]['Close'].iloc[i-1])
ylist.append(d)
dataframes['data' + str(e)]['pred/close'] = ylist
print(dataframes['data49'])
print(dataframes['data0'])
# pull the last 'Close' and pred/close' from each dataframe in dataframes and make a new dataframe out of it
# each row is the last close price in a 1 year period and the final pred/close derived from a 1 year trendline
# the rows have a 51 week overlap and are separated by 1 week
nlist = []
ylist = []
k = len(dataframes['data0'])
for e in reversed(range(len(dataframes))):
nlist.append(round(dataframes['data' + str(e)]['pred/close'].iloc[k-1],4))
ylist.append(round(dataframes['data' + str(e)]['Close'].iloc[k-1],4))
df = pd.DataFrame(list(zip(ylist, nlist)), columns=['Close', 'pred/close'])
print(df.head())
print('')
print(df.tail())
# determine the weeks where pred/close is >1 and therefore they are better weeks to buy in
# steady stocks could be at about 50/50 but stocks exponentially rising could have close to 0 pred/close > 1
nlist = []
for i in range(len(df)):
if df['pred/close'].iloc[i] >= 1:
nlist.append(1)
else:
nlist.append(0)
df['>1'] = nlist
print('total weeks:', len(df['>1']))
print('number above 1:', sum(df['>1']))
print('')
print(df)
# create multiple investment strategies and simulate the returns over 1 year
# the strategy that ends up with the most stock for the same amount of money is ultimately the best
print('baseline - contribute 10 every week')
print('opt1 - buy in every week proportional to the pred/close variable')
print('opt2 - buy in every week proportional to the pred/close variable - squared')
print('opt3 - contribute 20 only on the weeks where pred/close is >=1')
print('opt4 - buy in every week inversely proportional to the pred/close variable - as a fact check (should be lower)')
invest = 500 # max amount to contribute
wkly_contrib = 10 # how much to contribute each week
df['pred/close2'] = round(df['pred/close']**2,4) # make the value differences a little more pronounced
# baseline - buy in $10 weekly no matter what - baseline
df['baseline'] = 0
df['baseline_stk'] = 0
v = invest
for i in range(len(df)):
df['baseline'].iloc[i] = wkly_contrib
df['baseline_stk'].iloc[i] = round(df['baseline'].iloc[i]/df['Close'].iloc[i],4)
v -= wkly_contrib
if v < wkly_contrib:
break
baseline_left = v
# opt1 - buy in every week but proportionally to the pred/close
df['opt1'] = 0
df['opt1_stk'] = 0
v = invest
for i in range(len(df)):
df['opt1'].iloc[i] = wkly_contrib*df['pred/close'].iloc[i]
df['opt1_stk'].iloc[i] = round(df['opt1'].iloc[i]/df['Close'].iloc[i],4)
v -= wkly_contrib*df['pred/close'].iloc[i]
if i == (len(df)-1):
t = i
else:
t = i+1
if v < wkly_contrib*df['pred/close'].iloc[t]:
break
opt1_left = v
# opt2 - buy in every week but proportionally to the pred/close and pred/close is squared to be more dramatic
df['opt2'] = 0
df['opt2_stk'] = 0
v = invest
for i in range(len(df)):
df['opt2'].iloc[i] = wkly_contrib*df['pred/close2'].iloc[i]
df['opt2_stk'].iloc[i] = round(df['opt2'].iloc[i]/df['Close'].iloc[i],4)
v -= wkly_contrib*df['pred/close2'].iloc[i]
if i == (len(df)-1):
t = i
else:
t = i+1
if v < wkly_contrib*df['pred/close2'].iloc[t]:
break
opt2_left = v
# opt3 - buy in every week but proportionally to the pred/close & buy 0 on days <1
df['opt3'] = 0
df['opt3_stk'] = 0
v = invest
for i in range(len(df)):
df['opt3'].iloc[i] = wkly_contrib*2*df['>1'].iloc[i]
df['opt3_stk'].iloc[i] = round(df['opt3'].iloc[i]/df['Close'].iloc[i],4)
v -= wkly_contrib*2*df['>1'].iloc[i]
if i == (len(df)-1):
t = i
else:
t = i+1
if v < wkly_contrib*2*df['>1'].iloc[t]:
break
opt3_left = v
# opt4 - buy in every week but proportionally to the inverse of pred/close - to verify my method
df['opt4'] = 0
df['opt4_stk'] = 0
v = invest
for i in range(len(df)):
df['opt4'].iloc[i] = round(wkly_contrib/df['pred/close'].iloc[i],4)
df['opt4_stk'].iloc[i] = round(df['opt4'].iloc[i]/df['Close'].iloc[i],4)
    v -= wkly_contrib/df['pred/close'].iloc[i]  # deduct the inverse-proportional amount actually contributed this week
if i == (len(df)-1):
t = i
else:
t = i+1
if v < wkly_contrib/df['pred/close'].iloc[t]:
break
opt4_left = v
d = {'name': ['baseline', 'op1', 'op2', 'op3', 'op4']
,'bought_in': [sum(df['baseline']), sum(df['opt1']), sum(df['opt2']), sum(df['opt3']),sum(df['opt4'])]
,'leftover': [baseline_left, opt1_left, opt2_left, opt3_left, opt4_left]
,'stocks_held': [round(sum(df['baseline_stk']),4), round(sum(df['opt1_stk']),4), round(sum(df['opt2_stk']),4),
round(sum(df['opt3_stk']),4), round(sum(df['opt4_stk']),4)]
,'cost_per_stock': [sum(df['baseline'])/sum(df['baseline_stk']), sum(df['opt1'])/sum(df['opt1_stk']),
sum(df['opt2'])/sum(df['opt2_stk']), sum(df['opt3'])/sum(df['opt3_stk']),
sum(df['opt4'])/sum(df['opt4_stk'])]
,'profit': [(sum(df['baseline_stk']) * df['Close'].iloc[49]) - sum(df['baseline']),
(sum(df['opt1_stk']) * df['Close'].iloc[49]) - sum(df['opt1']),
(sum(df['opt2_stk']) * df['Close'].iloc[49]) - sum(df['opt2']),
(sum(df['opt3_stk']) * df['Close'].iloc[49]) - sum(df['opt3']),
(sum(df['opt4_stk']) * df['Close'].iloc[49]) - sum(df['opt4'])]
}
df2 = pd.DataFrame(data=d)
df2['diff'] = 0
df2['diff'].iloc[1] = df2['profit'].iloc[1]-df2['profit'].iloc[0]
df2['diff'].iloc[2] = df2['profit'].iloc[2]-df2['profit'].iloc[0]
df2['diff'].iloc[3] = df2['profit'].iloc[3]-df2['profit'].iloc[0]
df2['diff'].iloc[4] = df2['profit'].iloc[4]-df2['profit'].iloc[0]
df2['%_diff'] = (df2['diff']/df2['profit'])*100
df2
###Output
baseline - contribute 10 every week
opt1 - buy in every week proportional to the pred/close variable
opt2 - buy in every week proportional to the pred/close variable - squared
opt3 - contribute 20 only on the weeks where pred/close is >=1
opt4 - buy in every week inversely proportional to the pred/close variable - as a fact check (should be lower)
###Markdown
Final notes: Stocks going up parabolically will almost never be above 1, so I can't simply not buy in when pred/close is not above 1. GOOG is like this as of 9/3/2021; opt3 can't be used. This kind of stock will also produce worse than baseline profits because opt1 and opt2 won't be investing the full 500 over the course of the year. Stocks in a big S-curve, flat ~ spike ~ flat, will only have a pred/close above 1 in the latter half of the year, so again, I can't contribute nothing. TSM and TSLA are like this as of 9/3/2021. Results as compared to the baseline (on 9/3/2021): aapl - opt1: +3.39, opt2: +7.28; amd - opt1: +11.63, opt2: +24.52; amzn - opt1: +1.14, opt2: +3.85; crm - opt1: +5.99, opt2: +11.65; goog - opt1: -8.16, opt2: -16.01; intc - opt1: +3.74, opt2: +7.26; mdb - opt1: +17.92, opt2: +37.92 (*huge spike on 9/3/2021, excluded as misleadingly high); msft - opt1: +2.83, opt2: +5.81; nvda - opt1: +12.44, opt2: +28.15; qqq - opt1: +0.30, opt2: +0.68; sbux - opt1: -5.73, opt2: -10.70; sq - opt1: -3.00, opt2: -3.23; tsla - opt1: +0.15, opt2: +3.38; tsm - opt1: -4.02, opt2: -7.44. Excluding MDB (due to misleadingly high extra profits), opt1 nets +20.7 and opt2 nets +55.2. Investing 500 into 13 stocks over 1 year (6500 total investment) gives 55.2 extra profit over baseline, about 0.85% better than baseline.
###Code
df
###Output
_____no_output_____ |
Dr. Semmelweis and the Discovery of Handwashing/notebook.ipynb | ###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly['proportion_deaths']=yearly.deaths/yearly.births
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly['clinic']=='clinic 1']
yearly2 = yearly[yearly['clinic']=='clinic 2']
# Print out yearly1
# ... YOUR CODE FOR TASK 2 ...
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
ax=yearly1.plot(y='proportion_deaths', x='year')
yearly2.plot(y='proportion_deaths', x='year', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates=['date'])
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly['proportion_deaths']=monthly.deaths/monthly.births
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
monthly.head()
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# ... YOUR CODE FOR TASK 5 ...
ax=monthly.plot(x='date', y='proportion_deaths')
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly.date<handwashing_start]
after_washing = monthly[monthly.date>=handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax= before_washing.plot(x='date', y='proportion_deaths')
after_washing.plot(x='date', y='proportion_deaths', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing.proportion_deaths
after_proportion = after_washing.proportion_deaths
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean()-boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
yearly["proportion_deaths"] = yearly.deaths/yearly.births
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly['clinic'] == 'clinic 1']
yearly2 = yearly[yearly['clinic'] == 'clinic 2']
# Print out yearly1
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = yearly1.plot(x='year', y='proportion_deaths', label='Clinic 1')
yearly2.plot(x='year', y='proportion_deaths', label='Clinic 2', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates=['date'])
# Calculate proportion of deaths per no. births
monthly['proportion_deaths'] = monthly.deaths/monthly.births
# Print out the first rows in monthly
monthly.head()
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# This makes plots appear in the notebook
%matplotlib inline
ax = monthly.plot(x='date', y='proportion_deaths', label='Clinic 1')
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly['date'] < handwashing_start]
after_washing = monthly[monthly['date'] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x='date', y='proportion_deaths', label='Before handwashing')
after_washing.plot(x='date', y='proportion_deaths', label='After handwashing', ax=ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv("datasets/yearly_deaths_by_clinic.csv")
# Print out yearly
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
yearly["proportion_deaths"] = yearly["deaths"] / yearly["births"]
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly["clinic"] == "clinic 1"]
yearly2 = yearly[yearly["clinic"] == "clinic 2"]
# Print out yearly1
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
import matplotlib.pyplot as plt
# Plot yearly proportion of deaths at the two clinics
ax = yearly1.plot(x="year", y="proportion_deaths", label="Clinic 1")
ax = yearly2.plot(x="year", y="proportion_deaths", label="Clinic 2", ax=ax)
ax.set_xlabel("Yearly")
ax.set_ylabel("Proportion deaths")
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates=["date"])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly["deaths"] / monthly["births"]
# Print out the first rows in monthly
monthly.head()
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ax = monthly.plot(x="date", y="proportion_deaths", label="Clinic 1")
ax.set_xlabel("Date")
ax.set_ylabel("Proportion deaths")
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly["date"] < handwashing_start]
after_washing = monthly[monthly["date"] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x="date", y="proportion_deaths", label="Before Handwashing")
ax = after_washing.plot(x="date", y="proportion_deaths", label="After Handwashing", ax=ax)
ax.set_xlabel("Yearly")
ax.set_ylabel("Proportion deaths")
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
import numpy as np
before_proportion = before_washing["proportion_deaths"]
after_proportion = after_washing["proportion_deaths"]
mean_diff = np.mean(after_proportion) - np.mean(before_proportion)
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(np.mean(boot_after) - np.mean(boot_before))
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = np.percentile(boot_mean_diff, [2.5, 97.5])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
yearly.head(10)
# Let's see clinic 1 and 2 value_counts
yearly['clinic'].value_counts()
###Output
_____no_output_____
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
yearly['proportion_deaths'] = yearly['deaths'] / yearly['births']
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly['clinic'] == 'clinic 1']
yearly2 = yearly[yearly['clinic'] == 'clinic 2']
# Print out yearly1
yearly1
###Output
_____no_output_____
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern…
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = yearly1.plot(x='year' , y ='proportion_deaths',label='Clinic 1')
yearly2.plot(x ='year',y='proportion_deaths' , label = 'Clinic 2' , ax = ax)
ax.set_ylabel('Proportion of deaths')
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv' , parse_dates=['date'])
monthly.info()
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly['deaths'] / monthly['births']
# Print out the first rows in monthly
monthly.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 98 entries, 0 to 97
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 98 non-null datetime64[ns]
1 births 98 non-null int64
2 deaths 98 non-null int64
dtypes: datetime64[ns](1), int64(2)
memory usage: 2.4 KB
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ax = monthly.plot(x='date' , y ='proportion_deaths')
ax.set_ylabel('Proportion of deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
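# pd.to_datetime returns a Timestamp, so the comparisons against the parsed
# 'date' column below produce boolean masks that split monthly into the two periods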
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly['date'] < handwashing_start]
after_washing = monthly[monthly['date'] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x='date', y='proportion_deaths', label='Before handwashing')
after_washing.plot(x='date', y='proportion_deaths', label='After handwashing', ax=ax)
ax.set_ylabel('Proportion of deaths')
###Output
_____no_output_____
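###Markdown
A small optional aside, not part of the original task: instead of splitting the data, the handwashing date can be marked directly on a single plot. This is a minimal sketch using plain matplotlib, assuming matplotlib is importable in this environment (it already backs the pandas plots above); monthly and handwashing_start come from the cells above.
###Code
# Hypothetical alternative: highlight the handwashing date with a vertical line
import matplotlib.pyplot as plt
plt.plot(monthly['date'], monthly['proportion_deaths'], label='Monthly proportion of deaths')
plt.axvline(handwashing_start, color='grey', linestyle='--', label='Handwashing made mandatory')
plt.ylabel('Proportion of deaths')
plt.legend()
plt.show()
###Output
_____no_output_____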
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
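# Each iteration resamples both series with replacement (frac=1 keeps the
# original sample size), so the spread of the 3000 simulated mean differences
# approximates the sampling uncertainty of mean_diff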
boot_mean_diff = []
for i in range(3000):
    boot_before = before_proportion.sample(frac=1, replace=True)
    boot_after = after_proportion.sample(frac=1, replace=True)
    boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025,0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
yearly['proportion_deaths'] = yearly['deaths'] / yearly['births']
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly['clinic'] == 'clinic 1']
yearly2 = yearly[yearly['clinic'] == 'clinic 2']
# Print out yearly1
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = yearly1.plot(x='year', y='proportion_deaths', label='Clinic 1')
yearly2.plot(x='year', y='proportion_deaths', label='Clinic 2', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv',parse_dates = ['date'])
# Calculate proportion of deaths per no. births
monthly['proportion_deaths'] = monthly['deaths'] / monthly['births']
# Print out the first rows in monthly
print(monthly.head())
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
1 1841-02-01 239 18 0.075314
2 1841-03-01 277 12 0.043321
3 1841-04-01 255 4 0.015686
4 1841-05-01 255 2 0.007843
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ax = monthly.plot(x='date', y='proportion_deaths')
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly['date'] < handwashing_start]
after_washing = monthly[monthly['date'] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x='date', y='proportion_deaths', label='Before Washing')
after_washing.plot(x='date', y='proportion_deaths', label='After Washing', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
    boot_before = before_proportion.sample(frac=1, replace=True)
    boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz SemmelweisThis is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
import pandas as pd
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
yearly["proportion_deaths"] = yearly["deaths"]/yearly["births"]
yearly1 = yearly[yearly["clinic"] == "clinic 1"]
yearly2 = yearly[yearly["clinic"] == "clinic 2"]
print(yearly1)
print(yearly2)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
year births deaths clinic proportion_deaths
6 1841 2442 86 clinic 2 0.035217
7 1842 2659 202 clinic 2 0.075968
8 1843 2739 164 clinic 2 0.059876
9 1844 2956 68 clinic 2 0.023004
10 1845 3241 66 clinic 2 0.020364
11 1846 3754 105 clinic 2 0.027970
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
%matplotlib inline
ax = yearly1.plot(x="year", y="proportion_deaths", label="Clinic 1")
yearly2.plot(x="year", y="proportion_deaths", label="Clinic 2", ax=ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates=["date"])
monthly["proportion_deaths"] = monthly["deaths"]/monthly["births"]
monthly.head()
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
ax = monthly.plot(x="date", y="proportion_deaths")
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
before_washing = monthly[monthly["date"] < handwashing_start]
after_washing = monthly[monthly["date"] >= handwashing_start]
ax = before_washing.plot(x="date", y="proportion_deaths", label="Before")
after_washing.plot(x="date", y="proportion_deaths", label="After", ax=ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
before_proportion = before_washing["proportion_deaths"]
after_proportion = after_washing["proportion_deaths"]
mean_diff = after_proportion.mean() - before_proportion.mean()
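# A negative difference means the death rate fell after handwashing; the value
# printed below (about -0.084) corresponds to a drop of roughly 8.4 percentage points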
print(mean_diff)
###Output
-0.08395660751183336
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean() - boot_before.mean())
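# The 2.5th and 97.5th percentiles of the simulated differences bound the
# central 95% of the bootstrap distribution, giving a 95% confidence interval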
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to the fact:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv("datasets/yearly_deaths_by_clinic.csv")
# Print out yearly
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
yearly["proportion_deaths"] = yearly["deaths"]/yearly["births"]
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly["clinic"]=="clinic 1"]
yearly2 = yearly[yearly["clinic"]=="clinic 2"]
# Print out yearly1
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern…
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = yearly1.plot(x= "year", y="proportion_deaths", label="plot")
ax.set_ylabel("Proportion deaths")
yearly2.plot(x= "year", y="proportion_deaths", label="plot", ax=ax)
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates = ["date"])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly["deaths"]/monthly["births"]
# Print out the first rows in monthly
print(monthly.head(1))
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ax = monthly.plot(x="date", y="proportion_deaths")
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly["date"]<handwashing_start]
after_washing = monthly[monthly["date"]>=handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x="date", y="proportion_deaths", label="plot")
after_washing.plot(x="date", y="proportion_deaths", label="plot", ax = ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing.proportion_deaths
after_proportion = after_washing.proportion_deaths
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean()-boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv("datasets/yearly_deaths_by_clinic.csv")
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly['proportion_deaths'] = yearly.deaths.divide(yearly.births)
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly.clinic == 'clinic 1']
yearly2 = yearly[yearly.clinic == 'clinic 2']
# Print out yearly2
# ... YOUR CODE FOR TASK 2 ...
print(yearly2)
###Output
year births deaths clinic proportion_deaths
6 1841 2442 86 clinic 2 0.035217
7 1842 2659 202 clinic 2 0.075968
8 1843 2739 164 clinic 2 0.059876
9 1844 2956 68 clinic 2 0.023004
10 1845 3241 66 clinic 2 0.020364
11 1846 3754 105 clinic 2 0.027970
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
ax = yearly1.plot(x='year', y='proportion_deaths', label='clinic 1')
yearly2.plot(x='year', y='proportion_deaths', label='clinic 2', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates = ['date'])
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly['proportion_deaths'] = monthly.deaths.divide(monthly.births)
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
print(monthly.head())
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
1 1841-02-01 239 18 0.075314
2 1841-03-01 277 12 0.043321
3 1841-04-01 255 4 0.015686
4 1841-05-01 255 2 0.007843
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# ... YOUR CODE FOR TASK 5 ...
ax = monthly.plot(x = 'date', y = 'proportion_deaths')
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly.date < handwashing_start]
after_washing = monthly[monthly.date >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
# ... YOUR CODE FOR TASK 6 ...
ax = before_washing.plot(x = 'date', y = 'proportion_deaths', label = 'Before handwashing')
after_washing.plot(x = 'date', y='proportion_deaths', label='After handwashing', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing.proportion_deaths
after_proportion = after_washing.proportion_deaths
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
    boot_before = before_proportion.sample(frac=1, replace=True)
    boot_after = after_proportion.sample(frac=1, replace=True)
    boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# Load in the tidyverse package
library(tidyverse)
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly <- read_csv("datasets/yearly_deaths_by_clinic.csv")
# Print out yearly
yearly
###Output
-- Attaching packages --------------------------------------- tidyverse 1.2.0 --
v ggplot2 3.1.0 v purrr 0.2.5
v tibble 1.4.2 v dplyr 0.7.8
v tidyr 0.8.2 v stringr 1.3.1
v readr 1.2.1 v forcats 0.3.0
-- Conflicts ------------------------------------------ tidyverse_conflicts() --
x dplyr::filter() masks stats::filter()
x dplyr::lag() masks stats::lag()
Parsed with column specification:
cols(
year = col_double(),
births = col_double(),
deaths = col_double(),
clinic = col_character()
)
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth.
###Code
# Adding a new column to yearly with proportion of deaths per no. births
yearly <- yearly %>% mutate(proportion_deaths = deaths/births)
# Print out yearly
yearly
###Output
_____no_output_____
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# Setting the size of plots in this notebook
options(repr.plot.width = 7, repr.plot.height = 4)
# Plot yearly proportion of deaths at the two clinics
ggplot(yearly, aes(x = year, y = proportion_deaths, col = clinic)) +
geom_line()
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly <- read_csv("datasets/monthly_deaths.csv")
# Adding a new column with proportion of deaths per no. births
monthly <- monthly %>% mutate(proportion_deaths = deaths/births)
# Print out the first rows in monthly
head(monthly)
###Output
Parsed with column specification:
cols(
date = col_date(format = ""),
births = col_double(),
deaths = col_double()
)
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
ggplot(monthly, aes(date, proportion_deaths)) +
geom_line() +
labs(x = "Year", y = "Proportion Deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# From this date handwashing was made mandatory
handwashing_start <- as.Date('1847-06-01')
# Add a TRUE/FALSE to monthly called handwashing_started
monthly <- monthly %>%
mutate(handwashing_started = date >= handwashing_start)
# Plot monthly proportion of deaths before and after handwashing
ggplot(monthly, aes(x = date, y = proportion_deaths, color = handwashing_started)) +
geom_line()
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Calculating the mean proportion of deaths
# before and after handwashing.
monthly_summary <- monthly %>%
group_by(handwashing_started) %>%
summarise(mean_proportion_deaths = mean(proportion_deaths))
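# group_by() splits the rows by the logical handwashing_started column and
# summarise() then returns one mean proportion of deaths per group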
# Printing out the summary.
monthly_summary
###Output
_____no_output_____
###Markdown
8. A statistical analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average before handwashing to just 2% when handwashing was enforced (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using a t-test).
###Code
# Calculating a 95% confidence interval using t.test
test_result <- t.test(proportion_deaths ~ handwashing_started, data = monthly)
test_result
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisThat the doctors didn't wash their hands increased the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands <- TRUE
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# Load in the tidyverse package
library(tidyverse)
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly <- read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
yearly
###Output
Parsed with column specification:
cols(
year = col_double(),
births = col_double(),
deaths = col_double(),
clinic = col_character()
)
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth.
###Code
# Adding a new column to yearly with proportion of deaths per no. births
yearly <- yearly %>%
mutate(proportion_deaths = deaths/births)
# Print out yearly
yearly
###Output
_____no_output_____
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# Setting the size of plots in this notebook
options(repr.plot.width=7, repr.plot.height=4)
# Plot yearly proportion of deaths at the two clinics
ggplot(yearly, aes(x=year, y=proportion_deaths, color=clinic)) + geom_line()
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly <- read_csv('datasets/monthly_deaths.csv')
# Adding a new column with proportion of deaths per no. births
monthly <- monthly %>%
mutate(proportion_deaths = deaths/births)
# Print out the first rows in monthly
head(monthly)
###Output
Parsed with column specification:
cols(
date = col_date(format = ""),
births = col_double(),
deaths = col_double()
)
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ggplot(monthly, aes(x=date, y=proportion_deaths)) + geom_line() + labs(x="Date", y="Proportion of Deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# From this date handwashing was made mandatory
handwashing_start <- as.Date('1847-06-01')
# Add a TRUE/FALSE column to monthly called handwashing_started
# (the comparison already returns TRUE/FALSE, so no ifelse() is needed)
monthly <- monthly %>%
    mutate(handwashing_started = date >= handwashing_start)
# Plot monthly proportion of deaths before and after handwashing
ggplot(monthly, aes(x = date, y = proportion_deaths, col = handwashing_started)) +
geom_line()
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Calculating the mean proportion of deaths
# before and after handwashing.
monthly_summary <- monthly %>%
group_by(handwashing_started) %>%
summarise(mean_proportion_deaths = mean(proportion_deaths))
# Printing out the summary.
monthly_summary
###Output
_____no_output_____
###Markdown
8. A statistical analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average before handwashing to just 2% when handwashing was enforced (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using a t-test).
###Code
# Calculating a 95% confidence interval using t.test
test_result <- t.test(proportion_deaths ~ handwashing_started, data = monthly)
test_result
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisThat the doctors didn't wash their hands increased the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands <- FALSE
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
yearly
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
###Output
_____no_output_____
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly['proportion_deaths'] = yearly['deaths'] / yearly['births']
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly['clinic']=='clinic 1']
yearly2 = yearly[yearly['clinic']=='clinic 2']
yearly1
# Print out yearly1
# ... YOUR CODE FOR TASK 2 ...
###Output
_____no_output_____
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
ax = yearly1.plot(x='year', y='proportion_deaths', label='Clinic 1')
yearly2.plot(x='year', y='proportion_deaths', label='Clinic 2', ax=ax)
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv',parse_dates=['date'])
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly['proportion_deaths'] = monthly['deaths'] / monthly['births']
monthly.head()
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ax = monthly.plot(x='date',y='proportion_deaths',label='Proportion deaths')
# ... YOUR CODE FOR TASK 5 ...
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
import matplotlib.pyplot as plt
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly['date'] < handwashing_start]
after_washing = monthly[monthly['date'] >= handwashing_start]
ax = before_washing.plot('date','proportion_deaths', color = 'red', label = 'Before Washing')
after_washing.plot('date','proportion_deaths', ax = ax, color = 'blue', label = 'After Washing')
plt.xticks(rotation = 45)
ax.legend(loc = 0)
ax.set_ylabel('Proportion Deaths')
# Plot monthly proportion of deaths before and after handwashing
# ... YOUR CODE FOR TASK 6 ...
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
    boot_before = before_proportion.sample(frac=1, replace=True)
    boot_after = after_proportion.sample(frac=1, replace=True)
    boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
import pandas as pd
# ... YOUR CODE FOR TASK 1 ...
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
print(yearly)
# ... YOUR CODE FOR TASK 1 ...
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly['proportion_deaths'] = yearly['deaths']/yearly['births']
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly.loc[yearly['clinic'] == 'clinic 1']
yearly2 = yearly.loc[yearly['clinic'] == 'clinic 2']
# Print out yearly1
print(yearly1)
# ... YOUR CODE FOR TASK 2 ...
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
ax = yearly1.plot(x='year', y='proportion_deaths', label='yearly1')
yearly2.plot(x='year', y='proportion_deaths', label='yearly2', ax=ax)
ax.set_ylabel('proportion_deaths')
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates=['date'])
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly['proportion_deaths'] = monthly['deaths']/monthly['births']
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
print(monthly.head())
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
1 1841-02-01 239 18 0.075314
2 1841-03-01 277 12 0.043321
3 1841-04-01 255 4 0.015686
4 1841-05-01 255 2 0.007843
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# ... YOUR CODE FOR TASK 5 ...
ax = monthly.plot(x='date', y='proportion_deaths', label='proportion_deaths')
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly.loc[monthly['date'] < handwashing_start]
after_washing = monthly.loc[monthly['date'] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
# ... YOUR CODE FOR TASK 6 ...
ax = before_washing.plot(x='date', y='proportion_deaths', label='before_washing')
after_washing.plot(x='date', y='proportion_deaths', label='after_washing', ax=ax)
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
import numpy as np
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
yearly
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
###Output
_____no_output_____
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly["proportion_deaths"] = yearly["deaths"] / yearly["births"]
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly.clinic=='clinic 1']
yearly2 = yearly[yearly.clinic=='clinic 2']
# Print out yearly1
yearly1
# ... YOUR CODE FOR TASK 2 ...
###Output
_____no_output_____
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
import matplotlib.pyplot as plt
%matplotlib inline
x1 = yearly1.year
y1 = yearly1.proportion_deaths
plt.plot(x1, y1, label="clinic 1")
x2 = yearly2.year
y2 = yearly2.proportion_deaths
plt.plot(x2, y2, label="clinic 2")
plt.xlabel("Year")
plt.ylabel("Proportion deaths")
plt.title("Proportion of deaths at clinic 1 and clinic 2")
plt.legend()
plt.show()
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates=['date'])
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly["proportion_deaths"] = monthly["deaths"] / monthly["births"]
monthly.head()
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# ... YOUR CODE FOR TASK 5 ...
x = monthly.date
y = monthly.proportion_deaths
plt.plot(x, y)
plt.xlabel('Date')
plt.ylabel('Proportion deaths')
plt.title('Monthly proportion of deaths')
plt.show()
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly.date < handwashing_start]
after_washing = monthly[monthly.date >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
# ... YOUR CODE FOR TASK 6 ...
x1 = before_washing.date
y1 = before_washing.proportion_deaths
plt.plot(x1, y1, label="before_washing")
x2 = after_washing.date
y2 = after_washing.proportion_deaths
plt.plot(x2, y2, label="after_washing")
plt.xlabel("Date")
plt.ylabel("Proportion deaths")
plt.title('Monthly proportion of deaths before and after handwashing')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = np.mean(after_proportion) - np.mean(before_proportion)
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(np.mean(boot_after)-np.mean(boot_before))
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# Load in the tidyverse package
# .... YOUR CODE FOR TASK 1 ....
library(tidyverse)
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly <- read_csv("datasets/yearly_deaths_by_clinic.csv")
# Print out yearly
# .... YOUR CODE FOR TASK 1 ....
print(yearly)
###Output
Parsed with column specification:
cols(
  year = col_double(),
  births = col_double(),
  deaths = col_double(),
  clinic = col_character()
)
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth.
###Code
# Adding a new column to yearly with proportion of deaths per no. births
yearly <- yearly %>%
mutate(proportion_deaths = deaths / births)
# Print out yearly
print(yearly)
###Output
# A tibble: 12 x 5
    year births deaths clinic   proportion_deaths
   <dbl>  <dbl>  <dbl> <chr>                <dbl>
 1  1841   3036    237 clinic 1            0.0781
 2  1842   3287    518 clinic 1            0.158
 3  1843   3060    274 clinic 1            0.0895
 4  1844   3157    260 clinic 1            0.0824
 5  1845   3492    241 clinic 1            0.0690
 6  1846   4010    459 clinic 1            0.114
 7  1841   2442     86 clinic 2            0.0352
 8  1842   2659    202 clinic 2            0.0760
 9  1843   2739    164 clinic 2            0.0599
10  1844   2956     68 clinic 2            0.0230
11  1845   3241     66 clinic 2            0.0204
12  1846   3754    105 clinic 2            0.0280
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern…
###Code
# Setting the size of plots in this notebook
options(repr.plot.width=7, repr.plot.height=4)
# Plot yearly proportion of deaths at the two clinics
ggplot(data= yearly, aes(y= proportion_deaths,
x= year,
color= clinic)) +
geom_line()
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly <- read_csv("datasets/monthly_deaths.csv")
# Adding a new column with proportion of deaths per no. births
monthly <- monthly %>%
mutate(proportion_deaths= deaths / births)
# Print out the first rows in monthly
head(monthly)
###Output
Parsed with column specification:
cols(
  date = col_date(format = ""),
  births = col_double(),
  deaths = col_double()
)
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ggplot(data= monthly, aes(y= proportion_deaths, x= date)) +
geom_line() +
labs(x= "Date", y= "Proportion of Death per Births")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# From this date handwashing was made mandatory
handwashing_start = as.Date('1847-06-01')
# Add a TRUE/FALSE column to monthly called handwashing_started
monthly <- monthly %>%
add_column(handwashing_started=
if_else(handwashing_start <= monthly$date,
TRUE,
FALSE))
# Plot monthly proportion of deaths before and after handwashing
ggplot(data= monthly,
aes(y= proportion_deaths, x= date, color= handwashing_started)) +
geom_line() +
labs(x= "Date", y= "Proportion of death per births")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Calculating the mean proportion of deaths
# before and after handwashing.
monthly_summary <- monthly %>%
group_by(handwashing_started) %>%
summarise(mean_prop= mean(proportion_deaths))
# Printing out the summary.
monthly_summary
###Output
`summarise()` ungrouping output (override with `.groups` argument)
###Markdown
8. A statistical analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average before handwashing to just 2% when handwashing was enforced (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using a t-test).
###Code
# Calculating a 95% confidence interval using t.test
test_result <- t.test( proportion_deaths ~ handwashing_started, data = monthly)
test_result
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisThat the doctors didn't wash their hands increased the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands <- TRUE
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
import pandas as pd
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
yearly["proportion_deaths"]=yearly["deaths"]/yearly["births"]
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly.head(6)
yearly2 = yearly.tail(6)
# Print out yearly1
# ... YOUR CODE FOR TASK 2 ...
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
import matplotlib.pyplot as plt
ax = yearly1.plot(x="year", y="proportion_deaths", label="Clinic 1")
yearly2.plot(x="year", y="proportion_deaths", label='Clinic 2', ax=ax)
ax.set_ylabel("proportion deaths")
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates=["date"])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"]=monthly["deaths"]/monthly["births"]
# Print out the first rows in monthly
print(monthly.head(1))
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
%matplotlib inline
# Plot monthly proportion of deaths
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv("datasets/monthly_deaths.csv", parse_dates=["date"])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"]=monthly["deaths"]/monthly["births"]
ax= monthly.plot(x="date", y="proportion_deaths")
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly["date"] < handwashing_start]
after_washing = monthly[monthly["date"] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x="date", y="proportion_deaths", label="before washing")
after_washing.plot(x="date", y="proportion_deaths", label="after washing", ax=ax)
ax.set_ylabel("proportion deaths")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing["proportion_deaths"]
after_proportion = after_washing["proportion_deaths"]
mean_diff = after_proportion.mean()-before_proportion.mean()
print(mean_diff)
###Output
-0.08395660751183336
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
yearly["proportion_deaths"] = yearly["deaths"]/yearly["births"]
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly["clinic"] == "clinic 1"]
yearly2 = yearly[yearly["clinic"] == "clinic 2"]
# Print out yearly1
print(yearly1)
###Output
year births deaths clinic proportion_deaths
0 1841 3036 237 clinic 1 0.078063
1 1842 3287 518 clinic 1 0.157591
2 1843 3060 274 clinic 1 0.089542
3 1844 3157 260 clinic 1 0.082357
4 1845 3492 241 clinic 1 0.069015
5 1846 4010 459 clinic 1 0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
ax = yearly1.plot(x="year", y="proportion_deaths",
label="Clinic 1")
yearly2.plot(x="year", y="proportion_deaths",
label="Clinic 2", ax=ax)
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates = ["date"])
# Calculate proportion of deaths per no. births
monthly["proportion_deaths"] = monthly["deaths"]/monthly["births"]
# Print out the first rows in monthly
monthly.head()
###Output
_____no_output_____
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
ax = monthly.plot(x="date", y="proportion_deaths")
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly["date"] < handwashing_start]
after_washing = monthly[monthly["date"] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
ax = before_washing.plot(x='date', y='proportion_deaths', label='before washing')
after_washing.plot(x='date', y='proportion_deaths', label='after washing', ax=ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing["proportion_deaths"]
after_proportion = after_washing["proportion_deaths"]
mean_diff = after_proportion.mean() - before_proportion.mean()
mean_diff
###Output
_____no_output_____
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
    boot_mean_diff.append(boot_after.mean() - boot_before.mean())
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____
###Markdown
1. Meet Dr. Ignaz Semmelweis<!---->This is Dr. Ignaz Semmelweis, a Hungarian physician born in 1818 and active at the Vienna General Hospital. If Dr. Semmelweis looks troubled it's probably because he's thinking about childbed fever: A deadly disease affecting women that just have given birth. He is thinking about it because in the early 1840s at the Vienna General Hospital as many as 10% of the women giving birth die from it. He is thinking about it because he knows the cause of childbed fever: It's the contaminated hands of the doctors delivering the babies. And they won't listen to him and wash their hands!In this notebook, we're going to reanalyze the data that made Semmelweis discover the importance of handwashing. Let's start by looking at the data that made Semmelweis realize that something was wrong with the procedures at Vienna General Hospital.
###Code
# importing modules
# ... YOUR CODE FOR TASK 1 ...
import pandas as pd
# Read datasets/yearly_deaths_by_clinic.csv into yearly
yearly = pd.read_csv('datasets/yearly_deaths_by_clinic.csv')
# Print out yearly
# ... YOUR CODE FOR TASK 1 ...
print(yearly)
###Output
year births deaths clinic
0 1841 3036 237 clinic 1
1 1842 3287 518 clinic 1
2 1843 3060 274 clinic 1
3 1844 3157 260 clinic 1
4 1845 3492 241 clinic 1
5 1846 4010 459 clinic 1
6 1841 2442 86 clinic 2
7 1842 2659 202 clinic 2
8 1843 2739 164 clinic 2
9 1844 2956 68 clinic 2
10 1845 3241 66 clinic 2
11 1846 3754 105 clinic 2
###Markdown
2. The alarming number of deathsThe table above shows the number of women giving birth at the two clinics at the Vienna General Hospital for the years 1841 to 1846. You'll notice that giving birth was very dangerous; an alarming number of women died as the result of childbirth, most of them from childbed fever.We see this more clearly if we look at the proportion of deaths out of the number of women giving birth. Let's zoom in on the proportion of deaths at Clinic 1.
###Code
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 2 ...
yearly["proportion_deaths"] =
# Extract clinic 1 data into yearly1 and clinic 2 data into yearly2
yearly1 = yearly[yearly.clinic == 'clinic 1']
yearly2 = yearly[yearly.clinic == 'clinic 2']
# Print out yearly1
# ... YOUR CODE FOR TASK 2 ...
print(yearly1)
###Output
   year  births  deaths    clinic  proportion_deaths
0  1841    3036     237  clinic 1           0.078063
1  1842    3287     518  clinic 1           0.157591
2  1843    3060     274  clinic 1           0.089542
3  1844    3157     260  clinic 1           0.082357
4  1845    3492     241  clinic 1           0.069015
5  1846    4010     459  clinic 1           0.114464
###Markdown
3. Death at the clinicsIf we now plot the proportion of deaths at both clinic 1 and clinic 2 we'll see a curious pattern...
###Code
# This makes plots appear in the notebook
%matplotlib inline
# Plot yearly proportion of deaths at the two clinics
# ... YOUR CODE FOR TASK 3 ...
ax = yearly1.plot(x="year", y="deaths",
label="clinic 1")
yearly2.plot(x="year", y="deaths",
label="clinic 2", ax=ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
4. The handwashing beginsWhy is the proportion of deaths constantly so much higher in Clinic 1? Semmelweis saw the same pattern and was puzzled and distressed. The only difference between the clinics was that many medical students served at Clinic 1, while mostly midwife students served at Clinic 2. While the midwives only tended to the women giving birth, the medical students also spent time in the autopsy rooms examining corpses. Semmelweis started to suspect that something on the corpses, spread from the hands of the medical students, caused childbed fever. So in a desperate attempt to stop the high mortality rates, he decreed: Wash your hands! This was an unorthodox and controversial request, nobody in Vienna knew about bacteria at this point in time. Let's load in monthly data from Clinic 1 to see if the handwashing had any effect.
###Code
# Read datasets/monthly_deaths.csv into monthly
monthly = pd.read_csv('datasets/monthly_deaths.csv', parse_dates=['date'])
# Calculate proportion of deaths per no. births
# ... YOUR CODE FOR TASK 4 ...
monthly["proportion_deaths"] = monthly.deaths / monthly.births
# Print out the first rows in monthly
# ... YOUR CODE FOR TASK 4 ...
print(monthly.head())
###Output
date births deaths proportion_deaths
0 1841-01-01 254 37 0.145669
1 1841-02-01 239 18 0.075314
2 1841-03-01 277 12 0.043321
3 1841-04-01 255 4 0.015686
4 1841-05-01 255 2 0.007843
###Markdown
5. The effect of handwashingWith the data loaded we can now look at the proportion of deaths over time. In the plot below we haven't marked where obligatory handwashing started, but it reduced the proportion of deaths to such a degree that you should be able to spot it!
###Code
# Plot monthly proportion of deaths
# ... YOUR CODE FOR TASK 5 ...
ax = monthly.plot(x="date", y="proportion_deaths", label = 'Clinic 1')
ax.set_ylabel('Proportion deaths')
###Output
_____no_output_____
###Markdown
6. The effect of handwashing highlightedStarting from the summer of 1847 the proportion of deaths is drastically reduced and, yes, this was when Semmelweis made handwashing obligatory. The effect of handwashing is made even more clear if we highlight this in the graph.
###Code
# Date when handwashing was made mandatory
import pandas as pd
handwashing_start = pd.to_datetime('1847-06-01')
# Split monthly into before and after handwashing_start
before_washing = monthly[monthly["date"] < handwashing_start]
after_washing = monthly[monthly["date"] >= handwashing_start]
# Plot monthly proportion of deaths before and after handwashing
# ... YOUR CODE FOR TASK 6 ...
ax = before_washing.plot(x="date", y="proportion_deaths",label="before_washing")
after_washing.plot(x="date", y="proportion_deaths",label="after_washing", ax=ax)
ax.set_ylabel("Proportion deaths")
###Output
_____no_output_____
###Markdown
7. More handwashing, fewer deaths?Again, the graph shows that handwashing had a huge effect. How much did it reduce the monthly proportion of deaths on average?
###Code
# Difference in mean monthly proportion of deaths due to handwashing
before_proportion = before_washing['proportion_deaths']
after_proportion = after_washing['proportion_deaths']
mean_diff = after_proportion.mean() - before_proportion.mean()
print(mean_diff)
###Output
-0.08395660751183336
###Markdown
8. A Bootstrap analysis of Semmelweis handwashing dataIt reduced the proportion of deaths by around 8 percentage points! From 10% on average to just 2% (which is still a high number by modern standards). To get a feeling for the uncertainty around how much handwashing reduces mortalities we could look at a confidence interval (here calculated using the bootstrap method).
###Code
# A bootstrap analysis of the reduction of deaths due to handwashing
boot_mean_diff = []
for i in range(3000):
boot_before = before_proportion.sample(frac=1, replace=True)
boot_after = after_proportion.sample(frac=1, replace=True)
boot_mean_diff.append(boot_after.mean() - boot_before.mean() )
# Calculating a 95% confidence interval from boot_mean_diff
confidence_interval = pd.Series(boot_mean_diff).quantile([0.025, 0.975])
confidence_interval
###Output
_____no_output_____
###Markdown
9. The fate of Dr. SemmelweisSo handwashing reduced the proportion of deaths by between 6.7 and 10 percentage points, according to a 95% confidence interval. All in all, it would seem that Semmelweis had solid evidence that handwashing was a simple but highly effective procedure that could save many lives.The tragedy is that, despite the evidence, Semmelweis' theory — that childbed fever was caused by some "substance" (what we today know as bacteria) from autopsy room corpses — was ridiculed by contemporary scientists. The medical community largely rejected his discovery and in 1849 he was forced to leave the Vienna General Hospital for good.One reason for this was that statistics and statistical arguments were uncommon in medical science in the 1800s. Semmelweis only published his data as long tables of raw data, but he didn't show any graphs nor confidence intervals. If he would have had access to the analysis we've just put together he might have been more successful in getting the Viennese doctors to wash their hands.
###Code
# The data Semmelweis collected points to that:
doctors_should_wash_their_hands = True
###Output
_____no_output_____ |
hb_analysis_MD/HB_Analysis_MD_traj.ipynb | ###Markdown
Functions for building the graph and finding paths
###Code
# Imports used throughout this notebook
from collections import defaultdict

import pandas as pd
import MDAnalysis
import MDAnalysis.analysis.hbonds

def addEdge(graph, u, v):
    # Add a directed edge u -> v to the adjacency-list graph
    graph[u].append(v)

def find_all_path(graph, start, path, paths):
    # Depth-first search that collects every path starting from `start`.
    # A path is stored once it reaches 6 nodes or hits a node with no
    # outgoing edges; `path` must already contain the starting node.
    if len(path) == 6:
        return paths.append(list(path))
    if len(graph[start]) == 0:
        return paths.append(list(path))
    for node in graph[start]:
        if node in path:
            continue
        path.append(node)
        find_all_path(graph, node, path, paths)
        path.pop()
###Output
_____no_output_____
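###Markdown
A minimal usage sketch (not from the original analysis): with a small hand-made graph, `find_all_path` enumerates every path that starts from a chosen node and stops at a dead end or after six nodes. The toy graph and node names below are hypothetical.
###Code
# Hypothetical toy example: build a tiny graph with addEdge and list all paths from 'A'
from collections import defaultdict

toy_graph = defaultdict(list)
addEdge(toy_graph, 'A', 'B')
addEdge(toy_graph, 'B', 'C')
addEdge(toy_graph, 'B', 'D')
toy_paths = []
# The starting node must already be in the path list
find_all_path(toy_graph, 'A', ['A'], toy_paths)
print(toy_paths)  # expected: [['A', 'B', 'C'], ['A', 'B', 'D']]
###Output
_____no_output_____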
###Markdown
Loading the PDB file, or the DCD trajectory with its PSF topology
###Code
DCD = '/Users/zhangyingying/Dropbox (City College)/Yingying/large_file/new_trajectories_PSII_wt/step7_50.dcd'
PDB = '/Users/zhangyingying/Dropbox (City College)/Yingying/large_file/new_trajectories_PSII_wt/frame50_56-stripped.pdb'
PSF = '/Users/zhangyingying/Dropbox (City College)/Yingying/large_file/new_trajectories_PSII_wt/step5_charmm2omm_keep.psf'
###Output
_____no_output_____
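###Markdown
Only the single-frame PDB is used in the analysis below. As a hedged sketch (assuming the PSF topology above actually matches the DCD trajectory), the full trajectory could be loaded and iterated frame by frame like this; `u_traj` is just an illustrative name.
###Code
# Sketch only: load the PSF/DCD pair and loop over the first few frames
import MDAnalysis

u_traj = MDAnalysis.Universe(PSF, DCD)
print('Number of frames:', len(u_traj.trajectory))
for ts in u_traj.trajectory[:5]:
    print('frame', ts.frame, 'time', ts.time)
###Output
_____no_output_____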
###Markdown
Get chain name for each atom. In some cases the same resname+resid appears with different chain names; to distinguish these residues we need to know the chain name.
###Code
chain = {}
i = 0
pdb = open(PDB, 'r')
for line in pdb:
if line[0:4] != 'ATOM':
continue
chain[i] = line[21:22]
i += 1
print(chain)
###Output
_____no_output_____
###Markdown
Calculate angles and distances for atoms and filter HBs
###Code
u = MDAnalysis.Universe(PDB)
h3 = MDAnalysis.analysis.hbonds.HydrogenBondAnalysis(u, 'not resname ALA and not resname GLN and not resname GLY and not resname ILE and not resname LEU and not resname PHE and not resname PRO and not resname VAL',
'not resname ALA and not resname GLN and not resname GLY and not resname ILE and not resname LEU and not resname PHE and not resname PRO and not resname VAL', distance=3.5, angle=90.0, acceptors = {'O1', 'O2'})
h3.run()
h3.generate_table()
df3 = pd.DataFrame.from_records(h3.table)
print(df3.head(10))
df3.to_csv('/Users/zhangyingying/Dropbox (City College)/Yingying/PSII/quinone/hb_network/1000ns_connection_his252sele.csv')
###Output
_____no_output_____
###Markdown
Give chain names to protein residues and indices to water molecules
###Code
index_donor = []
index_accept = []
for index2, row2 in df3.iterrows():
if row2['donor_resnm'] == 'TIP3'and row2['acceptor_resnm'] != 'TIP3':
if row2['donor_atom'] == 'H1':
index_donor.append(row2['donor_resnm'] + '_' + str(row2['donor_index']-1))
index_accept.append(row2['acceptor_resnm'] + '_' + chain[row2['acceptor_index']] + '_' + str(row2['acceptor_resid']))
if row2['donor_atom'] == 'H2':
index_donor.append(row2['donor_resnm'] + '_' + str(row2['donor_index']-2))
index_accept.append(row2['acceptor_resnm'] + '_' + chain[row2['acceptor_index']] + '_' + str(row2['acceptor_resid']))
elif row2['acceptor_resnm'] == 'TIP3' and row2['donor_resnm'] != 'TIP3':
index_accept.append(row2['acceptor_resnm'] + '_' + str(row2['acceptor_index']))
index_donor.append(row2['donor_resnm'] + '_' + chain[row2['donor_index']] + '_' + str(row2['donor_resid']))
elif row2['acceptor_resnm'] == 'TIP3' and row2['donor_resnm'] == 'TIP3':
if row2['donor_atom'] == 'H1':
index_donor.append(row2['donor_resnm'] + '_' + str(row2['donor_index']-1))
index_accept.append(row2['acceptor_resnm'] + '_' + str(row2['acceptor_index']))
if row2['donor_atom'] == 'H2':
index_donor.append(row2['donor_resnm'] + '_' + str(row2['donor_index']-2))
index_accept.append(row2['acceptor_resnm'] + '_' + str(row2['acceptor_index']))
else:
index_donor.append(row2['donor_resnm'] + '_' + chain[row2['donor_index']] + '_' + str(row2['donor_resid']))
index_accept.append(row2['acceptor_resnm'] + '_' + chain[row2['acceptor_index']] + '_' + str(row2['acceptor_resid']))
df3['donor_residue'] = index_donor
df3['acceptor_residue'] = index_accept
print(df3.head(10))
###Output
time donor_index acceptor_index donor_resnm donor_resid donor_atom \
0 0.0 13 20 ASN 12 H
1 0.0 13 127926 ASN 12 H
2 0.0 46 127950 TRP 14 H
3 0.0 56 28536 TRP 14 HE1
4 0.0 70 25 GLU 15 H
5 0.0 85 25 ARG 16 H
6 0.0 129 68 CYS 18 H
7 0.0 129 83 CYS 18 H
8 0.0 129 135 CYS 18 H
9 0.0 136 68 CYS 18 HG
acceptor_resnm acceptor_resid acceptor_atom distance angle \
0 ASN 12 OD1 3.309775 94.632241
1 TIP3 30742 OH2 2.394234 118.093242
2 TIP3 32005 OH2 2.662945 149.125949
3 SER 25 OG 2.011113 145.927014
4 ASN 12 O 2.582404 128.395456
5 ASN 12 O 1.887107 175.984727
6 TRP 14 O 1.884332 156.853694
7 GLU 15 O 2.897370 101.905417
8 CYS 18 SG 3.106368 90.224835
9 TRP 14 O 1.860560 139.995169
donor_residue acceptor_residue
0 ASN_A_12 ASN_A_12
1 ASN_A_12 TIP3_127926
2 TRP_A_14 TIP3_127950
3 TRP_A_14 SER_H_25
4 GLU_A_15 ASN_A_12
5 ARG_A_16 ASN_A_12
6 CYS_A_18 TRP_A_14
7 CYS_A_18 GLU_A_15
8 CYS_A_18 CYS_A_18
9 CYS_A_18 TRP_A_14
###Markdown
Filter the side-chain hydrogen bonds (hide the backbone HBs)
###Code
hb = pd.DataFrame()
dic_hdonnor = {'ASP':['HD1', 'HD2'], 'ARG': ['HH11', 'HH12', 'HH21', 'HH22', 'HE'], 'GLU':['HE1', 'HE2'], 'HIS':['HD1', 'HE2'], 'HSD':['HD1', 'HE2'], 'HSE':['HD1', 'HE2'], 'HSP':['HD1', 'HE2'],
'SER':['HG'], 'THR':['HG1'], 'ASN':['HD21', 'HD22'], 'GLN':['HE21', 'HE22'], 'CYS':['HG'], 'TYR':['HH'], 'TRP':['HE1'], 'LYS':['HZ1', 'HZ2', 'HZ3'], 'TIP3':['H1', 'H2'], 'HOH':['1H', '2H']}
dic_accept = {'ASP':['OD1', 'OD2'], 'HCO': ['OC1', 'OC2'], 'ARG': ['NE', 'NH1', 'NH2'], 'GLU':['OE1', 'OE2'], 'HSD':['ND1', 'NE2'], 'HSE':['ND1', 'NE2'], 'HSP':['ND1', 'NE2'], 'HIS':['ND1', 'NE2'],
'SER':['OG'], 'THR':['OG1'], 'ASN':['OD1'], 'GLN':['OE1'], 'CYS':['SG'], 'TYR':['OH'], 'LYS':['NZ'], 'MET':['SD'], 'CLX':['CLX'], 'CLA':['CLA'], 'OX2':['OX2'], 'PL9':['O1', 'O2'], 'FX':['FX'], 'TIP3':['OH2'], 'HOH':['O'], 'MQ8':['O1', 'O2']}
donor_residue_pick = []
acceptor_residue_pick = []
donor_atom_pick = []
acceptor_atom_pick = []
for index, row in df3.iterrows():
if row['donor_resnm'] in dic_hdonnor.keys() and row['acceptor_resnm'] in dic_accept.keys():
if row['donor_atom'] in dic_hdonnor[row['donor_resnm']] and row['acceptor_atom'] in dic_accept[row['acceptor_resnm']]:
donor_residue_pick.append(row['donor_residue'])
acceptor_residue_pick.append(row['acceptor_residue'])
donor_atom_pick.append(row['donor_atom'])
acceptor_atom_pick.append(row['acceptor_atom'])
else:
continue
# all connection network
hb_two = pd.DataFrame({'donor_residue':donor_residue_pick, 'donor_atom':donor_atom_pick, 'acceptor_residue':acceptor_residue_pick, 'acceptor_atom':acceptor_atom_pick})
print(hb_two.head(10))
###Output
acceptor_atom acceptor_residue donor_atom donor_residue
0 OG SER_H_25 HE1 TRP_A_14
1 OD1 ASN_A_26 HE1 TRP_A_20
2 OH2 TIP3_47469 HD21 ASN_A_26
3 OH2 TIP3_127254 HH11 ARG_A_27
4 OH2 TIP3_127353 HH11 ARG_A_27
5 OH2 TIP3_127254 HH12 ARG_A_27
6 OH2 TIP3_128304 HH12 ARG_A_27
7 OH2 TIP3_128304 HH22 ARG_A_27
8 OE1 GLU_A_132 HH TYR_A_29
9 OE2 GLU_A_132 HH TYR_A_29
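###Markdown
As a quick inspection step (a sketch, not part of the original workflow), the filtered table can be summarised to see which residues donate or accept the most side-chain hydrogen bonds.
###Code
# Count the most frequent donor and acceptor residues in the filtered HB table
print(hb_two['donor_residue'].value_counts().head())
print(hb_two['acceptor_residue'].value_counts().head())
###Output
_____no_output_____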
###Markdown
Divide all connections into two groups: direct connections and connections via water molecules
###Code
donor_residue = []
acceptor_residue = []
donor_residue2 = []
acceptor_residue2 = []
for row in range(len(hb_two)):
if hb_two['donor_residue'][row][0:3] != 'TIP' and hb_two['acceptor_residue'][row][0:3] != 'TIP':
if hb_two['donor_residue'][row] == hb_two['acceptor_residue'][row]:
continue
else:
donor_residue.append(hb_two['donor_residue'][row])
acceptor_residue.append(hb_two['acceptor_residue'][row])
else:
if hb_two['donor_residue'][row] == hb_two['acceptor_residue'][row]:
continue
else:
donor_residue2.append(hb_two['donor_residue'][row])
acceptor_residue2.append(hb_two['acceptor_residue'][row])
dire_con = pd.DataFrame({'donor_residue': donor_residue, 'acceptor_residue': acceptor_residue, 'wat_num': [0]*len(donor_residue)})
wat_con = pd.DataFrame({'donor_residue': donor_residue2, 'acceptor_residue': acceptor_residue2})
# connection via water
wat_con = wat_con.drop_duplicates()
wat_con.index = range(0, len(wat_con))
# direct connection
dire_con = dire_con.drop_duplicates()
dire_con.index = range(0, len(dire_con))
print('Direct connection:', len(dire_con))
print('Connection with water:', len(wat_con))
###Output
Direct connection: 303
Connection with water: 2691
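###Markdown
The same split can also be expressed with boolean masks on hb_two. This is a minimal sketch, assuming that the prefix 'TIP' marks water residues exactly as in the loop above; dire_con_alt and wat_con_alt are hypothetical names used only for comparison with the counts printed above.
###Code
# Minimal sketch of the same direct / water-mediated split using boolean masks.
# dire_con_alt and wat_con_alt are hypothetical names for comparison only.
involves_water = (hb_two['donor_residue'].str.startswith('TIP') |
                  hb_two['acceptor_residue'].str.startswith('TIP'))
not_self = hb_two['donor_residue'] != hb_two['acceptor_residue']

dire_con_alt = (hb_two.loc[~involves_water & not_self, ['donor_residue', 'acceptor_residue']]
                .drop_duplicates().reset_index(drop=True))
dire_con_alt['wat_num'] = 0
wat_con_alt = (hb_two.loc[involves_water & not_self, ['donor_residue', 'acceptor_residue']]
               .drop_duplicates().reset_index(drop=True))
print(len(dire_con_alt), len(wat_con_alt))  # should match the counts printed above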
###Markdown
Build a graph for the connections via water
###Code
# build the adjacency list of water-mediated pairs (addEdge comes from an earlier cell)
graph = defaultdict(list)
for i in range(len(wat_con)):
    addEdge(graph, wat_con['donor_residue'][i], wat_con['acceptor_residue'][i])
print(graph)
# print(graph['TIP3_127788'])
###Output
defaultdict(<class 'list'>, {'TIP3_128523': ['TIP3_127788', 'TIP3_129585'], 'TIP3_129054': ['TIP3_129312', 'TIP3_130125'], 'TIP3_129462': ['ASP_M_169', 'TIP3_128106', 'TIP3_129702', 'TIP3_128775', 'TIP3_129048'], 'TIP3_128214': ['TIP3_127143', 'TIP3_127917', 'ASP_B_276'], 'SER_M_221': ['TIP3_130044'], 'TIP3_127257': ['TIP3_127887', 'TIP3_127836'], 'TIP3_128514': ['TIP3_129528'], 'LYS_B_373': ['TIP3_128022'], 'TIP3_47751': ['THR_C_158'], 'TIP3_47754': ['TIP3_47733'], 'ARG_A_129': ['TIP3_47466'], 'TIP3_47490': ['ASP_A_308', 'TIP3_129336'], 'ARG_P_31': ['TIP3_128895', 'TIP3_129561'], 'TIP3_128997': ['ASP_M_205', 'GLU_M_210'], 'ASN_A_303': ['TIP3_47538', 'TIP3_47586'], 'TIP3_47550': ['ASP_A_61', 'TIP3_47517', 'TIP3_47574'], 'SER_B_391': ['TIP3_129015'], 'TIP3_129147': ['TIP3_127164', 'TIP3_129192', 'TIP3_128190', 'TIP3_129069', 'TIP3_129270'], 'HIS_M_228': ['TIP3_128223', 'TIP3_130044'], 'TIP3_129765': ['TIP3_129846'], 'TIP3_129216': ['GLU_C_413'], 'TIP3_128037': ['TIP3_127794', 'TIP3_128184', 'GLU_H_38'], 'TIP3_47505': ['GLU_A_132'], 'SER_C_46': ['TIP3_127317'], 'TIP3_129504': ['TIP3_128379'], 'TIP3_128124': ['TIP3_130116', 'TIP3_47691'], 'TIP3_128886': ['TIP3_129177', 'ASP_D_333', 'TIP3_128112'], 'ASN_D_292': ['TIP3_130167', 'TIP3_129075'], 'TIP3_127719': ['TIP3_128505'], 'TIP3_128049': ['TIP3_127983'], 'TIP3_127254': ['TIP3_128304', 'TIP3_129822', 'TIP3_127497', 'TIP3_127788'], 'TIP3_129885': ['GLU_A_226'], 'THR_M_107': ['TIP3_47610'], 'TIP3_127455': ['TIP3_127773', 'GLU_G_47', 'TIP3_127863'], 'TIP3_127242': ['TIP3_128625', 'TIP3_129615', 'ASN_C_228', 'TIP3_128592', 'TIP3_128694'], 'ARG_B_476': ['TIP3_128388'], 'THR_F_17': ['TIP3_127806', 'TIP3_130017'], 'TIP3_129795': ['HIS_D_336', 'TIP3_127281', 'TIP3_127842'], 'TIP3_127527': ['GLU_C_269', 'HIS_C_444'], 'TIP3_128721': ['TIP3_127524'], 'TIP3_128328': ['TIP3_128541', 'TIP3_129423'], 'TIP3_129141': ['TIP3_127641', 'TIP3_129747'], 'SER_B_446': ['TIP3_47649'], 'TYR_A_246': ['TIP3_129918'], 'TYR_M_7': ['TIP3_129351', 'TIP3_129969'], 'TIP3_127779': ['TIP3_127890', 'TIP3_47745'], 'TIP3_127560': ['TIP3_129129', 'THR_C_397'], 'TIP3_128403': ['TYR_B_258', 'HIS_D_87'], 'TIP3_127176': ['TIP3_47604', 'TIP3_130122'], 'LYS_B_137': ['TIP3_129438', 'TIP3_129840'], 'TIP3_47439': ['TIP3_47472', 'TIP3_47487', 'TIP3_47529'], 'ASN_A_247': ['TIP3_128415', 'TIP3_129024'], 'TIP3_128448': ['TIP3_129960'], 'TIP3_129096': ['SER_B_291'], 'TIP3_129864': ['ASP_B_313'], 'ARG_B_422': ['TIP3_128775', 'TIP3_128901', 'TIP3_129498', 'TIP3_127320', 'TIP3_128799'], 'TIP3_127461': ['TIP3_129549', 'TIP3_127401', 'TIP3_127500'], 'ARG_D_251': ['TIP3_127743', 'TIP3_128358', 'TIP3_127815', 'TIP3_128970', 'TIP3_128766'], 'TIP3_129792': ['TIP3_129993'], 'TIP3_128772': ['TIP3_129777', 'TIP3_127596'], 'SER_M_191': ['TIP3_127902'], 'THR_C_335': ['TIP3_47718'], 'TIP3_129039': ['TYR_P_35', 'TIP3_127995'], 'THR_G_5': ['TIP3_129114'], 'LYS_A_310': ['TIP3_127488', 'TIP3_127971'], 'TIP3_127344': ['TIP3_127677', 'TIP3_129675', 'TIP3_127338'], 'TIP3_129705': ['ASN_A_315', 'TIP3_129681'], 'TIP3_128415': ['TIP3_127590'], 'TIP3_130029': ['TIP3_127662'], 'ARG_A_334': ['TIP3_47886', 'TIP3_128532', 'TIP3_47547', 'TIP3_128634', 'TIP3_47484', 'TIP3_127425'], 'TIP3_47838': ['TIP3_47817', 'TIP3_47877'], 'TRP_C_425': ['TIP3_129030'], 'TIP3_127542': ['TYR_P_137'], 'ARG_J_46': ['TIP3_128934', 'TIP3_129249', 'TIP3_128400'], 'TIP3_127425': ['TIP3_128208', 'TIP3_128634', 'TIP3_129417', 'GLU_A_65'], 'TIP3_129693': ['TIP3_128127'], 'TIP3_128592': ['ASN_C_228', 'TIP3_128976', 'TIP3_129615'], 'TIP3_127548': 
['TIP3_127632', 'TIP3_128412'], 'TIP3_127653': ['TIP3_127137'], 'TIP3_127449': ['TYR_P_26', 'GLU_P_122', 'TIP3_128259', 'TIP3_128469'], 'TIP3_128796': ['TIP3_127653', 'TIP3_127386', 'TIP3_128148', 'TIP3_128331'], 'TIP3_129822': ['ASP_C_473', 'TIP3_127254', 'TIP3_127497', 'TIP3_128358'], 'TIP3_129621': ['TIP3_129921', 'TIP3_129288'], 'SER_I_38': ['TIP3_127566'], 'TIP3_128634': ['GLU_D_312', 'TIP3_128532'], 'ARG_A_323': ['TIP3_129927', 'TIP3_130170'], 'THR_M_153': ['TIP3_128622'], 'TIP3_128220': ['TIP3_128421', 'TIP3_128763'], 'ARG_Q_42': ['TIP3_127578', 'TIP3_128826'], 'TIP3_127788': ['TIP3_127254', 'TIP3_127497', 'TIP3_128565', 'TIP3_127947'], 'TIP3_129789': ['TIP3_128628'], 'TIP3_127182': ['TIP3_127467', 'TIP3_127923'], 'TIP3_47694': ['ASP_B_334', 'TIP3_47685'], 'TIP3_128487': ['ASP_C_376', 'TIP3_127269', 'TIP3_128526'], 'TIP3_128808': ['TIP3_130071'], 'HIS_D_197': ['TIP3_47814'], 'TIP3_129327': ['TIP3_128259', 'TIP3_128637', 'GLU_P_122', 'TIP3_127464'], 'TIP3_127215': ['HIS_D_61', 'TIP3_127593', 'TIP3_128028'], 'TYR_C_302': ['TIP3_47778'], 'TIP3_127854': ['ASP_D_297', 'TIP3_129003', 'TIP3_129744'], 'TIP3_128775': ['TIP3_128901', 'TIP3_129543', 'ASP_M_169', 'TIP3_129048', 'TIP3_129462'], 'ASN_H_31': ['TIP3_128955'], 'TIP3_128790': ['TIP3_128085', 'TIP3_127587'], 'TIP3_128616': ['TIP3_128832', 'TIP3_127140', 'TIP3_129432'], 'TIP3_47595': ['TIP3_47712', 'TIP3_47592'], 'TIP3_128763': ['TIP3_127341', 'TIP3_129585'], 'TIP3_130164': ['TYR_E_56'], 'TIP3_127338': ['TIP3_128151', 'TIP3_127677'], 'TIP3_127368': ['TIP3_127722', 'TIP3_128841'], 'TIP3_127185': ['TIP3_129171', 'TIP3_127956'], 'TIP3_128655': ['TIP3_130137', 'TIP3_128340'], 'TIP3_47565': ['TIP3_129723'], 'TIP3_127158': ['ASP_P_53'], 'TIP3_129117': ['GLU_C_71', 'TIP3_129291', 'TIP3_129909'], 'TIP3_127281': ['TIP3_129903', 'TIP3_130083'], 'TIP3_128604': ['TIP3_127206', 'TIP3_127776', 'ASN_A_108', 'TIP3_128142', 'TIP3_129111'], 'TIP3_127521': ['TIP3_127458', 'TIP3_127557', 'TIP3_129654'], 'THR_D_60': ['TIP3_47853'], 'ASN_D_263': ['TIP3_129384', 'TIP3_129759'], 'TIP3_128409': ['SER_D_300', 'GLU_N_2', 'TIP3_127914'], 'TIP3_129153': ['TIP3_127719', 'TIP3_129555'], 'LYS_H_33': ['TIP3_127539'], 'TIP3_128709': ['ASP_C_187', 'TIP3_127851', 'TIP3_130095'], 'TIP3_128541': ['TIP3_128328', 'TIP3_129546'], 'TIP3_128631': ['TIP3_127500', 'TIP3_129078', 'SER_D_230', 'TIP3_127461'], 'TIP3_127407': ['TIP3_127128', 'TIP3_127227', 'TIP3_128742', 'TIP3_129843', 'GLU_C_394'], 'TIP3_127575': ['TIP3_129942', 'TIP3_127230', 'TIP3_128364'], 'TRP_B_468': ['TIP3_127680', 'TIP3_130125'], 'THR_M_208': ['TIP3_128682'], 'TIP3_127749': ['TIP3_127845', 'TIP3_128196', 'TIP3_129159', 'GLU_C_300'], 'TIP3_129519': ['TIP3_127410', 'TIP3_127656'], 'ARG_D_348': ['TIP3_127659', 'TIP3_127929', 'TIP3_128457'], 'TIP3_47934': ['TIP3_127167', 'TIP3_129852'], 'TIP3_47574': ['ASP_A_61', 'TIP3_47550', 'TIP3_47559'], 'TIP3_128847': ['TIP3_128592', 'TIP3_128694', 'TIP3_129834'], 'TIP3_130014': ['TIP3_128904', 'TIP3_127263'], 'TIP3_47784': ['TIP3_127998', 'TIP3_128685', 'TIP3_130104', 'TIP3_128397'], 'TIP3_47700': ['SER_B_241', 'SER_B_240'], 'TIP3_129363': ['TIP3_127995', 'TIP3_129513'], 'TIP3_127872': ['TIP3_128166', 'TIP3_129333'], 'TIP3_129714': ['TIP3_128562'], 'TIP3_128400': ['TIP3_128619'], 'SER_C_344': ['TIP3_129735'], 'SER_B_291': ['TIP3_129096'], 'TIP3_127551': ['TIP3_129102', 'TIP3_127599', 'TIP3_129471'], 'TIP3_129591': ['ASP_C_360', 'TIP3_130149', 'TIP3_127389', 'TIP3_129621', 'TIP3_129636'], 'TIP3_128340': ['ASP_P_53', 'THR_P_58'], 'TIP3_127251': ['TIP3_128778', 'ASP_M_79', 
'TIP3_127269'], 'TIP3_128142': ['TIP3_129852', 'TIP3_128604', 'TIP3_129111'], 'TIP3_129492': ['GLU_P_90'], 'ASN_A_26': ['TIP3_47469'], 'TIP3_127740': ['TIP3_130149'], 'TIP3_127785': ['GLU_O_93', 'TIP3_127569'], 'TIP3_128199': ['TIP3_129291'], 'TIP3_128343': ['SER_P_39', 'TIP3_127122', 'TIP3_128349', 'TIP3_130005'], 'TIP3_47451': ['SER_A_134', 'TIP3_47520', 'CYS_A_144'], 'TIP3_129048': ['ASP_O_14', 'TIP3_127398', 'TIP3_128898', 'ASP_M_169', 'TIP3_129543', 'TIP3_129702'], 'TIP3_127323': ['TIP3_129966'], 'TIP3_128694': ['TIP3_127242', 'TIP3_128592', 'TIP3_128847', 'TIP3_128976'], 'ASN_D_220': ['TIP3_128190', 'TIP3_129318'], 'TIP3_47775': ['TIP3_47772'], 'TIP3_128130': ['ASP_G_9', 'TIP3_129957'], 'TIP3_129234': ['GLU_B_387', 'TIP3_128580'], 'ASN_A_267': ['TIP3_129033'], 'TIP3_128370': ['TIP3_128733', 'THR_B_255'], 'TIP3_130155': ['TIP3_127908'], 'TIP3_127791': ['TIP3_127218', 'TIP3_47937', 'TIP3_127821'], 'SER_K_16': ['TIP3_127869'], 'TIP3_129858': ['TIP3_128877'], 'TIP3_129501': ['GLU_D_302', 'TIP3_128007', 'TIP3_127812', 'TIP3_128397'], 'TIP3_128646': ['ASP_T_2', 'TIP3_127434'], 'TIP3_128391': ['GLU_D_242', 'TIP3_127473', 'TIP3_130002', 'SER_D_245', 'TIP3_127689'], 'TIP3_128466': ['ASP_O_96', 'TIP3_128622', 'TIP3_128949', 'ASN_M_155'], 'TIP3_127656': ['TIP3_127305', 'TIP3_127362', 'TIP3_127410', 'TIP3_128922', 'TIP3_129519'], 'ARG_B_57': ['TIP3_127905', 'TIP3_47685'], 'TIP3_129651': ['TIP3_128307', 'TIP3_128955'], 'TIP3_127194': ['THR_M_75'], 'TIP3_47766': ['THR_C_335', 'TIP3_47718', 'TIP3_128730'], 'TIP3_47622': ['TIP3_128898', 'TIP3_129945', 'GLU_B_393', 'TIP3_129015'], 'TIP3_47799': ['TIP3_129075', 'GLU_B_364'], 'TYR_O_38': ['TIP3_129603'], 'ARG_P_105': ['TIP3_128817', 'TIP3_128892', 'TIP3_128961'], 'TIP3_47544': ['ASN_A_303', 'TIP3_47538'], 'ASN_O_100': ['TIP3_128082', 'TIP3_128580', 'TIP3_127230'], 'TIP3_128103': ['TYR_A_107'], 'TIP3_128853': ['TIP3_128475', 'HIS_B_343', 'GLU_B_428', 'TIP3_129558'], 'TIP3_47535': ['TIP3_47460'], 'TIP3_129615': ['TIP3_128019', 'TIP3_129414', 'TIP3_127389', 'TIP3_128976'], 'TIP3_129840': ['TIP3_127173', 'TIP3_129240'], 'TIP3_128073': ['TIP3_127614', 'GLU_B_364', 'TIP3_127359', 'TIP3_128451'], 'TIP3_129720': ['TIP3_128784', 'TIP3_129489', 'TIP3_127893'], 'ASN_C_415': ['TIP3_128319', 'TIP3_129897'], 'TIP3_128463': ['TIP3_129138'], 'TIP3_127770': ['TIP3_128070'], 'TIP3_129357': ['TIP3_128373', 'TIP3_127344', 'TIP3_129675'], 'LYS_M_188': ['TIP3_47919', 'TIP3_47943'], 'TIP3_128841': ['TIP3_127857', 'TIP3_127722', 'TIP3_129570'], 'TIP3_129183': ['ASP_M_223', 'TIP3_127515', 'TIP3_129399', 'SER_M_191', 'TIP3_127902'], 'TIP3_127638': ['TIP3_129963'], 'TRP_B_493': ['TIP3_47859', 'TIP3_129150'], 'TIP3_47781': ['TIP3_127554'], 'TIP3_47592': ['TIP3_127992'], 'TIP3_128217': ['TIP3_129579', 'TIP3_129906', 'TIP3_129456'], 'TIP3_128526': ['TIP3_129465', 'TIP3_127269'], 'TIP3_127887': ['GLU_M_179', 'SER_M_170'], 'TIP3_47571': ['TIP3_47532', 'TIP3_47856', 'ASP_A_170', 'TIP3_47535', 'TIP3_47556'], 'HIS_A_92': ['TIP3_128241'], 'THR_K_15': ['TIP3_127869'], 'TIP3_127278': ['GLU_C_71', 'TIP3_128433', 'TIP3_129117'], 'TIP3_129951': ['TIP3_127284', 'TIP3_130053', 'ASP_A_103', 'TIP3_127152', 'TIP3_129846'], 'TIP3_127440': ['TIP3_128652', 'TIP3_129567'], 'TIP3_128202': ['TIP3_130116', 'TIP3_127257', 'TIP3_127836'], 'TYR_D_160': ['TIP3_47787'], 'TIP3_127617': ['THR_B_81', 'TIP3_128838'], 'TIP3_128430': ['TIP3_47670', 'ASP_B_313'], 'TIP3_127233': ['TIP3_127941'], 'TIP3_128088': ['TIP3_47637', 'TIP3_47613'], 'TIP3_128676': ['TIP3_129906', 'TIP3_127656'], 'TIP3_129978': ['GLU_B_492'], 
'TIP3_129345': ['TIP3_127962'], 'TIP3_129318': ['TIP3_128190', 'GLU_D_219', 'TIP3_129192'], 'TYR_D_296': ['TIP3_128685'], 'LYS_O_104': ['TIP3_127158', 'TIP3_129564', 'TIP3_128655'], 'TIP3_127161': ['SER_C_416', 'TIP3_127464', 'TIP3_128637'], 'TIP3_129294': ['SER_D_88', 'GLU_D_96', 'THR_G_52'], 'TIP3_128313': ['TIP3_130038', 'TIP3_129327'], 'TIP3_128406': ['TIP3_127149', 'TIP3_127596', 'TIP3_128772', 'TIP3_130011'], 'THR_C_346': ['TIP3_129735'], 'SER_D_300': ['TIP3_127914'], 'TIP3_47601': ['TIP3_47823', 'TIP3_47835'], 'ARG_F_19': ['TIP3_129387', 'TIP3_128505'], 'TIP3_129336': ['ASN_A_312', 'TIP3_127971', 'ASP_A_308', 'TIP3_47490'], 'TIP3_127413': ['TIP3_129054'], 'TIP3_128580': ['TIP3_127326', 'GLU_B_387', 'TIP3_129228'], 'TIP3_128652': ['TIP3_127275', 'TIP3_129129', 'GLU_C_394', 'TIP3_128454', 'TIP3_129843'], 'TIP3_129828': ['TIP3_127416'], 'TIP3_128112': ['ASP_D_333', 'TIP3_128886', 'TIP3_129210', 'TIP3_127215'], 'TIP3_128064': ['TIP3_127917', 'TIP3_129000'], 'TIP3_128484': ['ASN_K_13', 'GLU_K_11'], 'TIP3_128703': ['TIP3_128169', 'TIP3_128250', 'TIP3_128754', 'GLU_A_244'], 'TIP3_127422': ['GLU_D_11'], 'TIP3_127722': ['TIP3_129570'], 'TIP3_127164': ['TIP3_128190', 'TIP3_129147', 'TIP3_129192', 'TIP3_129279', 'GLU_D_219'], 'TIP3_127755': ['HIS_C_74', 'ASP_J_23', 'ASP_J_19'], 'TIP3_127239': ['TIP3_128319', 'TIP3_129897', 'TIP3_129222'], 'TIP3_129945': ['ASP_O_14'], 'TIP3_127149': ['TIP3_128406', 'TIP3_128772', 'TIP3_130011'], 'TIP3_127686': ['SER_B_169', 'TIP3_129876', 'TIP3_130065', 'GLU_B_266'], 'TIP3_128190': ['GLU_D_219', 'TIP3_129192', 'TIP3_129318'], 'THR_D_248': ['TIP3_129045'], 'TIP3_127971': ['ASN_A_312', 'TIP3_129336'], 'TIP3_129063': ['TIP3_127374'], 'TIP3_130002': ['TIP3_127689', 'TIP3_129450'], 'TIP3_127476': ['ASP_B_15', 'TIP3_129231', 'TIP3_129807'], 'TIP3_128232': ['TIP3_128937', 'TIP3_129243'], 'TIP3_128271': ['TIP3_130092'], 'TIP3_47763': ['SER_C_275'], 'TIP3_129465': ['TIP3_127269', 'TIP3_128487', 'TIP3_128526'], 'TIP3_128316': ['TIP3_129186'], 'TIP3_128673': ['ASP_D_25', 'TIP3_127404', 'TIP3_129102'], 'TIP3_128439': ['TIP3_127245', 'ASP_B_372'], 'THR_C_412': ['TIP3_127848'], 'TIP3_129321': ['THR_G_27'], 'ASN_A_325': ['TIP3_47538'], 'TIP3_129405': ['TIP3_127866', 'TIP3_127959'], 'TIP3_130089': ['TIP3_128403'], 'TIP3_127203': ['TIP3_128835', 'TIP3_129147'], 'TIP3_127131': ['TIP3_127596'], 'TIP3_127143': ['GLU_D_337', 'TIP3_127917'], 'ASN_A_234': ['TIP3_128217', 'TIP3_128988', 'TIP3_127623'], 'TIP3_129177': ['ASP_D_333', 'TIP3_128886', 'TIP3_128940'], 'TIP3_129081': ['TIP3_130077'], 'TIP3_129936': ['ASP_B_119'], 'LYS_M_18': ['TIP3_130077'], 'TIP3_47646': ['TIP3_128154', 'GLU_D_323', 'TIP3_127998', 'TIP3_130074'], 'TIP3_127524': ['TIP3_129090', 'TIP3_129183'], 'TIP3_129777': ['TIP3_129156'], 'TIP3_129636': ['TIP3_129621', 'TIP3_127389'], 'SER_K_33': ['TIP3_47907'], 'TIP3_127308': ['GLU_C_464', 'TIP3_129450'], 'THR_D_80': ['TIP3_128613'], 'THR_B_271': ['TIP3_127827'], 'TIP3_129717': ['GLU_K_11', 'ASN_K_13'], 'TIP3_128742': ['GLU_C_394', 'TIP3_127407', 'TIP3_129843', 'TIP3_127227'], 'TRP_B_257': ['TIP3_47709'], 'TIP3_128436': ['TIP3_128733', 'THR_B_255'], 'TIP3_47892': ['TIP3_127848'], 'TIP3_127581': ['TIP3_129576'], 'TIP3_128205': ['TIP3_127506'], 'SER_B_365': ['TIP3_127737', 'TIP3_129771'], 'TIP3_127644': ['TIP3_128676'], 'ARG_N_24': ['TIP3_128523', 'TIP3_128220', 'TIP3_128304'], 'TIP3_129543': ['TIP3_127398', 'TIP3_127572', 'ASP_M_169', 'TIP3_128775', 'TIP3_129048'], 'TIP3_128865': ['TIP3_127455'], 'THR_A_316': ['TIP3_129009'], 'TIP3_128208': ['TIP3_129417', 'TIP3_127425'], 
'TIP3_129180': ['TIP3_128670'], 'TIP3_128187': ['GLU_P_85'], 'LYS_P_134': ['TIP3_127848'], 'HIS_B_469': ['TIP3_127680', 'TIP3_129378'], 'TIP3_47634': ['ASP_B_276', 'TIP3_128214'], 'TIP3_128364': ['TYR_O_21', 'TIP3_129804'], 'ARG_A_64': ['TIP3_127287', 'TIP3_129852', 'TIP3_47610', 'TIP3_127194', 'TIP3_47934', 'TIP3_128142', 'TIP3_129612'], 'TIP3_127914': ['SER_D_300'], 'TIP3_129882': ['TIP3_127518', 'TIP3_129357', 'TIP3_129144'], 'TIP3_128571': ['TIP3_127953', 'TIP3_129021', 'TIP3_129282'], 'TIP3_129555': ['TIP3_128505', 'TIP3_129618', 'TIP3_127719', 'TIP3_129153'], 'TIP3_130104': ['GLU_D_323', 'TIP3_47925', 'TIP3_130074'], 'TIP3_129330': ['TIP3_127404', 'TIP3_128673', 'TIP3_129102'], 'LYS_P_47': ['TIP3_128151', 'TIP3_128688', 'TIP3_127185', 'TIP3_127956', 'TIP3_129171'], 'TIP3_129192': ['GLU_D_219', 'TIP3_128190', 'TIP3_129318'], 'TIP3_128358': ['TIP3_128766'], 'TIP3_129171': ['TIP3_127485', 'TIP3_128688', 'TIP3_129018'], 'TIP3_128850': ['TIP3_129810', 'TIP3_130062', 'TIP3_127782', 'TIP3_128145'], 'TIP3_127545': ['TIP3_127593'], 'TIP3_127701': ['TIP3_127128', 'TIP3_127227', 'TIP3_128913'], 'TIP3_127119': ['ASP_H_27', 'TIP3_128661'], 'TIP3_129633': ['TIP3_129645'], 'TIP3_47832': ['SER_D_33'], 'ARG_A_136': ['TIP3_127119'], 'ASN_M_155': ['TIP3_129099'], 'TIP3_127782': ['TIP3_128145', 'TIP3_128544', 'TIP3_129627', 'TIP3_128850'], 'TIP3_130050': ['TIP3_127980'], 'TIP3_128292': ['THR_O_44', 'ASP_P_83', 'THR_P_81'], 'TIP3_128826': ['SER_S_29', 'TIP3_127578'], 'TIP3_127350': ['TIP3_47652'], 'SER_A_169': ['TIP3_47550'], 'TIP3_129960': ['TIP3_128172'], 'TIP3_128991': ['TIP3_129168'], 'TIP3_128892': ['TIP3_127671', 'TIP3_128496'], 'TIP3_129381': ['ASN_C_327', 'TIP3_129879'], 'TIP3_128028': ['TIP3_127545'], 'TIP3_128397': ['TIP3_127347', 'TIP3_127812', 'TIP3_128007'], 'TIP3_128376': ['SER_B_76', 'GLU_B_94', 'TIP3_129582'], 'TIP3_129249': ['TIP3_47904', 'TIP3_127875'], 'TIP3_129174': ['THR_P_63', 'TIP3_129798'], 'TIP3_47793': ['TYR_D_315'], 'THR_D_75': ['TIP3_127176'], 'ARG_B_385': ['TIP3_129945', 'TIP3_128898', 'TIP3_47928'], 'TIP3_129774': ['TIP3_129708'], 'TIP3_129969': ['GLU_C_348', 'TYR_M_7', 'TIP3_129537', 'TIP3_130059'], 'TIP3_128832': ['TIP3_128910', 'TIP3_127140', 'TIP3_128616', 'TIP3_129432'], 'TIP3_128700': ['THR_C_316'], 'TIP3_128898': ['ASP_O_14', 'TIP3_47622', 'TIP3_129945', 'TIP3_129702'], 'ASN_C_405': ['TIP3_127239', 'TIP3_128319'], 'TIP3_129891': ['TIP3_127428', 'TIP3_128103'], 'TIP3_47598': ['TIP3_129516'], 'HIS_C_53': ['TIP3_47769'], 'TIP3_128883': ['TIP3_127203'], 'TIP3_127716': ['TIP3_127386'], 'TIP3_127236': ['GLU_C_308', 'TIP3_128046', 'TIP3_128445', 'ASN_C_294'], 'TIP3_127179': ['TIP3_128958'], 'TIP3_47589': ['TIP3_128163'], 'TIP3_128274': ['ASP_D_225', 'TIP3_127980'], 'TIP3_129975': ['TIP3_127314'], 'TIP3_127518': ['TIP3_127200', 'TIP3_129675', 'TIP3_129357', 'TIP3_129882'], 'ARG_D_265': ['TIP3_127611'], 'TIP3_128607': ['ASP_M_224', 'TIP3_127938', 'TIP3_128208', 'TIP3_129417'], 'SER_S_29': ['TIP3_128826'], 'THR_P_48': ['TIP3_127866', 'TIP3_127959'], 'TIP3_129468': ['TIP3_129753'], 'TIP3_127659': ['TIP3_47868', 'TIP3_128412', 'TIP3_128457'], 'TIP3_128769': ['TIP3_128727'], 'TIP3_129009': ['TIP3_129681', 'THR_A_316', 'ASP_A_319'], 'TIP3_129432': ['TIP3_127140', 'TIP3_128910', 'TIP3_127371'], 'TIP3_129246': ['TIP3_127284', 'TIP3_130053'], 'TIP3_127797': ['GLU_D_344', 'TIP3_127809', 'ASP_B_380'], 'TIP3_129870': ['ASN_C_415'], 'TIP3_127737': ['SER_B_365', 'TIP3_129771', 'TIP3_127320', 'TIP3_130116'], 'TIP3_128043': ['TYR_C_340'], 'TIP3_129735': ['TIP3_130173', 'THR_C_346', 
'GLU_C_348'], 'TIP3_127677': ['TIP3_47892', 'TIP3_129675', 'TIP3_127848', 'TIP3_129297'], 'TIP3_128625': ['TIP3_128019', 'TIP3_129615'], 'TIP3_129489': ['TIP3_47679', 'TIP3_127899', 'TIP3_127893', 'TIP3_129720'], 'TIP3_129276': ['TIP3_127356'], 'TIP3_128451': ['GLU_B_364', 'TIP3_127359', 'TIP3_127614', 'ASP_D_297', 'TIP3_127347'], 'TIP3_127902': ['TIP3_127524'], 'TIP3_129429': ['TYR_B_279'], 'TIP3_129630': ['TIP3_127650'], 'ASN_A_296': ['TIP3_127197', 'TIP3_127641', 'TIP3_129684'], 'TIP3_130092': ['TIP3_128271'], 'TIP3_127809': ['GLU_D_344', 'TIP3_127797'], 'ARG_F_45': ['TIP3_128361', 'TIP3_129561', 'TIP3_128931'], 'TIP3_129957': ['TIP3_128130', 'ASP_G_9'], 'TIP3_47901': ['TYR_E_55'], 'TIP3_47643': ['TIP3_128379'], 'ASN_M_186': ['TIP3_127887'], 'TIP3_128760': ['TIP3_128394'], 'TIP3_128022': ['GLU_B_353', 'ASP_B_372', 'TIP3_127245', 'TIP3_128439'], 'TYR_E_44': ['TIP3_129339'], 'ARG_B_358': ['TIP3_128430', 'TIP3_129483', 'TIP3_128853', 'TIP3_129864'], 'TIP3_47727': ['TIP3_47748', 'TIP3_47733', 'TIP3_47754'], 'TIP3_129222': ['TIP3_128808', 'TIP3_130122'], 'TIP3_127830': ['SER_B_400'], 'TIP3_128754': ['TIP3_127581', 'TIP3_129645', 'GLU_A_244', 'TIP3_128250', 'TIP3_128703'], 'THR_P_58': ['TIP3_128340'], 'ARG_C_343': ['TIP3_127134', 'TIP3_127251', 'TIP3_128778'], 'TIP3_128250': ['GLU_A_244', 'TIP3_128703', 'TIP3_128754'], 'TIP3_128193': ['TIP3_127128', 'TIP3_129843', 'TIP3_128652'], 'ARG_M_115': ['TIP3_129765', 'TIP3_127428', 'TIP3_129891'], 'TIP3_128922': ['TYR_B_6', 'TIP3_129906'], 'TIP3_128685': ['TIP3_47784', 'TIP3_127998'], 'TIP3_128931': ['TYR_P_26'], 'TIP3_127326': ['TIP3_128412', 'TIP3_128580', 'TIP3_129228'], 'HIS_B_216': ['TIP3_47667'], 'ARG_R_39': ['TIP3_127563'], 'TIP3_128145': ['TIP3_128544', 'TIP3_129627', 'TIP3_129717'], 'ASN_A_322': ['TIP3_47583', 'TIP3_127638'], 'TIP3_128499': ['ASP_D_333', 'TIP3_128613', 'TIP3_129210'], 'TIP3_130122': ['TIP3_128808', 'TIP3_129222'], 'TIP3_128361': ['SER_I_37', 'TIP3_129828'], 'TIP3_47868': ['TIP3_127659'], 'TIP3_129573': ['TYR_A_235', 'GLU_N_25', 'TIP3_130062', 'TIP3_129810'], 'ARG_M_73': ['TIP3_127776'], 'TIP3_47562': ['ASP_A_61', 'TIP3_47559', 'TIP3_129006'], 'TIP3_127398': ['TIP3_128898', 'TIP3_129048', 'TIP3_127572'], 'TIP3_129618': ['TIP3_128505', 'TIP3_129555'], 'TIP3_130086': ['GLU_D_302', 'TIP3_129087'], 'ASN_D_194': ['TIP3_47802', 'TIP3_47796'], 'TIP3_127401': ['GLU_C_456', 'TIP3_127500', 'TIP3_127461'], 'TIP3_127632': ['TIP3_127326', 'TIP3_127659', 'TIP3_128412'], 'TIP3_127851': ['ASP_C_187', 'TIP3_128709'], 'TIP3_129225': ['TIP3_127272', 'TIP3_128064'], 'ARG_S_35': ['TIP3_128448'], 'SER_C_310': ['TIP3_47727'], 'LYS_O_47': ['TIP3_129126'], 'TIP3_47778': ['HIS_C_91', 'TIP3_47739', 'THR_C_94'], 'TIP3_130059': ['TYR_M_7'], 'TIP3_128589': ['TIP3_127383'], 'TIP3_129459': ['TIP3_127395', 'TIP3_128859'], 'TIP3_129585': ['SER_D_254', 'TIP3_128565', 'TIP3_128691', 'TIP3_127341', 'TIP3_128763'], 'TIP3_129087': ['GLU_D_302', 'TIP3_129501'], 'TIP3_129588': ['TIP3_130137'], 'TIP3_127167': ['TIP3_129612', 'TIP3_129594'], 'TIP3_128454': ['TIP3_127113', 'TIP3_129843', 'GLU_C_394', 'HIS_C_398', 'TIP3_129129', 'TIP3_129567'], 'THR_O_44': ['TIP3_128292'], 'TIP3_128418': ['TIP3_127323'], 'TYR_B_6': ['TIP3_128529'], 'TIP3_47709': ['ASN_D_292'], 'TIP3_129681': ['ASP_A_319', 'TIP3_129009'], 'TIP3_127641': ['ASN_A_296', 'TIP3_129699'], 'TIP3_127395': ['TIP3_128859', 'TIP3_129459'], 'TIP3_129552': ['TIP3_130074'], 'TIP3_127953': ['ASN_O_100', 'TIP3_128082', 'TIP3_129282', 'ASN_O_99'], 'TRP_G_62': ['TIP3_127827'], 'ARG_D_294': ['TIP3_128829', 'TIP3_47910', 
'TIP3_129744'], 'TIP3_129090': ['SER_M_191', 'TIP3_127524', 'TIP3_129183'], 'LYS_B_423': ['TIP3_128799', 'TIP3_129162', 'TIP3_128226', 'TIP3_129912'], 'TIP3_128304': ['ASP_C_473', 'TIP3_129822'], 'TIP3_128574': ['GLU_M_218'], 'TIP3_128502': ['GLU_B_492'], 'TIP3_128286': ['TIP3_128658'], 'TIP3_47664': ['ASP_B_134'], 'TIP3_127728': ['TIP3_129300'], 'ARG_B_68': ['TIP3_47616', 'TIP3_128367'], 'TIP3_47538': ['TIP3_47586', 'ASN_A_322'], 'TIP3_128613': ['TIP3_128325', 'TIP3_128499', 'TIP3_129210'], 'TIP3_128421': ['TIP3_128220', 'TIP3_128763'], 'LYS_C_48': ['TIP3_127125', 'TIP3_128229'], 'TIP3_129723': ['TIP3_47565', 'GLU_D_242', 'TIP3_127473', 'TIP3_128391', 'TIP3_130002'], 'ASN_C_294': ['TIP3_128445', 'TIP3_128694', 'TIP3_129819', 'TIP3_129933', 'TIP3_128046'], 'ASN_C_228': ['TIP3_128592'], 'TIP3_129834': ['TIP3_128592', 'TIP3_128694', 'TIP3_128847', 'ASN_C_228'], 'TIP3_47661': ['THR_B_27'], 'TIP3_128277': ['ASN_A_338', 'TIP3_129099'], 'TIP3_128331': ['TIP3_127866', 'TIP3_127386', 'TIP3_128796'], 'ARG_A_140': ['TIP3_128631', 'TIP3_129078', 'TIP3_129816', 'TIP3_127164'], 'TIP3_128784': ['TIP3_47847', 'TIP3_127899', 'TIP3_129489'], 'TIP3_127950': ['TIP3_127926', 'TIP3_129198', 'TIP3_129441'], 'TIP3_127707': ['THR_B_327', 'TIP3_130119'], 'SER_I_37': ['TIP3_128361'], 'TIP3_130041': ['TIP3_127536', 'TIP3_127437'], 'SER_A_86': ['TIP3_129360'], 'TIP3_128031': ['TIP3_129153', 'TIP3_129555', 'TIP3_129618'], 'TIP3_128319': ['TIP3_128979', 'TIP3_129897'], 'TIP3_127665': ['TIP3_129732'], 'TIP3_47907': ['SER_K_32'], 'TRP_C_151': ['TIP3_129402'], 'ARG_D_24': ['TIP3_129354', 'TIP3_127806', 'TIP3_130035'], 'ARG_M_189': ['TIP3_129396'], 'ASN_A_298': ['TIP3_127992', 'TIP3_129684', 'TIP3_47592'], 'TIP3_128460': ['TIP3_47922', 'TIP3_128967'], 'SER_M_217': ['TIP3_47916'], 'TYR_E_55': ['TIP3_47898'], 'TIP3_127848': ['GLU_A_329', 'THR_C_412', 'TIP3_47892'], 'LYS_M_69': ['TIP3_127152', 'TIP3_129972'], 'TIP3_129660': ['TIP3_129504', 'TIP3_129789'], 'TIP3_47532': ['GLU_A_333', 'TIP3_47571', 'TIP3_47574', 'ASN_A_181', 'TIP3_47856'], 'TIP3_127842': ['SER_D_84', 'TIP3_127281', 'TIP3_129795'], 'TIP3_47895': ['THR_E_49'], 'HIS_C_398': ['TIP3_128454', 'TIP3_129129', 'TIP3_128343'], 'ASN_K_13': ['TIP3_128484', 'TIP3_129888', 'TIP3_128538'], 'TIP3_127938': ['TIP3_129861', 'ASP_M_222', 'TIP3_127734', 'TIP3_128223'], 'TIP3_129861': ['TYR_M_151', 'TIP3_128076'], 'TIP3_128334': ['TIP3_127182', 'TIP3_127467'], 'TIP3_129240': ['GLU_G_17', 'TIP3_47664'], 'HIS_D_336': ['TIP3_129000'], 'TIP3_129684': ['TIP3_127992', 'TIP3_130113', 'TIP3_127197', 'TIP3_127332'], 'TIP3_129309': ['ASN_O_99', 'TIP3_128277', 'TIP3_129099'], 'TIP3_127218': ['TIP3_127821', 'TIP3_129393'], 'TIP3_129819': ['TIP3_127242', 'TIP3_128625', 'TIP3_128694', 'TIP3_128445'], 'TIP3_129678': ['TIP3_127566', 'TIP3_128262'], 'TIP3_47685': ['ASP_B_334', 'TIP3_47889'], 'TIP3_127494': ['TIP3_128556', 'TIP3_130011'], 'TIP3_129513': ['TIP3_129363', 'GLU_C_83'], 'ASN_M_147': ['TIP3_127785'], 'TIP3_47697': ['TIP3_47625', 'GLU_B_41', 'TIP3_129663'], 'TIP3_127374': ['ASN_C_44', 'ASP_C_150'], 'TYR_P_137': ['TIP3_127200', 'TIP3_129522'], 'TIP3_127734': ['ASP_M_222', 'ASP_M_224', 'TIP3_127938'], 'TIP3_128949': ['ASP_O_96', 'TIP3_47937', 'TIP3_128466', 'TIP3_128622'], 'TIP3_47886': ['TIP3_47547', 'GLU_D_312'], 'TIP3_127365': ['TIP3_129651'], 'TIP3_47721': ['TIP3_47511'], 'ARG_C_449': ['TIP3_128718'], 'TIP3_47823': ['TIP3_47793'], 'TIP3_129942': ['TIP3_128466', 'TIP3_127230', 'TIP3_128595'], 'TIP3_47715': ['TIP3_47766', 'MET_C_342'], 'TIP3_128496': ['TIP3_127671', 'TIP3_128892', 
'TIP3_127701', 'TIP3_128235'], 'TIP3_128706': ['TIP3_128961'], 'SER_D_172': ['TIP3_128139'], 'TIP3_130038': ['TIP3_128313', 'TIP3_127542'], 'TRP_D_93': ['TIP3_127455'], 'ARG_C_262': ['TIP3_127731'], 'TIP3_129255': ['TYR_G_49'], 'TIP3_47541': ['TYR_A_161', 'HIS_A_190', 'TIP3_47553', 'TIP3_47556', 'GLU_A_189'], 'TIP3_128388': ['GLU_B_235', 'TIP3_128283'], 'TYR_P_26': ['TIP3_127449', 'TIP3_128469'], 'TIP3_129741': ['TIP3_128871', 'TIP3_127359'], 'TIP3_127608': ['TIP3_127341', 'TIP3_128175', 'TIP3_128904', 'TIP3_127335', 'TIP3_129453'], 'TIP3_129996': ['TIP3_128601'], 'TIP3_129855': ['TIP3_127230', 'TIP3_128364'], 'LYS_P_129': ['TIP3_129927', 'TIP3_128973'], 'TIP3_127821': ['TIP3_128949', 'ASP_O_96', 'TIP3_47937'], 'ASN_B_318': ['TIP3_47634'], 'TIP3_127629': ['TIP3_127935', 'TIP3_128907'], 'TIP3_47676': ['GLU_B_428'], 'TIP3_128829': ['TIP3_47910'], 'TIP3_127299': ['TIP3_130041'], 'TIP3_128349': ['SER_P_39', 'TIP3_127122', 'TIP3_128343', 'TIP3_129624'], 'TIP3_129738': ['TIP3_129597'], 'ARG_E_8': ['TIP3_129153', 'TIP3_129786'], 'TIP3_128811': ['TIP3_128517'], 'TIP3_47613': ['TIP3_47637'], 'TIP3_128166': ['TIP3_127380'], 'TIP3_129912': ['TIP3_128202', 'TIP3_128226', 'TIP3_129162'], 'TIP3_47610': ['TYR_M_151'], 'TIP3_129810': ['TIP3_128850', 'GLU_N_25', 'TIP3_130062'], 'TIP3_127839': ['TIP3_128889', 'GLU_L_30'], 'TIP3_129033': ['TIP3_127509', 'TIP3_128640', 'TIP3_128664'], 'TIP3_127587': ['ASN_A_234', 'TIP3_128538'], 'TIP3_127896': ['TIP3_128934', 'TIP3_128400', 'TIP3_128619'], 'TIP3_129219': ['TIP3_128184', 'TIP3_127794', 'TIP3_128307'], 'TIP3_130020': ['TIP3_130038'], 'TIP3_128934': ['TIP3_129249'], 'ARG_D_326': ['TIP3_47925', 'TIP3_127329', 'TIP3_47631', 'TIP3_128055', 'TIP3_47784', 'TIP3_130104'], 'TIP3_128559': ['TIP3_129978'], 'TIP3_127377': ['TIP3_127965'], 'TIP3_130107': ['TIP3_127224'], 'SER_C_406': ['TIP3_129030'], 'TIP3_129768': ['GLU_C_389'], 'THR_N_5': ['TIP3_128472'], 'TIP3_129672': ['ASP_O_96', 'TIP3_47937', 'TIP3_128622', 'TIP3_129366'], 'LYS_M_86': ['TIP3_129540'], 'TIP3_127875': ['TIP3_127896', 'TIP3_128400', 'TIP3_128934'], 'TIP3_130149': ['TIP3_129288', 'TIP3_129591', 'TIP3_129621', 'TIP3_129636', 'ASP_C_360'], 'TIP3_127836': ['TIP3_127257'], 'TIP3_130125': ['TIP3_127680'], 'TIP3_129021': ['TIP3_128082', 'TIP3_127953', 'TIP3_128571', 'TIP3_129282'], 'ARG_D_26': ['TIP3_127806', 'TIP3_47805', 'TIP3_130035'], 'ARG_E_61': ['TIP3_128586'], 'TIP3_129348': ['TIP3_128121', 'TIP3_129810'], 'TIP3_129933': ['TIP3_128289'], 'TIP3_127128': ['TIP3_127407', 'TIP3_128193', 'TIP3_128964', 'TIP3_129843', 'TIP3_127227', 'TIP3_127701', 'TIP3_128235'], 'TIP3_127698': ['TIP3_127644', 'TIP3_127611'], 'TIP3_127500': ['TIP3_127401', 'GLU_C_456'], 'THR_C_139': ['TIP3_127317'], 'TRP_D_58': ['TIP3_128670'], 'ARG_H_34': ['TIP3_128955', 'TIP3_129651', 'TIP3_128307'], 'TIP3_128241': ['SER_C_216', 'GLU_C_221'], 'TIP3_127776': ['TIP3_127206', 'TIP3_128604', 'GLU_A_104'], 'TIP3_127764': ['TIP3_127647', 'TIP3_128514'], 'TIP3_129168': ['TIP3_128349', 'HIS_C_398', 'TIP3_127113', 'TIP3_128454'], 'TRP_A_131': ['TIP3_47721'], 'TIP3_47484': ['GLU_A_65', 'TIP3_128532'], 'TIP3_129897': ['TIP3_127566', 'TIP3_128355', 'TIP3_128058'], 'LYS_B_418': ['TIP3_129048', 'TIP3_129702', 'TIP3_47622', 'TIP3_128898'], 'TIP3_129561': ['TIP3_128802', 'TIP3_129477'], 'TIP3_128169': ['TIP3_129645', 'THR_D_243', 'TIP3_128640'], 'ARG_B_272': ['TIP3_47634', 'TIP3_47709', 'TIP3_130167', 'TIP3_129426', 'TIP3_128214', 'TIP3_128907'], 'TIP3_130173': ['GLU_C_348', 'TIP3_129735'], 'CYS_D_71': ['TIP3_47850'], 'TIP3_129783': ['TIP3_128796', 
'TIP3_127716'], 'TIP3_130116': ['TIP3_127320', 'TIP3_127836', 'TIP3_128202', 'TIP3_128124', 'TIP3_128226'], 'TIP3_47829': ['TIP3_127854', 'TIP3_129003'], 'ARG_G_12': ['TIP3_129957', 'TIP3_128952'], 'TIP3_129675': ['TIP3_127518', 'TIP3_129144', 'TIP3_129357', 'TIP3_129882', 'TIP3_47892', 'TIP3_127677'], 'TIP3_47580': ['TIP3_47463'], 'TIP3_128325': ['TIP3_128133'], 'TIP3_128151': ['TIP3_128688', 'GLU_C_413'], 'SER_B_419': ['TIP3_128799'], 'TIP3_127224': ['TIP3_128244'], 'LYS_M_160': ['TIP3_129087', 'TIP3_129501', 'TIP3_130086', 'TIP3_47646', 'TIP3_128154'], 'TIP3_47877': ['TIP3_47844', 'TIP3_47817', 'TIP3_47853'], 'ASN_A_108': ['TIP3_128604', 'TIP3_129111'], 'TIP3_129210': ['ASP_D_333', 'TIP3_127593', 'HIS_D_61', 'TIP3_127215', 'TIP3_128112'], 'SER_B_76': ['TIP3_128376'], 'TIP3_129579': ['TYR_D_141', 'TIP3_128793'], 'TIP3_127332': ['TIP3_127170', 'TIP3_130113'], 'TIP3_127905': ['ASN_B_53', 'ASN_B_331'], 'TIP3_127434': ['TIP3_128232'], 'TIP3_129663': ['GLU_B_41', 'TIP3_47625'], 'ASN_S_58': ['TIP3_129204'], 'TIP3_127317': ['TIP3_129063', 'THR_C_139', 'GLU_C_141'], 'TIP3_128505': ['TIP3_127719'], 'ASN_C_155': ['TIP3_129237'], 'ASN_A_266': ['TIP3_128502'], 'TIP3_128793': ['TIP3_130152', 'TIP3_127698'], 'ARG_A_269': ['TIP3_128640', 'TIP3_127509', 'TIP3_127980', 'TIP3_128274'], 'ARG_B_384': ['TIP3_130155', 'TIP3_130146', 'TIP3_127326', 'TIP3_127659'], 'LYS_C_339': ['TIP3_128622', 'TIP3_129099', 'TIP3_129672', 'TIP3_128277', 'TIP3_129309', 'TIP3_128043'], 'TIP3_129726': ['TYR_A_73', 'TIP3_129303'], 'TIP3_128598': ['TIP3_129255'], 'TIP3_128433': ['TIP3_127278', 'TIP3_129117'], 'ARG_P_66': ['TIP3_127884', 'TIP3_129072'], 'TIP3_129972': ['ASP_A_103', 'TIP3_127152', 'TIP3_128679', 'TIP3_129951'], 'TIP3_129243': ['TIP3_128937'], 'TIP3_127311': ['GLU_A_329', 'TIP3_127833', 'TIP3_127848', 'ASP_A_342', 'TIP3_127170'], 'TIP3_127827': ['TIP3_128583', 'TIP3_129429', 'TIP3_127110'], 'TIP3_130158': ['TIP3_128511'], 'TIP3_47850': ['SER_D_65'], 'TIP3_128871': ['TIP3_127359', 'TIP3_129741', 'TIP3_127347'], 'ASN_D_350': ['TIP3_127908'], 'TIP3_130110': ['ASP_C_473', 'TIP3_127788'], 'THR_P_63': ['TIP3_47949'], 'TIP3_128001': ['HIS_B_466', 'SER_B_239'], 'TIP3_129150': ['TIP3_127155', 'TIP3_129930'], 'TIP3_129507': ['GLU_O_93', 'TIP3_128238'], 'LYS_D_264': ['TIP3_127473', 'TIP3_128250', 'TIP3_47565'], 'HIS_A_304': ['TIP3_128973', 'TIP3_129705', 'TIP3_127674', 'TIP3_129474'], 'TIP3_128109': ['TIP3_129549', 'TIP3_129816', 'TIP3_129078'], 'TIP3_129624': ['TIP3_128991'], 'SER_D_230': ['TIP3_127461', 'TIP3_128631'], 'TYR_C_149': ['TIP3_129402'], 'ASN_A_338': ['TIP3_47766', 'TIP3_128730'], 'TIP3_129126': ['TIP3_129783', 'TIP3_128292'], 'ASN_B_53': ['TIP3_128985'], 'TIP3_129267': ['TIP3_129990'], 'ARG_B_326': ['TIP3_127854', 'TIP3_129003', 'TIP3_128073', 'TIP3_128451', 'TIP3_128079', 'TIP3_129744'], 'ARG_K_7': ['TIP3_127536'], 'TIP3_47547': ['TIP3_130098'], 'TIP3_129411': ['TIP3_47928'], 'SER_A_70': ['TIP3_47448'], 'TIP3_127572': ['TIP3_127398', 'TIP3_129543'], 'TIP3_128730': ['ASN_A_338'], 'SER_B_74': ['TIP3_127818', 'TIP3_129582'], 'TYR_M_168': ['TIP3_47925'], 'TIP3_130071': ['TIP3_128808', 'TIP3_128385'], 'TIP3_128622': ['ASP_O_96', 'TIP3_47937', 'TIP3_128466', 'TIP3_128949', 'TIP3_129672'], 'LYS_L_34': ['TIP3_129051'], 'TIP3_129669': ['TIP3_127191'], 'TIP3_127335': ['SER_D_262', 'TIP3_129453', 'TIP3_129573', 'THR_K_15', 'TIP3_128904'], 'TIP3_129057': ['SER_P_39', 'TIP3_127860'], 'TIP3_127341': ['TIP3_127608', 'TIP3_128175', 'TIP3_128904', 'TIP3_129585', 'TIP3_127263'], 'SER_B_79': ['TIP3_127617'], 'TIP3_47586': 
['TIP3_47538', 'TIP3_47526'], 'TIP3_127578': ['SER_S_36', 'TIP3_128118'], 'TIP3_47670': ['ASP_B_313', 'TIP3_47658', 'TIP3_47682'], 'TIP3_129006': ['TIP3_47493', 'ASP_A_61', 'TIP3_47559', 'TIP3_47562'], 'TIP3_127443': ['TIP3_129132', 'TIP3_129876'], 'TIP3_130011': ['TIP3_127494', 'TIP3_128406', 'TIP3_128556', 'TIP3_127149', 'TIP3_128610'], 'TIP3_127941': ['TIP3_128193'], 'TIP3_47478': ['TIP3_47532'], 'TIP3_129771': ['TIP3_127836', 'TIP3_130116', 'TIP3_128871'], 'TIP3_128814': ['TIP3_128271'], 'TIP3_128307': ['TIP3_128184', 'TIP3_127794'], 'TIP3_47790': ['TIP3_47829', 'TIP3_127854'], 'TIP3_127482': ['TIP3_127293'], 'TIP3_128637': ['SER_C_416', 'TIP3_128259', 'TIP3_129477'], 'TIP3_128889': ['THR_L_29'], 'TIP3_128427': ['ASP_C_460', 'TIP3_130161'], 'TIP3_47880': ['TIP3_47817'], 'LYS_C_381': ['TIP3_127977'], 'TIP3_47787': ['ASN_D_292'], 'TIP3_128988': ['TIP3_128091', 'TIP3_129924', 'TIP3_128217', 'TIP3_129906'], 'TIP3_129291': ['TIP3_129117', 'TIP3_129909'], 'ARG_C_390': ['TIP3_127233'], 'TIP3_47460': ['TYR_A_161', 'TIP3_47541', 'TIP3_47556'], 'TIP3_129909': ['GLU_C_71'], 'TIP3_127359': ['GLU_B_364', 'TIP3_127614', 'TIP3_128451', 'TIP3_127347', 'TIP3_128871'], 'TIP3_128268': ['TIP3_128595'], 'TIP3_128688': ['TIP3_129018', 'TIP3_129171'], 'TIP3_129399': ['MET_M_225', 'SER_M_191', 'ASP_M_223', 'TIP3_129183'], 'TIP3_127272': ['GLU_D_337', 'TIP3_127143', 'TIP3_127917'], 'TIP3_128733': ['SER_B_260', 'THR_B_262', 'TIP3_47703'], 'SER_B_388': ['TIP3_127797'], 'TIP3_130152': ['TIP3_128286'], 'TIP3_128553': ['TIP3_127284', 'TIP3_128844'], 'TIP3_127155': ['TIP3_129150'], 'TRP_B_340': ['TIP3_47676'], 'TIP3_127122': ['SER_P_39', 'TIP3_127860', 'TIP3_128343'], 'TIP3_127209': ['TIP3_128334', 'TIP3_127467'], 'ASN_A_312': ['TIP3_129207'], 'TIP3_128016': ['TYR_P_75'], 'TIP3_128628': ['THR_B_371'], 'TIP3_129075': ['TIP3_47799', 'TIP3_128829'], 'TIP3_128976': ['TIP3_128592', 'TIP3_128694', 'TIP3_128847', 'TIP3_127209', 'TIP3_127467'], 'TIP3_47514': ['TIP3_47517'], 'TIP3_47517': ['ASP_A_61', 'TIP3_47550', 'GLU_A_333', 'GLU_C_354'], 'TIP3_47481': ['SER_A_167'], 'TIP3_128085': ['TIP3_128790', 'TIP3_129888'], 'TIP3_129873': ['TIP3_127671', 'TIP3_128496'], 'TIP3_128664': ['TIP3_127509', 'TIP3_129633'], 'TIP3_128577': ['TIP3_127575'], 'TIP3_129015': ['GLU_B_393', 'TIP3_47622'], 'TIP3_128226': ['TIP3_128124', 'TIP3_130116', 'TIP3_127320', 'TIP3_128202', 'TIP3_129162'], 'TIP3_127329': ['TIP3_47925'], 'TIP3_127467': ['TIP3_127389', 'TIP3_129636', 'ASP_C_360', 'TIP3_129414'], 'ARG_A_257': ['TIP3_47859', 'TIP3_128559'], 'TIP3_129708': ['TIP3_129774'], 'TIP3_128019': ['ASP_C_360', 'TIP3_129591', 'TIP3_129636', 'TIP3_127389', 'TIP3_129495'], 'HIS_B_343': ['TIP3_128853', 'TIP3_129558'], 'TIP3_129699': ['ASN_A_296', 'TIP3_127641', 'TIP3_129141', 'TIP3_129747'], 'ARG_B_472': ['TIP3_127680', 'TIP3_128283', 'TIP3_128388', 'TIP3_128529'], 'TIP3_129843': ['TIP3_127113', 'TIP3_128742', 'TIP3_128964', 'GLU_C_394', 'TIP3_127407'], 'TIP3_128532': ['GLU_D_312', 'TIP3_128634', 'TIP3_47562', 'TIP3_129006'], 'ASN_A_301': ['TIP3_47712'], 'TIP3_128766': ['ASP_A_25'], 'TRP_P_130': ['TIP3_128310'], 'TYR_S_27': ['TIP3_128811'], 'TIP3_130044': ['TIP3_128223'], 'TIP3_127923': ['ASP_C_360', 'TIP3_129414', 'TIP3_127182', 'TIP3_127467'], 'TIP3_130137': ['TIP3_129228'], 'TIP3_127980': ['TIP3_127509', 'ASP_D_225'], 'TIP3_128121': ['TIP3_129759', 'TIP3_129810', 'TIP3_130062'], 'TIP3_129000': ['GLU_D_337', 'TIP3_127917'], 'LYS_E_84': ['TIP3_127191'], 'TIP3_127362': ['GLU_A_229', 'TIP3_127305'], 'ARG_B_124': ['TIP3_127356', 'TIP3_128247'], 'TIP3_129114': 
['TIP3_129936'], 'TIP3_127515': ['ASP_M_222', 'TIP3_127734', 'SER_M_150', 'ASP_M_223', 'TIP3_129183', 'TIP3_129399'], 'TIP3_47577': ['TIP3_47607', 'THR_A_179'], 'TIP3_129351': ['GLU_C_348', 'TYR_M_7', 'TIP3_129333', 'TIP3_130173'], 'HIS_B_114': ['TIP3_47655'], 'TIP3_128679': ['ASP_A_103', 'TIP3_129972'], 'TIP3_47871': ['TIP3_47484', 'TIP3_47562'], 'TIP3_127695': ['GLU_D_241', 'GLU_A_242', 'GLU_D_242'], 'TIP3_47559': ['GLU_A_333', 'TIP3_47532', 'TIP3_47574', 'ASP_A_61', 'TIP3_47562', 'TIP3_129006'], 'TIP3_129450': ['GLU_C_464', 'TIP3_127308'], 'TIP3_127416': ['TIP3_127761'], 'TIP3_129111': ['ASN_A_108'], 'TIP3_128958': ['TIP3_127377', 'ASP_D_20', 'TIP3_127563'], 'TIP3_130005': ['TIP3_127485', 'TIP3_129171'], 'ARG_C_423': ['TIP3_47724', 'TIP3_127236'], 'TIP3_129597': ['TIP3_128187'], 'TIP3_129906': ['TIP3_128091', 'TIP3_128676', 'TIP3_128217', 'TIP3_129579'], 'TIP3_129711': ['TIP3_127845', 'TIP3_128196'], 'TIP3_128091': ['TYR_B_6', 'TIP3_128157', 'TIP3_129906'], 'ARG_C_41': ['TIP3_47904', 'TIP3_129249'], 'ASN_A_315': ['TIP3_128973', 'TIP3_129705', 'TIP3_130164'], 'TYR_O_42': ['TIP3_128925'], 'TIP3_127611': ['GLU_A_226'], 'TIP3_128157': ['TIP3_128091', 'TIP3_129312'], 'TIP3_127614': ['GLU_B_364', 'TIP3_47799', 'TIP3_127359', 'TIP3_128451', 'TIP3_128829'], 'TIP3_127284': ['TIP3_130032', 'TIP3_128553', 'TIP3_129951'], 'TRP_C_189': ['TIP3_129834'], 'TIP3_128610': ['TYR_B_312', 'TIP3_127494'], 'TIP3_129198': ['TIP3_127419'], 'TIP3_47889': ['TIP3_47685'], 'LYS_C_457': ['TIP3_130050'], 'TIP3_128538': ['TIP3_128085', 'TIP3_129888', 'TIP3_127587'], 'TIP3_127935': ['TIP3_127917', 'TIP3_128064'], 'TIP3_127563': ['ASP_D_20', 'TIP3_128958'], 'TIP3_47925': ['GLU_D_323', 'TYR_M_168', 'TIP3_130104', 'TIP3_47679'], 'TIP3_128751': ['TIP3_128244'], 'SER_C_299': ['TIP3_127236'], 'ASN_D_190': ['TIP3_47823', 'TIP3_47835', 'TIP3_47793'], 'TIP3_129753': ['TIP3_129231', 'TIP3_129807'], 'TIP3_127110': ['TIP3_129954', 'TIP3_130065', 'TIP3_128583'], 'TIP3_129186': ['TIP3_128316', 'TIP3_129435'], 'TIP3_127845': ['THR_C_188', 'GLU_C_300', 'TIP3_127749', 'TIP3_129159', 'TIP3_128289'], 'TIP3_127800': ['SER_C_299', 'TIP3_128289', 'GLU_C_308'], 'TIP3_127569': ['TIP3_127791', 'TIP3_128238', 'GLU_O_93'], 'TIP3_128916': ['GLU_P_122', 'TIP3_127449'], 'TIP3_128973': ['HIS_A_304', 'ASN_A_315', 'TIP3_129705'], 'TIP3_129801': ['TIP3_128160'], 'TIP3_47493': ['ASP_A_59'], 'TIP3_129144': ['TIP3_47892', 'TIP3_130146'], 'TIP3_127926': ['TIP3_127950', 'TIP3_129441'], 'SER_C_416': ['TIP3_127161'], 'LYS_C_323': ['TIP3_128148', 'TIP3_129768'], 'TIP3_129477': ['TIP3_127302', 'TIP3_128259', 'TIP3_130143'], 'TIP3_47637': ['TIP3_47613'], 'TIP3_129954': ['TIP3_127110', 'TIP3_127443'], 'TIP3_127884': ['TIP3_129072', 'TIP3_127290', 'TIP3_128016'], 'TIP3_129444': ['TIP3_129510', 'GLU_B_393'], 'ARG_D_139': ['TIP3_127413', 'TIP3_127644', 'TIP3_128676', 'TIP3_127698', 'TIP3_128157', 'TIP3_129054'], 'TIP3_128979': ['TIP3_129222', 'TIP3_127239', 'TIP3_128319', 'TIP3_129897'], 'TIP3_129528': ['TIP3_128424'], 'TIP3_47487': ['TIP3_47439'], 'TIP3_129297': ['THR_C_412', 'GLU_A_329', 'TIP3_127677', 'TIP3_127848'], 'TIP3_129333': ['TIP3_130173', 'TIP3_127872', 'TIP3_128166'], 'TIP3_128055': ['TIP3_47631'], 'ARG_B_448': ['TIP3_127110'], 'TIP3_129342': ['TIP3_129798'], 'TIP3_127647': ['TIP3_128514', 'TIP3_129528'], 'TIP3_129279': ['TIP3_127164', 'TIP3_129147'], 'ARG_C_357': ['TIP3_47550'], 'TIP3_47526': ['TIP3_47595'], 'TIP3_129807': ['ASN_K_4', 'TIP3_127476', 'ASP_B_15', 'TIP3_129231'], 'TIP3_129963': ['ASN_A_315', 'TIP3_127638', 'TIP3_129927'], 'TRP_A_105': 
['TIP3_47502'], 'TIP3_127353': ['TIP3_128220', 'TIP3_128421', 'TIP3_128691', 'TIP3_128763', 'TIP3_127254', 'TIP3_128523', 'TIP3_128565', 'TIP3_129585'], 'TIP3_128490': ['TIP3_129141', 'TIP3_129921'], 'ASN_A_335': ['TIP3_47484'], 'TIP3_129606': ['TIP3_130071'], 'TIP3_127371': ['TIP3_127140', 'TIP3_128910'], 'TIP3_128475': ['TIP3_129558'], 'TYR_C_82': ['TIP3_47724', 'TIP3_128862'], 'TIP3_47703': ['TIP3_128598', 'TIP3_128436', 'TIP3_128733'], 'SER_A_222': ['TIP3_47865'], 'TIP3_129852': ['TIP3_47934', 'TIP3_127287', 'TIP3_127167'], 'TIP3_128508': ['TIP3_130047'], 'TIP3_127893': ['SER_B_365', 'TIP3_127737'], 'TIP3_129537': ['THR_M_13', 'TIP3_129969', 'TIP3_130059'], 'TIP3_47835': ['THR_D_192', 'ASN_D_190'], 'TYR_A_135': ['TIP3_127365', 'TIP3_129651'], 'TIP3_129471': ['TIP3_127404', 'TIP3_129330'], 'TIP3_129231': ['ASP_B_15', 'ASN_K_6'], 'TIP3_129924': ['TIP3_128091', 'TIP3_128988'], 'ARG_E_18': ['TIP3_129786', 'TIP3_129153'], 'TIP3_129879': ['ASN_C_327', 'TIP3_129381'], 'TIP3_129303': ['TIP3_128553', 'TIP3_130053', 'TIP3_129246'], 'TIP3_129483': ['SER_B_278', 'TIP3_128013'], 'TIP3_129051': ['GLU_L_30'], 'TIP3_128556': ['TIP3_128406'], 'TIP3_47667': ['TIP3_127803'], 'ARG_D_180': ['TIP3_128112', 'TIP3_128886', 'TIP3_128325', 'TIP3_128613', 'TIP3_128139', 'TIP3_129915', 'TIP3_129177', 'TIP3_128133', 'TIP3_128499'], 'TIP3_47718': ['TIP3_47715', 'TIP3_47508'], 'ASN_A_191': ['TIP3_47538', 'TIP3_47586', 'TIP3_47544'], 'SER_D_262': ['TIP3_129453', 'TIP3_129573'], 'TIP3_129903': ['TIP3_127584'], 'TIP3_130035': ['TIP3_130017', 'ASP_R_35', 'TIP3_128280'], 'TIP3_128046': ['TIP3_128445', 'TIP3_129819', 'GLU_C_308', 'TIP3_127236'], 'TIP3_127290': ['TIP3_129258', 'TIP3_128016'], 'TIP3_129387': ['THR_E_5', 'TIP3_127824'], 'TIP3_129744': ['TIP3_47910', 'ASP_D_297', 'TIP3_128451'], 'TIP3_129270': ['TIP3_127203', 'TIP3_128883', 'TIP3_129318'], 'TIP3_128595': ['TIP3_128268', 'TIP3_128466'], 'TIP3_47856': ['ASN_A_181', 'TIP3_129987'], 'TIP3_129798': ['THR_P_63', 'TIP3_129342', 'TIP3_129174'], 'TIP3_127134': ['TIP3_128460'], 'TIP3_128682': ['GLU_M_244'], 'TIP3_128007': ['ASP_D_297', 'TIP3_128079', 'GLU_D_302', 'TIP3_127812', 'TIP3_129501'], 'TIP3_129078': ['GLU_C_456', 'TIP3_127500'], 'TIP3_128457': ['GLU_D_343', 'TIP3_127470', 'TIP3_127929', 'TIP3_127659'], 'TIP3_129378': ['GLU_B_235', 'HIS_B_469', 'TIP3_127680'], 'TIP3_129030': ['TIP3_127890', 'TIP3_127779'], 'TIP3_129645': ['TIP3_127581', 'TIP3_128754', 'TIP3_129633'], 'TYR_C_395': ['TIP3_127800'], 'ASN_C_373': ['TIP3_127251'], 'ARG_B_18': ['TIP3_127476', 'TIP3_129807'], 'ARG_M_152': ['TIP3_127734', 'TIP3_127938', 'TIP3_128607', 'TIP3_129861', 'TIP3_127425', 'TIP3_128076'], 'SER_M_166': ['TIP3_127908', 'TIP3_130155'], 'TIP3_129564': ['ASP_P_53'], 'TIP3_127863': ['TIP3_127455', 'TIP3_128865'], 'TIP3_128259': ['TIP3_127302', 'TIP3_128916', 'GLU_P_122', 'TIP3_127449', 'TIP3_127464', 'TIP3_129327'], 'ASN_A_76': ['TIP3_127914'], 'TIP3_129582': ['GLU_B_94', 'TIP3_128376', 'SER_B_74', 'TIP3_127818'], 'TIP3_127137': ['TIP3_127653', 'TIP3_128016'], 'TIP3_47769': ['TYR_C_149'], 'SER_D_165': ['TIP3_127629', 'TIP3_129426'], 'TIP3_47736': ['TIP3_47922'], 'TIP3_129162': ['TIP3_129912', 'TIP3_127257', 'TIP3_128202'], 'TIP3_129132': ['TIP3_127443'], 'TIP3_47748': ['TIP3_47589', 'ASP_A_342', 'GLU_C_354'], 'TIP3_129690': ['TIP3_130107'], 'SER_A_85': ['TIP3_47481'], 'TIP3_129567': ['TIP3_128652', 'TIP3_129129', 'TIP3_129057'], 'TIP3_129069': ['TIP3_128190', 'TIP3_129147', 'TIP3_129270'], 'TIP3_47625': ['GLU_B_41', 'TIP3_129663'], 'TYR_P_35': ['TIP3_127995', 'TIP3_129513'], 
'ARG_B_357': ['TIP3_129660'], 'TIP3_128247': ['TIP3_127356'], 'TIP3_47454': ['SER_A_70', 'TIP3_47448'], 'TIP3_127731': ['TIP3_128649'], 'TIP3_127995': ['TYR_P_35', 'TIP3_129039', 'ASN_P_106'], 'SER_B_169': ['TIP3_127686'], 'TIP3_47910': ['TIP3_127614', 'TIP3_128829'], 'TIP3_129612': ['TIP3_127167', 'TIP3_47934'], 'LYS_C_154': ['TIP3_128178', 'TIP3_128037'], 'ASN_K_4': ['TIP3_129468'], 'TIP3_127197': ['TIP3_127332', 'TIP3_129684'], 'TIP3_129426': ['ASP_B_276', 'TIP3_127629', 'TIP3_127935'], 'TIP3_129990': ['TIP3_129267'], 'HIS_C_251': ['TIP3_47751'], 'TIP3_127302': ['TIP3_129477', 'TIP3_130143', 'TIP3_128916'], 'SER_D_245': ['TIP3_127689', 'TIP3_128256'], 'HIS_C_74': ['TIP3_127755'], 'ASN_B_348': ['TIP3_129408'], 'ARG_C_391': ['TIP3_128196', 'TIP3_127941', 'TIP3_127407', 'TIP3_127128'], 'TIP3_129129': ['TIP3_127275', 'TIP3_128652', 'GLU_C_394', 'HIS_C_398', 'TIP3_128454'], 'TIP3_129654': ['TIP3_127521', 'TIP3_127557'], 'TIP3_128034': ['TIP3_127938', 'TIP3_128607', 'GLU_D_310'], 'TIP3_128925': ['TIP3_127386', 'TIP3_127716', 'TIP3_127653'], 'TIP3_128994': ['TIP3_128100'], 'TIP3_127833': ['GLU_A_329', 'THR_C_412', 'TIP3_127311'], 'TIP3_128805': ['GLU_E_7'], 'TIP3_129282': ['ASN_O_100', 'TIP3_127953', 'TIP3_128082', 'TIP3_129021', 'ASN_O_31'], 'TIP3_129702': ['ASP_O_14', 'ASP_M_169', 'TIP3_128106', 'TIP3_129462'], 'TIP3_127245': ['ASP_B_372', 'TIP3_128439'], 'THR_O_68': ['TIP3_128010'], 'TIP3_128235': ['TIP3_127701', 'TIP3_128496', 'TIP3_129873'], 'TIP3_130161': ['ASP_C_460'], 'LYS_B_321': ['TIP3_47799', 'TIP3_127614', 'TIP3_127359', 'TIP3_129741', 'TIP3_129135'], 'TRP_B_56': ['TIP3_128610'], 'TIP3_127389': ['TIP3_129495', 'TIP3_128976', 'TIP3_129615'], 'TIP3_128835': ['TIP3_129147', 'TIP3_129279', 'TIP3_129069', 'TIP3_129270'], 'ARG_G_3': ['TIP3_129936'], 'TIP3_127710': ['TIP3_127581', 'TIP3_129645'], 'HIS_A_198': ['TIP3_47442'], 'TIP3_127758': ['SER_R_32'], 'ASN_C_293': ['TIP3_128847'], 'TIP3_127593': ['HIS_D_61', 'TIP3_127215', 'TIP3_129210', 'ASP_D_333'], 'TIP3_127404': ['ASP_D_25', 'TIP3_127551', 'TIP3_129102', 'TIP3_129471'], 'TIP3_127536': ['TIP3_127506', 'TIP3_129630', 'TIP3_130041'], 'TIP3_129423': ['TIP3_128541'], 'TIP3_128082': ['ASN_O_31', 'TIP3_129234', 'TIP3_128580'], 'TIP3_47631': ['TYR_D_296'], 'TIP3_127818': ['TIP3_128376'], 'HIS_P_118': ['TIP3_127464'], 'TIP3_128907': ['TIP3_127629', 'TIP3_127935', 'TIP3_128214'], 'TIP3_128280': ['ASP_R_35', 'TIP3_130035'], 'TIP3_129384': ['TIP3_129348', 'SER_D_262'], 'TIP3_127473': ['GLU_D_242', 'TIP3_128391', 'TIP3_129723'], 'TIP3_47640': ['SER_B_439'], 'ARG_C_461': ['TIP3_127695'], 'TIP3_128004': ['SER_C_299', 'TYR_C_395'], 'TIP3_129453': ['TIP3_128175', 'SER_D_262', 'TIP3_129573'], 'TIP3_128895': ['TIP3_128802', 'TIP3_129561'], 'TIP3_129288': ['TIP3_127740', 'TIP3_130149'], 'TIP3_127269': ['ASN_C_373', 'ASP_M_79', 'TIP3_127251', 'TIP3_128487', 'TIP3_129465'], 'TIP3_127320': ['TIP3_127737', 'TIP3_129771', 'TIP3_130116', 'TIP3_127836'], 'TIP3_129576': ['TIP3_127710', 'TIP3_127911', 'TIP3_130140'], 'TIP3_127911': ['TIP3_127710', 'TIP3_128415', 'TIP3_129024'], 'TIP3_128295': ['SER_I_38'], 'TIP3_129759': ['TIP3_128880'], 'TIP3_128937': ['TIP3_128232'], 'TRP_B_78': ['TIP3_128376'], 'TIP3_47673': ['TIP3_47889'], 'THR_D_56': ['TIP3_47898'], 'TIP3_127683': ['TYR_A_235', 'SER_D_262', 'TIP3_127335'], 'TIP3_47472': ['TIP3_47496', 'TIP3_47487'], 'TIP3_127497': ['TIP3_128358', 'TIP3_127815', 'TIP3_128970'], 'TIP3_128283': ['GLU_B_235'], 'TRP_C_291': ['TIP3_128490'], 'TIP3_128445': ['GLU_C_308', 'TIP3_127800', 'TIP3_129933'], 'TIP3_129366': 
['TIP3_47937', 'TIP3_127212', 'TIP3_128238'], 'TIP3_129627': ['TIP3_128145', 'TIP3_130062', 'TIP3_128544', 'TIP3_129717'], 'TIP3_129921': ['TIP3_128490'], 'ARG_B_7': ['TIP3_129519'], 'TYR_B_226': ['TIP3_129687'], 'TIP3_129876': ['SER_B_169'], 'TIP3_128799': ['TIP3_129162', 'SER_B_419', 'GLU_M_179'], 'ARG_D_128': ['TIP3_47832', 'TIP3_127155'], 'TIP3_127446': ['TIP3_47922'], 'TIP3_128565': ['TIP3_127254', 'TIP3_127353', 'TIP3_127788', 'TIP3_128691'], 'SER_A_232': ['TIP3_128790'], 'TIP3_128967': ['TIP3_128520'], 'TIP3_127623': ['TIP3_128988', 'TIP3_129924'], 'TIP3_127287': ['ASP_A_59', 'TIP3_129360'], 'TIP3_129360': ['ASN_A_108', 'ASP_A_59'], 'TIP3_47940': ['TIP3_127986'], 'TIP3_128079': ['ASP_D_297', 'TIP3_128007'], 'SER_C_330': ['TIP3_129507'], 'SER_M_150': ['TIP3_127515', 'TIP3_129183', 'TIP3_129399'], 'TIP3_47811': ['TYR_A_254'], 'TIP3_127470': ['GLU_D_343'], 'TIP3_127230': ['TIP3_128268', 'TIP3_127575', 'TIP3_128595', 'TIP3_129942'], 'TIP3_128928': ['GLU_A_329', 'TIP3_127677', 'TIP3_127848', 'TIP3_127311'], 'TIP3_47817': ['TIP3_47880'], 'TIP3_130017': ['TIP3_128280', 'TIP3_130035'], 'ASN_A_181': ['TIP3_47478'], 'TIP3_47859': ['TIP3_129150', 'ASP_D_25'], 'TIP3_128076': ['GLU_A_65', 'TYR_M_151'], 'TIP3_127596': ['GLU_B_266', 'TIP3_128697', 'TIP3_128406', 'TIP3_130011'], 'TIP3_128601': ['TIP3_129225', 'TIP3_129996'], 'TIP3_127590': ['TIP3_129306'], 'TIP3_128238': ['TIP3_127791', 'TIP3_127212', 'TIP3_129366'], 'TIP3_129003': ['ASN_K_37', 'TIP3_47829', 'ASP_D_297', 'TIP3_127854'], 'TIP3_129915': ['TIP3_128139'], 'TIP3_127998': ['GLU_D_323', 'TIP3_47646', 'TIP3_128397', 'TIP3_129501'], 'TIP3_127206': ['TIP3_128142', 'TIP3_127776'], 'SER_P_39': ['TIP3_127860'], 'TIP3_128424': ['TIP3_127383', 'TIP3_128589'], 'TYR_M_151': ['TIP3_128076', 'TIP3_129861'], 'TIP3_128223': ['ASP_M_222', 'TIP3_127938', 'GLU_D_310', 'TIP3_128034'], 'TIP3_127566': ['TIP3_128979', 'TIP3_129897', 'SER_I_38'], 'TIP3_128067': ['ASN_M_124', 'ASP_M_99', 'ASP_M_102'], 'ARG_C_447': ['TIP3_127527', 'TIP3_127401', 'TIP3_128109', 'TIP3_129549'], 'TIP3_128196': ['TIP3_129159', 'TIP3_127845', 'TIP3_129711'], 'TIP3_130113': ['TIP3_127170', 'TIP3_127332', 'TIP3_127311', 'TIP3_127833'], 'TIP3_47865': ['SER_A_221', 'SER_A_222'], 'TIP3_128802': ['SER_I_38', 'TIP3_128295', 'TIP3_129477', 'TIP3_129561'], 'TIP3_128583': ['TIP3_129429', 'TIP3_130065'], 'TIP3_47520': ['SER_A_134', 'TIP3_47451', 'ASN_D_220'], 'TIP3_127869': ['TYR_A_235', 'THR_K_15', 'GLU_N_25'], 'TIP3_128298': ['TIP3_127182'], 'TIP3_127899': ['TIP3_47847', 'TIP3_128784', 'TIP3_129489'], 'TIP3_128256': ['GLU_C_464', 'TIP3_127689', 'TIP3_129450', 'TIP3_127308'], 'ASN_P_49': ['TIP3_127653', 'TIP3_128925'], 'TIP3_129312': ['TIP3_130125'], 'TIP3_129927': ['TIP3_129963'], 'TIP3_127908': ['ASN_D_350', 'TIP3_47913'], 'ASN_K_37': ['TIP3_129003'], 'TIP3_128940': ['TIP3_129996', 'TIP3_128601'], 'SER_A_177': ['TIP3_47601'], 'TIP3_129339': ['TIP3_47901'], 'TIP3_47847': ['TIP3_127572'], 'HIS_C_444': ['TIP3_127527'], 'HIS_C_56': ['TIP3_47775'], 'TIP3_129687': ['TIP3_127296', 'TIP3_129012', 'TIP3_130008'], 'TIP3_128964': ['TIP3_127227', 'TIP3_128742', 'TIP3_127128', 'TIP3_128235'], 'TIP3_47937': ['ASP_O_96', 'TIP3_128622', 'TIP3_128949'], 'TIP3_47448': ['TIP3_47439', 'TIP3_47529'], 'TIP3_129099': ['ASN_M_155', 'ASP_O_96', 'TIP3_128622'], 'TIP3_128010': ['TIP3_129738'], 'TIP3_128640': ['TIP3_128664', 'TIP3_129033', 'TIP3_129633', 'ASN_A_267'], 'TIP3_130083': ['TIP3_127584', 'TIP3_129903'], 'ARG_C_197': ['TIP3_127851'], 'TIP3_128568': ['THR_P_46'], 'LYS_D_23': ['TIP3_130101'], 'THR_P_56': 
['TIP3_128310'], 'LYS_B_308': ['TIP3_127494', 'TIP3_128697', 'TIP3_130011'], 'TYR_A_237': ['TIP3_129723'], 'TIP3_129414': ['ASP_C_360', 'TIP3_127923', 'TIP3_127389', 'TIP3_128019', 'TIP3_129591'], 'TIP3_129438': ['GLU_G_17'], 'TIP3_127992': ['TIP3_130113'], 'TIP3_128289': ['THR_C_188', 'TIP3_127845', 'TIP3_129711'], 'TIP3_128952': ['TIP3_128247', 'GLU_B_121'], 'TIP3_128139': ['TIP3_128613'], 'TIP3_129474': ['TIP3_127674', 'TIP3_128313', 'TIP3_130020'], 'TIP3_130062': ['GLU_N_25', 'TIP3_129573', 'ASN_K_13'], 'TIP3_129300': ['ASP_B_49'], 'TIP3_128163': ['TIP3_47589'], 'TIP3_129495': ['TIP3_128046', 'TIP3_129819', 'THR_C_305'], 'TIP3_47739': ['TIP3_129696'], 'TIP3_129159': ['SER_C_299', 'TYR_C_395', 'GLU_C_300', 'TIP3_127749', 'TIP3_127845'], 'THR_D_243': ['TIP3_128169', 'TIP3_128640', 'TIP3_128664'], 'TIP3_129888': ['GLU_K_11'], 'TIP3_129417': ['TIP3_128634'], 'ARG_N_28': ['TIP3_128121', 'TIP3_129810', 'TIP3_128484'], 'TYR_D_59': ['TIP3_129180'], 'LYS_P_103': ['TIP3_128454', 'TIP3_129057', 'TIP3_127113', 'TIP3_128349', 'TIP3_129168'], 'TIP3_128106': ['ASP_M_169', 'TIP3_129462', 'TIP3_129702'], 'SER_A_268': ['TIP3_128703'], 'TIP3_129207': ['TIP3_129813', 'TIP3_128586'], 'TIP3_129939': ['TIP3_128571', 'TIP3_129021', 'SER_P_51'], 'ARG_K_14': ['TIP3_127839', 'TIP3_128544', 'TIP3_129627'], 'TIP3_128520': ['TIP3_129612', 'TIP3_127194'], 'TIP3_129354': ['TIP3_127179'], 'SER_A_68': ['TIP3_47454'], 'TIP3_129285': ['TIP3_128298', 'TIP3_128334'], 'TIP3_127140': ['TIP3_128910'], 'TIP3_47853': ['TIP3_47817', 'TIP3_47838', 'TIP3_47877'], 'TIP3_128697': ['GLU_B_266', 'TIP3_127596', 'TIP3_128610', 'TIP3_130011'], 'TIP3_130065': ['TIP3_127686', 'TIP3_129876', 'TIP3_127110', 'TIP3_129954'], 'TIP3_128373': ['TIP3_127158'], 'TIP3_129228': ['TIP3_127326', 'TIP3_130137'], 'SER_D_57': ['TIP3_47862'], 'THR_C_255': ['TIP3_129237'], 'TIP3_129831': ['THR_D_248', 'TIP3_129045', 'TIP3_127308', 'TIP3_128256'], 'TIP3_128901': ['TIP3_129543', 'TIP3_129720'], 'TIP3_128469': ['HIS_P_118', 'GLU_P_122', 'TIP3_127449'], 'TIP3_47508': ['TIP3_47514'], 'THR_D_316': ['TIP3_47886'], 'SER_L_31': ['TIP3_127506', 'TIP3_128205'], 'TIP3_47706': ['TIP3_129639'], 'TIP3_127866': ['TIP3_128331'], 'TIP3_127806': ['THR_F_17', 'TIP3_130017', 'TIP3_130035'], 'TIP3_127452': ['TIP3_127665'], 'TIP3_127410': ['TIP3_127305', 'TIP3_127656', 'TIP3_129885'], 'TIP3_129510': ['TIP3_128106', 'TIP3_129702'], 'LYS_M_123': ['TIP3_129507', 'TIP3_128067'], 'TIP3_127305': ['TIP3_127410', 'TIP3_127656', 'TIP3_129885', 'TIP3_127644'], 'TIP3_128544': ['GLU_L_30', 'TIP3_127839', 'TIP3_128145', 'TIP3_129717'], 'TIP3_127248': ['ASP_J_19'], 'TIP3_129498': ['TIP3_128775', 'TIP3_129462'], 'HIS_A_337': ['TIP3_47598', 'TIP3_129516'], 'TIP3_129258': ['TIP3_128568'], 'TIP3_129012': ['ASP_B_134', 'TIP3_129687', 'TIP3_130008'], 'TIP3_127824': ['THR_E_5', 'TIP3_129387', 'THR_E_4'], 'TIP3_128913': ['TIP3_129159', 'TIP3_127128'], 'TIP3_127602': ['TIP3_128262', 'TIP3_128355', 'TIP3_127566', 'TIP3_129678'], 'TRP_B_450': ['TIP3_47613'], 'TIP3_127959': ['TIP3_127866'], 'TIP3_129786': ['SER_E_16'], 'TIP3_47523': ['SER_A_221'], 'TRP_B_5': ['TIP3_127623'], 'TIP3_128013': ['SER_B_278', 'TIP3_129483'], 'TIP3_128745': ['TIP3_129027'], 'TIP3_130146': ['TIP3_129144'], 'TIP3_127428': ['TIP3_128103', 'TIP3_129891'], 'TIP3_47604': ['THR_A_316', 'TIP3_129009'], 'ASN_A_87': ['TIP3_47514'], 'TYR_D_141': ['TIP3_128157'], 'TIP3_128862': ['TIP3_47724', 'SER_C_421'], 'TIP3_129558': ['TIP3_47676', 'HIS_B_343', 'GLU_B_428', 'TIP3_128853'], 'TYR_N_6': ['TIP3_127944'], 'TIP3_127509': ['TIP3_127980', 
'ASP_D_225'], 'TIP3_130077': ['GLU_M_74', 'TIP3_129081'], 'TIP3_47652': ['TIP3_127350'], 'TIP3_127275': ['TIP3_127479'], 'HIS_C_132': ['TIP3_47742'], 'ARG_C_370': ['TIP3_129333', 'TIP3_129351'], 'TIP3_128985': ['ASP_B_46'], 'ARG_B_220': ['TIP3_127803', 'TIP3_129438', 'TIP3_129012'], 'TIP3_47898': ['THR_D_56', 'GLU_D_69'], 'TIP3_127815': ['TIP3_127497', 'TIP3_127947', 'TIP3_128970'], 'TIP3_127200': ['TYR_P_137', 'TIP3_127344', 'TIP3_129522', 'TIP3_127518', 'TIP3_129675'], 'TIP3_128778': ['SER_M_77'], 'TIP3_130053': ['TIP3_128553', 'TIP3_129303', 'TIP3_127284', 'TIP3_129246'], 'TIP3_127674': ['TIP3_127302', 'TIP3_128313', 'TIP3_129327', 'TIP3_129474'], 'TIP3_129435': ['TIP3_129714'], 'TIP3_127356': ['ASP_B_15', 'TIP3_128247'], 'TRP_J_39': ['TIP3_128400', 'TIP3_128619'], 'ARG_C_320': ['TIP3_128571', 'TIP3_129939'], 'TIP3_47733': ['TIP3_47754'], 'ARG_B_347': ['TIP3_127830'], 'LYS_D_317': ['TIP3_47559', 'TIP3_47562', 'TIP3_129006'], 'TIP3_47883': ['THR_F_17', 'GLU_E_7'], 'TIP3_128175': ['TIP3_129453', 'TIP3_128523', 'TIP3_129585'], 'TIP3_127803': ['TIP3_47667', 'TIP3_129438'], 'TIP3_127488': ['TIP3_127971'], 'TIP3_127212': ['TIP3_127218', 'TIP3_127791', 'TIP3_128238', 'TIP3_129393'], 'TIP3_130101': ['GLU_D_131'], 'TIP3_130140': ['TIP3_127911', 'TIP3_129024', 'TIP3_127710'], 'TIP3_129813': ['TYR_E_56'], 'TIP3_128367': ['TIP3_128733', 'TIP3_128370'], 'TIP3_127506': ['TIP3_127536', 'TIP3_129630', 'SER_L_31', 'TIP3_128205'], 'TIP3_127386': ['TIP3_127959', 'TIP3_128331', 'TIP3_128148'], 'TIP3_129918': ['SER_A_268', 'TIP3_129201'], 'TIP3_128970': ['TIP3_127497', 'TIP3_128358'], 'TIP3_128529': ['TIP3_127362', 'TIP3_127656', 'TIP3_128676'], 'TIP3_129024': ['TIP3_130140', 'ASN_A_247'], 'TIP3_128352': ['TIP3_127704'], 'ARG_A_27': ['TIP3_127254', 'TIP3_127353', 'TIP3_128304'], 'ASN_O_11': ['TIP3_47928', 'TIP3_129411'], 'TIP3_127152': ['ASP_A_103', 'TIP3_129951', 'TIP3_129972'], 'TIP3_128058': ['TIP3_127566', 'TIP3_127602', 'TIP3_128979', 'TIP3_129678'], 'TIP3_47556': ['GLU_A_189', 'TIP3_47460', 'TIP3_47541', 'ASP_A_170', 'TIP3_47535', 'TIP3_47571'], 'LYS_T_29': ['TIP3_129066'], 'TIP3_47553': ['GLU_A_189', 'TIP3_47541', 'TYR_A_161', 'HIS_A_190', 'TIP3_127197'], 'SER_E_39': ['TIP3_127668'], 'ASN_D_250': ['TIP3_127608'], 'TIP3_127479': ['TIP3_128253'], 'TIP3_128904': ['TIP3_127335', 'TIP3_127608', 'THR_K_15', 'TIP3_129453'], 'TIP3_127113': ['GLU_C_83', 'TIP3_128964', 'TIP3_129168', 'TIP3_129843'], 'TIP3_47679': ['TIP3_127899', 'TIP3_129489', 'GLU_D_323', 'TYR_M_168'], 'TIP3_130167': ['TIP3_128907'], 'TRP_A_32': ['TIP3_127119', 'TIP3_128661'], 'TIP3_127812': ['GLU_D_302', 'TIP3_128007', 'TIP3_129501', 'ASP_D_297', 'TIP3_128079'], 'THR_M_75': ['TIP3_127194'], 'TIP3_127539': ['ASP_C_460'], 'ARG_O_39': ['TIP3_129342', 'TIP3_129798'], 'TIP3_128229': ['GLU_C_138'], 'TIP3_47745': ['SER_C_424'], 'TIP3_127947': ['ASN_D_250', 'TIP3_127254', 'TIP3_128565'], 'TIP3_47922': ['TIP3_127446'], 'TIP3_127347': ['TIP3_127812', 'TIP3_128397', 'ASP_D_297', 'TIP3_128007', 'TIP3_128079', 'TIP3_128451'], 'TIP3_127689': ['TIP3_128256', 'TIP3_129450'], 'TIP3_128880': ['GLU_A_231', 'TIP3_128484'], 'TIP3_129402': ['TIP3_129063'], 'TIP3_47808': ['GLU_D_96'], 'TYR_A_235': ['TIP3_129573'], 'TIP3_127557': ['TIP3_127458', 'TIP3_129654'], 'THR_D_231': ['TIP3_128274'], 'TIP3_127917': ['ASP_B_276', 'TIP3_127143', 'TIP3_129000'], 'TIP3_127599': ['TIP3_129471'], 'TIP3_128148': ['TIP3_127386', 'TIP3_128331', 'TIP3_128796', 'TIP3_129768'], 'TIP3_128172': ['TIP3_129960'], 'ASN_D_318': ['TIP3_47601', 'TIP3_47835'], 'TIP3_127944': ['TIP3_129324', 
'TIP3_127914', 'TIP3_128409'], 'TIP3_129522': ['TIP3_129297', 'TIP3_127338', 'TIP3_127677'], 'TIP3_128262': ['TIP3_128355', 'TIP3_129870', 'TIP3_127566', 'TIP3_127602', 'TIP3_129678'], 'TIP3_47943': ['ASP_M_224', 'TIP3_128208', 'TIP3_47919'], 'TIP3_128184': ['TIP3_128307', 'TIP3_127794', 'TIP3_129219'], 'TIP3_127965': ['TIP3_129600'], 'TIP3_128844': ['TIP3_129972'], 'TIP3_47946': ['TIP3_47949'], 'TIP3_128133': ['TIP3_128499', 'TIP3_129177'], 'TIP3_128649': ['SER_H_25'], 'TIP3_129102': ['ASP_D_25', 'TIP3_128673'], 'TIP3_47445': ['TIP3_47475'], 'THR_C_305': ['TIP3_128046'], 'TIP3_129441': ['TIP3_129198'], 'THR_B_262': ['TIP3_127974'], 'TYR_A_73': ['TIP3_129303'], 'TIP3_129525': ['TIP3_128337'], 'THR_C_295': ['TIP3_128709'], 'TIP3_128691': ['TIP3_127743', 'TIP3_127353', 'TIP3_128421'], 'TIP3_127692': ['TYR_D_59', 'TIP3_129180'], 'TIP3_127170': ['GLU_A_189', 'ASP_A_342', 'TIP3_127311'], 'HIS_B_26': ['TIP3_47661'], 'TIP3_130074': ['GLU_D_323', 'TIP3_47646', 'TIP3_130104'], 'ASN_O_31': ['TIP3_128082', 'TIP3_128580', 'TIP3_129234', 'TIP3_127230', 'TIP3_129855'], 'TIP3_127929': ['TIP3_127470', 'TIP3_128457', 'TIP3_127518', 'TIP3_129675'], 'TIP3_47583': ['TIP3_127638', 'ASP_A_319'], 'LYS_P_30': ['TIP3_128931', 'TIP3_127449', 'TIP3_129561'], 'TIP3_127974': ['TIP3_47652', 'TIP3_47703'], 'TIP3_128355': ['ASN_C_418', 'TIP3_128262', 'TIP3_129870'], 'ASN_D_72': ['TIP3_127176', 'TIP3_130122'], 'TIP3_127671': ['TIP3_128706', 'TIP3_128961'], 'CYS_A_144': ['TIP3_47451'], 'TIP3_127680': ['GLU_B_235', 'HIS_B_469', 'TIP3_129378'], 'TIP3_47568': ['ASN_A_303', 'TIP3_47544'], 'TIP3_47691': ['TIP3_128100', 'TIP3_128994', 'TIP3_128124'], 'TIP3_127227': ['TIP3_128004', 'TIP3_127128', 'TIP3_127701', 'TIP3_128913'], 'TIP3_47658': ['TIP3_47670', 'TIP3_47682'], 'TIP3_129846': ['ASP_A_103', 'TIP3_129951', 'TIP3_128553', 'TIP3_130053'], 'TIP3_127464': ['TIP3_127161', 'TIP3_128259', 'TIP3_128637', 'TIP3_129327', 'GLU_P_122'], 'TIP3_47862': ['GLU_D_69', 'TIP3_47844'], 'TIP3_129816': ['TIP3_128109', 'TIP3_128631', 'TIP3_129078', 'TIP3_129549', 'TIP3_127164']})
###Markdown
Find all the paths in the graph
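The `find_all_path` helper used below was defined earlier in the notebook and is not reproduced here. As a rough, hedged illustration of what such a path enumeration over the adjacency dictionary can look like, here is a minimal depth-first-search sketch (the actual helper may differ, for example in how it terminates paths or records intermediate hops):

```python
def find_all_path_sketch(graph, node, current_path, results):
    # Depth-first search over the residue/water adjacency dictionary:
    # extend current_path along every neighbour not yet visited and
    # store each path once it can no longer be extended.
    unvisited = [n for n in graph.get(node, []) if n not in current_path]
    if not unvisited:
        results.append(list(current_path))
        return
    for nxt in unvisited:
        find_all_path_sketch(graph, nxt, current_path + [nxt], results)

# tiny self-contained example with made-up node names
toy_graph = {'ARG_A_27': ['TIP3_1'], 'TIP3_1': ['TIP3_2'], 'TIP3_2': ['ASP_C_473']}
out = []
find_all_path_sketch(toy_graph, 'ARG_A_27', ['ARG_A_27'], out)
print(out)  # [['ARG_A_27', 'TIP3_1', 'TIP3_2', 'ASP_C_473']]
```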
###Code
visited = []
path = []
for res in range(len(wat_con)):
results = []
if wat_con['donor_residue'][res] not in visited and wat_con['donor_residue'][res][0:3] != 'TIP':
find_all_path(graph, wat_con['donor_residue'][res], [wat_con['donor_residue'][res]], results)
path = path + results
visited.append(wat_con['donor_residue'][res])
else:
continue
print(path[0:4])
###Output
[['ASN_A_26', 'TIP3_47469'], ['ARG_A_27', 'TIP3_127254', 'TIP3_128304', 'ASP_C_473'], ['ARG_A_27', 'TIP3_127254', 'TIP3_128304', 'TIP3_129822', 'ASP_C_473'], ['ARG_A_27', 'TIP3_127254', 'TIP3_128304', 'TIP3_129822', 'TIP3_127497', 'TIP3_128358']]
###Markdown
Count the number of water molecules between residues
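As a quick standalone illustration of the bookkeeping done in the next cell, walking a single path and counting the bridging `TIP3` waters between consecutive protein residues could look like this (a sketch on a hypothetical path, not part of the analysis code):

```python
def count_waters_sketch(path):
    # Emit one (donor, acceptor, n_waters) triple per residue-to-residue hop.
    triples, donor, n_wat = [], path[0], 0
    for node in path[1:]:
        if node.startswith('TIP'):
            n_wat += 1          # a water molecule bridging the two residues
        else:
            triples.append((donor, node, n_wat))
            donor, n_wat = node, 0
    return triples

print(count_waters_sketch(['ARG_A_27', 'TIP3_127254', 'TIP3_128304', 'ASP_C_473']))
# [('ARG_A_27', 'ASP_C_473', 2)]
```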
###Code
donor = []
accept = []
wat_num = []
for item in path:
donor_column = [item[0]]
accpt_column = []
count = 0
for r in range(1, len(item)):
if item[r][0:3] != 'TIP':
donor_column.append(item[r])
accpt_column.append(item[r])
wat_num.append(count)
count = 0
else:
count += 1
if len(donor_column) > len(accpt_column):
donor_column.pop()
else:
accpt_column.pop()
donor.extend(donor_column)
accept.extend(accpt_column)
print(donor[0], accept[0], wat_num[0])
###Output
ARG_A_27 ASP_C_473 2
###Markdown
Put all data in a dataframe and count the frequency of each connection
###Code
direct_connection = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
one_water_connection = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
two_water_connection = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
three_water_connection = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
four_water_connection = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
visited_1 = []
visited_2 = []
visited_3 = []
visited_4 = []
res_wat_res = pd.DataFrame({'donor_residue': donor, 'acceptor_residue': accept, 'wat_num': wat_num})
res_wat_res = res_wat_res.drop_duplicates()
hb_network = pd.concat([dire_con, res_wat_res])
hb_network.index = range(0, len(hb_network))
for i in range(0, len(hb_network)):
if hb_network['wat_num'][i] == 0:
new_row = pd.Series({'donor_residue': hb_network['donor_residue'][i], 'acceptor_residue': hb_network['acceptor_residue'][i]})
direct_connection = direct_connection.append(new_row, ignore_index=True)
if hb_network['wat_num'][i] <= 1 and [hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]] not in visited_1:
visited_1.append([hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]])
new_row = pd.Series({'donor_residue': hb_network['donor_residue'][i], 'acceptor_residue': hb_network['acceptor_residue'][i]})
one_water_connection = one_water_connection.append(new_row, ignore_index=True)
if hb_network['wat_num'][i] <= 2 and [hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]] not in visited_2:
visited_2.append([hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]])
new_row = pd.Series({'donor_residue': hb_network['donor_residue'][i], 'acceptor_residue': hb_network['acceptor_residue'][i]})
two_water_connection = two_water_connection.append(new_row, ignore_index=True)
if hb_network['wat_num'][i] <= 3 and [hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]] not in visited_3:
visited_3.append([hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]])
new_row = pd.Series({'donor_residue': hb_network['donor_residue'][i], 'acceptor_residue': hb_network['acceptor_residue'][i]})
three_water_connection = three_water_connection.append(new_row, ignore_index=True)
if hb_network['wat_num'][i] <= 4 and [hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]] not in visited_4:
visited_4.append([hb_network['donor_residue'][i], hb_network['acceptor_residue'][i]])
new_row = pd.Series({'donor_residue': hb_network['donor_residue'][i], 'acceptor_residue': hb_network['acceptor_residue'][i]})
four_water_connection = four_water_connection.append(new_row, ignore_index=True)
print(direct_connection.head(5))
###Output
donor_residue acceptor_residue
0 TRP_A_14 SER_H_25
1 TRP_A_20 ASN_A_26
2 TYR_A_29 GLU_A_132
3 ARG_A_64 THR_M_75
4 ASN_A_75 SER_A_68
###Markdown
If we have more than one frame, we need to append all connections into one DataFrame
###Code
Direct = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
One_water = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
Two_water = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
Three_water = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
Four_water = pd.DataFrame(columns = ['donor_residue', 'acceptor_residue'])
# if we need to calculate more than one frame
Direct = pd.concat([Direct, direct_connection])
One_water = pd.concat([One_water, one_water_connection])
Two_water = pd.concat([Two_water, two_water_connection])
Three_water = pd.concat([Three_water, three_water_connection])
Four_water = pd.concat([Four_water, four_water_connection])
# calculate the frequency for all the connections
Direct = Direct.groupby(['donor_residue', 'acceptor_residue']).size().reset_index(name='Frequency')
One_water = One_water.groupby(['donor_residue', 'acceptor_residue']).size().reset_index(name='Frequency')
Two_water = Two_water.groupby(['donor_residue', 'acceptor_residue']).size().reset_index(name='Frequency')
Three_water = Three_water.groupby(['donor_residue', 'acceptor_residue']).size().reset_index(name='Frequency')
Four_water = Four_water.groupby(['donor_residue', 'acceptor_residue']).size().reset_index(name='Frequency')
print(Direct.head(5))
###Output
donor_residue acceptor_residue Frequency
0 ARG_A_136 ASP_H_27 1
1 ARG_A_136 GLU_A_132 1
2 ARG_A_136 TYR_A_29 1
3 ARG_A_140 GLU_D_219 1
4 ARG_A_257 ASP_D_25 1
|
DATA515hw3.ipynb | ###Markdown
1. Function code (5 points). Last week you wrote Python code that read an online file and created a data frame that has at least 3 columns. Now: (a) create a python module ``dataframe.py`` that reads the data in homework 2; and (b) ``dataframe.py`` should raise a ValueError exception if the dataframe doesn't have the expected column names. 1. Test code (5 points). Create a python file ``test_dataframe.py`` that has unit tests for dataframe.py. Include at least 2 of the following tests: - You have the expected columns. - Values in the column are all of the expected type. - There are no nan values. - The dataframe has at least one row. 1. Coding style (4 points). Make all code PEP8 compliant and provide the output from pylint to demonstrate that you have accomplished this.
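The modules themselves are not shown in this notebook (they are only imported below). As a rough sketch of what `dataframe.py` and `test_dataframe.py` could contain — the column names and URL here are hypothetical placeholders, not the graded solution — something along these lines would satisfy the requirements:

```python
import unittest
import pandas as pd

EXPECTED_COLUMNS = ['PermitNum', 'PermitClass', 'IssuedDate']  # hypothetical names

def validate_columns(df):
    """Raise ValueError if the dataframe lacks any expected column (dataframe.py)."""
    missing = set(EXPECTED_COLUMNS) - set(df.columns)
    if missing:
        raise ValueError('Missing expected columns: {}'.format(sorted(missing)))
    return df

def get_permits(url='https://example.com/permits.csv'):  # hypothetical URL
    """Read the online CSV and validate it before returning the dataframe."""
    return validate_columns(pd.read_csv(url))

class TestDataframe(unittest.TestCase):  # test_dataframe.py
    def test_missing_columns_raise(self):
        with self.assertRaises(ValueError):
            validate_columns(pd.DataFrame({'foo': [1, 2]}))

    def test_expected_columns_pass(self):
        good = pd.DataFrame({c: [1] for c in EXPECTED_COLUMNS})
        self.assertGreaterEqual(len(validate_columns(good)), 1)

if __name__ == '__main__':
    unittest.main(argv=['ignored'], exit=False)
```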
###Code
import dataframe
import test_dataframe
import unittest
import numpy as np
import pandas as pd
permits = dataframe.get_permits()
permits.head()
###Output
_____no_output_____ |
Chapter08/Recipe4-Maximum-Absolute-Scaling.ipynb | ###Markdown
Scaling to maximum value - MaxAbsScalingMaximum absolute scaling scales the data to its maximum value:X_scaled = X / X.max
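Before using scikit-learn's transformer, the rule itself is a one-liner; a quick sketch on a toy array (note that `MaxAbsScaler` divides by the maximum absolute value learned from the training data, so negative values are handled as well):

```python
import numpy as np

x = np.array([-2.0, 1.0, 4.0, 8.0])
x_scaled = x / np.abs(x).max()      # divide by the largest absolute value
print(x_scaled)                     # [-0.25   0.125  0.5    1.   ]
```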
###Code
import pandas as pd
# dataset for the demo
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
# the scaler - for MaxAbsScaling, with centering
from sklearn.preprocessing import MaxAbsScaler, StandardScaler
# load the Boston House price data
# this is how we load the boston dataset from sklearn
boston_dataset = load_boston()
# create a dataframe with the independent variables
data = pd.DataFrame(boston_dataset.data,
columns=boston_dataset.feature_names)
# add target
data['MEDV'] = boston_dataset.target
data.head()
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(data.drop('MEDV', axis=1),
data['MEDV'],
test_size=0.3,
random_state=0)
X_train.shape, X_test.shape
# set up the scaler
scaler = MaxAbsScaler()
# fit the scaler to the train set, it will learn the parameters
scaler.fit(X_train)
# transform train and test sets
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# the scaler stores the maximum values of the features as learned from train set
scaler.max_abs_
# let's transform the returned NumPy arrays to dataframes
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
import matplotlib.pyplot as plt
import seaborn as sns
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['RM'], ax=ax1)
sns.kdeplot(X_train['LSTAT'], ax=ax1)
sns.kdeplot(X_train['CRIM'], ax=ax1)
# after scaling
ax2.set_title('After Max Abs Scaling')
sns.kdeplot(X_train_scaled['RM'], ax=ax2)
sns.kdeplot(X_train_scaled['LSTAT'], ax=ax2)
sns.kdeplot(X_train_scaled['CRIM'], ax=ax2)
plt.show()
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['AGE'], ax=ax1)
sns.kdeplot(X_train['DIS'], ax=ax1)
sns.kdeplot(X_train['NOX'], ax=ax1)
# after scaling
ax2.set_title('After Max Abs Scaling')
sns.kdeplot(X_train_scaled['AGE'], ax=ax2)
sns.kdeplot(X_train_scaled['DIS'], ax=ax2)
sns.kdeplot(X_train_scaled['NOX'], ax=ax2)
plt.show()
###Output
_____no_output_____
###Markdown
Centering + MaxAbsScalingWe can center the distributions at zero and then scale them to their absolute maximum, as recommended by Scikit-learn, by combining two transformers.
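The notebook below fits the two scalers separately; an alternative (shown here only as a sketch on random data) is to chain them in a scikit-learn `Pipeline` so that centering and max-abs scaling are fit and applied as one object:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MaxAbsScaler, StandardScaler

centered_maxabs = Pipeline([
    ('center', StandardScaler(with_mean=True, with_std=False)),  # subtract the mean only
    ('maxabs', MaxAbsScaler()),                                  # then divide by max |value|
])

X = np.random.RandomState(0).normal(loc=10, scale=3, size=(100, 2))
X_t = centered_maxabs.fit_transform(X)
print(X_t.mean(axis=0).round(6))    # approximately zero means
print(np.abs(X_t).max(axis=0))      # maximum absolute value of 1 per column
```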
###Code
# set up the StandardScaler so that it removes the mean
# but does not divide by the standard deviation
scaler_mean = StandardScaler(with_mean=True, with_std=False)
# set up the MaxAbsScaler normally
scaler_maxabs = MaxAbsScaler()
# fit the scalers to the train set, it will learn the parameters
scaler_mean.fit(X_train)
scaler_maxabs.fit(X_train)
# transform train and test sets
X_train_scaled = scaler_maxabs.transform(scaler_mean.transform(X_train))
X_test_scaled = scaler_maxabs.transform(scaler_mean.transform(X_test))
# let's transform the returned NumPy arrays to dataframes for the rest of
# the demo
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
# let's compare the variable distributions before and after scaling
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))
# before scaling
ax1.set_title('Before Scaling')
sns.kdeplot(X_train['AGE'], ax=ax1)
sns.kdeplot(X_train['DIS'], ax=ax1)
sns.kdeplot(X_train['NOX'], ax=ax1)
# after scaling
ax2.set_title('After Max Abs Scaling')
sns.kdeplot(X_train_scaled['AGE'], ax=ax2)
sns.kdeplot(X_train_scaled['DIS'], ax=ax2)
sns.kdeplot(X_train_scaled['NOX'], ax=ax2)
plt.show()
###Output
_____no_output_____ |
Projects/8-Backtesting/project_8_starter.ipynb | ###Markdown
Project 8: BacktestingIn this project, you will build a fairly realistic backtester that uses the Barra data. The backtester will perform portfolio optimization that includes transaction costs, and you'll implement it with computational efficiency in mind, to allow for a reasonably fast backtest. You'll also use performance attribution to identify the major drivers of your portfolio's profit-and-loss (PnL). You will have the option to modify and customize the backtest as well. InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a ` TODO` comment. Your code will be checked for the correct solution when you submit it to Udacity. PackagesWhen you implement the functions, you'll only need to you use the packages you've used in the classroom, like [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/). These packages will be imported for you. We recommend you don't add any import statements, otherwise the grader might not be able to run your code. Install Packages
###Code
import sys
!{sys.executable} -m pip install -r requirements.txt
###Output
Requirement already satisfied: matplotlib==2.1.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 1)) (2.1.0)
Requirement already satisfied: numpy==1.16.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 2)) (1.16.1)
Requirement already satisfied: pandas==0.24.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 3)) (0.24.1)
Requirement already satisfied: patsy==0.5.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 4)) (0.5.1)
Requirement already satisfied: scipy==0.19.1 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 5)) (0.19.1)
Requirement already satisfied: statsmodels==0.9.0 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 6)) (0.9.0)
Requirement already satisfied: tqdm==4.19.5 in /opt/conda/lib/python3.6/site-packages (from -r requirements.txt (line 7)) (4.19.5)
Requirement already satisfied: six>=1.10 in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.1.0->-r requirements.txt (line 1)) (1.11.0)
Requirement already satisfied: python-dateutil>=2.0 in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.1.0->-r requirements.txt (line 1)) (2.6.1)
Requirement already satisfied: pytz in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.1.0->-r requirements.txt (line 1)) (2017.3)
Requirement already satisfied: cycler>=0.10 in /opt/conda/lib/python3.6/site-packages/cycler-0.10.0-py3.6.egg (from matplotlib==2.1.0->-r requirements.txt (line 1)) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /opt/conda/lib/python3.6/site-packages (from matplotlib==2.1.0->-r requirements.txt (line 1)) (2.2.0)
###Markdown
Load Packages
###Code
import scipy
import patsy
import pickle
import numpy as np
import pandas as pd
import scipy.sparse
import matplotlib.pyplot as plt
from statistics import median
from scipy.stats import gaussian_kde
from statsmodels.formula.api import ols
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Load DataWe'll be using the Barra dataset to get factors that can be used to predict risk. Loading and parsing the raw Barra data can be a very slow process that can significantly slow down your backtesting. For this reason, it's important to pre-process the data beforehand. For your convenience, the Barra data has already been pre-processed for you and saved into pickle files. You will load the Barra data from these pickle files.In the code below, we start by loading `2004` factor data from the `pandas-frames.2004.pickle` file. We also load the `2003` and `2004` covariance data from the `covariance.2003.pickle` and `covariance.2004.pickle` files. You are encouraged to customize the data range for your backtest. For example, we recommend starting with two or three years of factor data. Remember that the covariance data should include all the years that you choose for the factor data, and also one year earlier. For example, in the code below we are using `2004` factor data, therefore, we must include `2004` in our covariance data, but also the previous year, `2003`. If you don't remember why we must include this previous year, feel free to review the lessons.
###Code
barra_dir = '../../data/project_8_barra/'
data = {}
for year in [2004]:
fil = barra_dir + "pandas-frames." + str(year) + ".pickle"
data.update(pickle.load( open( fil, "rb" ) ))
covariance = {}
for year in [2004]:
fil = barra_dir + "covariance." + str(year) + ".pickle"
covariance.update(pickle.load( open(fil, "rb" ) ))
daily_return = {}
for year in [2004, 2005]:
fil = barra_dir + "price." + str(year) + ".pickle"
daily_return.update(pickle.load( open(fil, "rb" ) ))
###Output
_____no_output_____
###Markdown
Shift Daily Returns Data (TODO)In the cell below, we want to incorporate a realistic time delay that exists in live trading, so we'll use a two-day delay for the `daily_return` data. That means the `daily_return` should be two days after the data in `data` and `covariance`. Combine `daily_return` and `data` together in a dict called `frames`.Since reporting of PnL is usually for the date of the returns, make sure to use the two-day-delayed dates (dates that match the `daily_return`) when building `frames`. This means calling `frames['20040108']` will get you the prices from "20040108" and the data from `data` at "20040106".Note: We're not shifting `covariance`, since we'll use the "DataDate" field in `frames` to look up the covariance data. The "DataDate" field contains the date when the `data` in `frames` was recorded. For example, `frames['20040108']` will give you a value of "20040106" for the field "DataDate".
###Code
frames ={}
dlyreturn_n_days_delay = 2
# TODO: Implement
date_shifts = zip(
sorted(data.keys()),
sorted(daily_return.keys())[dlyreturn_n_days_delay : len(data) + dlyreturn_n_days_delay])
# TODO
for data_date, price_date in date_shifts:
frames[price_date] = data[data_date].merge(daily_return[price_date], on='Barrid')
df = frames['20040108']
df.head()
###Output
_____no_output_____
###Markdown
Add Daily Returns date column (Optional)Name the column `DlyReturnDate`.**Hint**: create a list containing copies of the date, then create a pandas series.
###Code
for DlyReturnDate, df in frames.items():
n_rows = df.shape[0]
df['DlyReturnDate'] = pd.Series([DlyReturnDate] * n_rows)
df = frames['20040108']
df.head()
###Output
_____no_output_____
###Markdown
WinsorizeAs we have done in other projects, we'll want to avoid extremely positive or negative values in our data. We will therefore create a function, `wins`, that will clip our values to a minimum and maximum range. This process is called **Winsorizing**. Remember that this helps us handle noise, which may otherwise cause unusually large positions.
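For reference, the `wins` function defined in the next cell is equivalent to `numpy.clip`; a quick check of that equivalence (illustrative only):

```python
import numpy as np

x = np.array([-0.5, -0.05, 0.02, 0.3])
print(np.where(x <= -0.1, -0.1, np.where(x >= 0.1, 0.1, x)))  # wins(x, -0.1, 0.1)
print(np.clip(x, -0.1, 0.1))                                  # identical result
```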
###Code
def wins(x,a,b):
return np.where(x <= a,a, np.where(x >= b, b, x))
###Output
_____no_output_____
###Markdown
Density PlotLet's check our `wins` function by taking a look at the distribution of returns for a single day `20040102`. We will clip our data from `-0.1` to `0.1` and plot it using our `density_plot` function.
###Code
def density_plot(data):
density = gaussian_kde(data)
xs = np.linspace(np.min(data),np.max(data),200)
density.covariance_factor = lambda : .25
density._compute_covariance()
plt.plot(xs,density(xs))
plt.xlabel('Daily Returns')
plt.ylabel('Density')
plt.show()
test = frames['20040108']
test['DlyReturn'] = wins(test['DlyReturn'],-0.1,0.1)
density_plot(test['DlyReturn'])
###Output
_____no_output_____
###Markdown
Factor Exposures and Factor ReturnsRecall that:$r_{i,t} = \sum_{j=1}^{k} (\beta_{i,j,t-2} \times f_{j,t})$ where $i=1...N$ (N assets), and $j=1...k$ (k factors), where $r_{i,t}$ is the return, $\beta_{i,j,t-2}$ is the factor exposure, and $f_{j,t}$ is the factor return. Since we get the factor exposures from the Barra data, and we know the returns, it is possible to estimate the factor returns. In this notebook, we will use the Ordinary Least Squares (OLS) method to estimate the factor returns, $f_{j,t}$, by using $\beta_{i,j,t-2}$ as the independent variable, and $r_{i,t}$ as the dependent variable.
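To make the estimation concrete: for a single day this is just a cross-sectional least-squares regression of asset returns on the lagged exposures. A small synthetic sketch (made-up dimensions and factor returns, independent of the Barra data) recovers the factor returns with plain `numpy`:

```python
import numpy as np

rng = np.random.RandomState(0)
n_assets, k_factors = 500, 5
B = rng.normal(size=(n_assets, k_factors))             # exposures beta_{i,j,t-2}
f_true = np.array([0.01, -0.02, 0.005, 0.0, 0.03])     # "true" factor returns f_{j,t}
r = B.dot(f_true) + 0.001 * rng.normal(size=n_assets)  # returns plus specific noise

f_hat, _, _, _ = np.linalg.lstsq(B, r, rcond=None)     # OLS estimate of factor returns
print(np.round(f_hat, 3))                              # close to f_true
```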
###Code
def get_formula(factors, Y):
L = ["0"]
L.extend(factors)
return Y + " ~ " + " + ".join(L)
def factors_from_names(n):
return list(filter(lambda x: "USFASTD_" in x, n))
def estimate_factor_returns(df):
## build universe based on filters
estu = df.loc[df.IssuerMarketCap > 1e9].copy(deep=True)
## winsorize returns for fitting
estu['DlyReturn'] = wins(estu['DlyReturn'], -0.25, 0.25)
all_factors = factors_from_names(list(df))
form = get_formula(all_factors, "DlyReturn")
model = ols(form, data=estu)
results = model.fit()
return results
facret = {}
for date in frames:
facret[date] = estimate_factor_returns(frames[date]).params
my_dates = sorted(list(map(lambda date: pd.to_datetime(date, format='%Y%m%d'), frames.keys())))
###Output
_____no_output_____
###Markdown
Choose Alpha FactorsWe will now choose our alpha factors. Barra's factors include some alpha factors that we have seen before, such as:* **USFASTD_1DREVRSL** : Reversal* **USFASTD_EARNYILD** : Earnings Yield* **USFASTD_VALUE** : Value* **USFASTD_SENTMT** : SentimentWe will choose these alpha factors for now, but you are encouraged to come back to this later and try other factors as well.
###Code
alpha_factors = ["USFASTD_1DREVRSL", "USFASTD_EARNYILD", "USFASTD_VALUE", "USFASTD_SENTMT"]
facret_df = pd.DataFrame(index = my_dates)
for dt in my_dates:
for alp in alpha_factors:
facret_df.at[dt, alp] = facret[dt.strftime('%Y%m%d')][alp]
for column in facret_df.columns:
plt.plot(facret_df[column].cumsum(), label=column)
plt.legend(loc='upper left')
plt.xlabel('Date')
plt.ylabel('Cumulative Factor Returns')
plt.show()
###Output
/opt/conda/lib/python3.6/site-packages/pandas/plotting/_converter.py:129: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
Merge Previous Portfolio Holdings In order to optimize our portfolio we will use the previous day's holdings to estimate the trade size and transaction costs. In order to keep track of the holdings from the previous day we will include a column to hold the portfolio holdings of the previous day. These holdings of all our assets will be initialized to zero when the backtest first starts.
###Code
def clean_nas(df):
numeric_columns = df.select_dtypes(include=[np.number]).columns.tolist()
for numeric_column in numeric_columns:
df[numeric_column] = np.nan_to_num(df[numeric_column])
return df
previous_holdings = pd.DataFrame(data = {"Barrid" : ["USA02P1"], "h.opt.previous" : np.array(0)})
df = frames[my_dates[0].strftime('%Y%m%d')]
df = df.merge(previous_holdings, how = 'left', on = 'Barrid')
df = clean_nas(df)
df.loc[df['SpecRisk'] == 0, 'SpecRisk'] = median(df['SpecRisk'])
###Output
_____no_output_____
###Markdown
Build Universe Based on Filters (TODO)In the cell below, implement the function `get_universe` that creates a stock universe by selecting only those companies that have a market capitalization of at least 1 billion dollars **OR** that are in the previous day's holdings, even if, on the current day, the company no longer meets the 1 billion dollar criterion.When creating the universe, make sure you use the `.copy()` attribute to create a copy of the data. Also, it is very important to make sure that we are not looking at returns when forming the portfolio! To make this impossible, make sure to drop the column containing the daily return.
###Code
def get_universe(df):
"""
Create a stock universe based on filters
Parameters
----------
df : DataFrame
All stocks
Returns
-------
universe : DataFrame
Selected stocks based on filters
"""
# TODO: Implement
universe = df.loc[(df['IssuerMarketCap'] >= 1e9) | (abs(df['h.opt.previous']) > 0)].copy()
universe = universe.drop(columns = 'DlyReturn')
return universe
universe = get_universe(df)
date = str(int(universe['DataDate'][1]))
###Output
_____no_output_____
###Markdown
FactorsWe will now extract both the risk factors and alpha factors. We begin by first getting all the factors using the `factors_from_names` function defined previously.
###Code
all_factors = factors_from_names(list(universe))
print('Number of factors:', len(all_factors))
###Output
Number of factors: 81
###Markdown
We will now create the function `setdiff` to just select the factors that we have not defined as alpha factors
###Code
def setdiff(temp1, temp2):
s = set(temp2)
temp3 = [x for x in temp1 if x not in s]
return temp3
risk_factors = setdiff(all_factors, alpha_factors)
print('Number of risk factors: ', len(risk_factors))
###Output
Number of risk factors: 77
###Markdown
We will also save the column that contains the previous holdings in a separate variable because we are going to use it later when we perform our portfolio optimization.
###Code
h0 = universe['h.opt.previous']
print('Number of stocks in the portfolio: ', h0.shape[0])
###Output
Number of stocks in the portfolio: 2265
###Markdown
Matrix of Risk Factor ExposuresOur dataframe contains several columns that we'll use as risk factor exposures. Extract these and put them into a matrix.The data, such as industry category, are already one-hot encoded, but if this were not the case, then using `patsy.dmatrices` would help, as this function extracts categories and performs the one-hot encoding. We'll practice using this package, as you may find it useful with future data sets. You could also store the factors in a dataframe if you prefer. How to use patsy.dmatrices`patsy.dmatrices` takes in a formula and the dataframe. The formula tells the function which columns to take. The formula will look something like this: `SpecRisk ~ 0 + USFASTD_AERODEF + USFASTD_AIRLINES + ...` where the variable to the left of the ~ is the "dependent variable" and the others to the right are the independent variables (as if we were preparing data to be fit to a model).This just means that the `patsy.dmatrices` function will return two matrix variables, one that contains the single column for the dependent variable `outcome`, and the independent variable columns are stored in a matrix `predictors`.The `predictors` matrix will contain the matrix of risk factors, which is what we want. We don't actually need the `outcome` matrix; it's just created because that's the way patsy.dmatrices works.
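A tiny standalone example of `patsy.dmatrices` on made-up data (the column names here are hypothetical), showing that the left-hand-side variable comes back as `outcome` and the right-hand-side columns come back as the `predictors` design matrix we keep:

```python
import pandas as pd
import patsy

toy = pd.DataFrame({'SpecRisk': [10.0, 20.0, 30.0],
                    'FACTOR_A': [1.0, 0.0, 1.0],
                    'FACTOR_B': [0.2, 0.5, 0.1]})
outcome, predictors = patsy.dmatrices('SpecRisk ~ 0 + FACTOR_A + FACTOR_B', toy)
print(predictors.design_info.column_names)  # ['FACTOR_A', 'FACTOR_B']
print(predictors.shape)                     # (3, 2) -- this is the matrix we keep
```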
###Code
formula = get_formula(risk_factors, "SpecRisk")
def model_matrix(formula, data):
outcome, predictors = patsy.dmatrices(formula, data)
return predictors
B = model_matrix(formula, universe)
BT = B.transpose()
print(B.shape)
###Output
(2265, 77)
###Markdown
Calculate Specific VarianceNotice that the specific risk data is in percent:
###Code
universe['SpecRisk'][0:2]
###Output
_____no_output_____
###Markdown
Therefore, in order to get the specific variance for each stock in the universe we first need to multiply these values by `0.01` and then square them:
###Code
specVar = (0.01 * universe['SpecRisk']) ** 2
###Output
_____no_output_____
###Markdown
Factor covariance matrix (TODO)Note that we already have factor covariances from Barra data, which is stored in the variable `covariance`. `covariance` is a dictionary, where the key is each day's date, and the value is a dataframe containing the factor covariances.
###Code
covariance['20040102'].head()
###Output
_____no_output_____
###Markdown
In the code below, implement the function `diagonal_factor_cov` to create the factor covariance matrix. Note that the covariances are given in percentage units squared. Therefore you must re-scale them appropriately so that they're in decimals squared. Use the given `colnames` function to get the column names from `B`. When creating factor covariance matrix, you can store the factor variances and covariances, or just store the factor variances. Try both, and see if you notice any differences.
###Code
def colnames(B):
if type(B) == patsy.design_info.DesignMatrix:
return B.design_info.column_names
if type(B) == pd.core.frame.DataFrame:
return B.columns.tolist()
return None
## extract a diagonal element from the factor covariance matrix
def get_cov(cv, factor1, factor2):
try:
return(cv.loc[(cv.Factor1==factor1) & (cv.Factor2==factor2),"VarCovar"].iloc[0])
except:
print(f"didn't find covariance for: factor 1: {factor1} factor2: {factor2}")
return 0
def diagonal_factor_cov(date, B):
"""
Create the factor covariance matrix
Parameters
----------
date : string
date. For example 20040102
B : patsy.design_info.DesignMatrix OR pandas.core.frame.DataFrame
Matrix of Risk Factors
Returns
-------
Fm : Numpy ndarray
factor covariance matrix
"""
cv = covariance[date]
k = np.shape(B)[1]
Fm = np.zeros([k,k])
# Zero out covariance
for i in range(0, k):
fac = colnames(B)[i]
# Convert from percentage units squared to decimal
Fm[i,i] = (0.01 ** 2) * get_cov(cv, fac, fac)
return Fm
Fvar = diagonal_factor_cov(date, B)
###Output
_____no_output_____
###Markdown
Transaction CostsTo get the transaction cost, or slippage, we have to multiply the price change due to market impact by the amount of dollars traded:$$\mbox{tcost}_{i,t} = \% \Delta \mbox{price}_{i,t} \times \mbox{trade}_{i,t}$$In summation notation, the total cost for the day looks like this: $$\mbox{tcost}_{t} = \sum_i^{N} \lambda_{i,t} (h_{i,t} - h_{i,t-1})^2$$ where$$\lambda_{i,t} = \frac{1}{10\times \mbox{ADV}_{i,t}}$$Note that since we're dividing by ADV, we'll want to handle cases when ADV is missing or zero. In those instances, we can set ADV to a small positive number, such as 10,000, which, in practice, assumes that the stock is illiquid. In the code below, if there is no volume information we assume the asset is illiquid.
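A quick numeric illustration of the cost model for a single asset, using made-up numbers:

```python
adv = 1.0e6                      # composite daily volume in dollars (made-up)
lam = 0.1 / adv                  # lambda = 1 / (10 * ADV) = 1e-7
h_prev, h_now = 0.0, 10000.0     # previous and new holdings in dollars (made-up)
tcost = lam * (h_now - h_prev) ** 2
print(tcost)                     # 10.0 dollars of estimated slippage for this trade
```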
###Code
def get_lambda(universe, composite_volume_column = 'ADTCA_30'):
universe.loc[np.isnan(universe[composite_volume_column]), composite_volume_column] = 1.0e4
universe.loc[universe[composite_volume_column] == 0, composite_volume_column] = 1.0e4
adv = universe[composite_volume_column]
return 0.1 / adv
Lambda = get_lambda(universe)
###Output
_____no_output_____
###Markdown
Alpha Combination (TODO)In the code below create a matrix of alpha factors and return it from the function `get_B_alpha`. Create this matrix in the same way you created the matrix of risk factors, i.e. using the `get_formula` and `model_matrix` functions we have defined above. Feel free to go back and look at the previous code.
###Code
def get_B_alpha(alpha_factors, universe):
formula = get_formula(alpha_factors, "SpecRisk")
B_alpha = model_matrix(formula, universe)
return B_alpha
B_alpha = get_B_alpha(alpha_factors, universe)
###Output
_____no_output_____
###Markdown
Now that you have the matrix containing the alpha factors, we will combine them by summing along each row (adding together the alpha factor exposures of each asset). By doing this we will collapse the `B_alpha` matrix into a single alpha vector. We'll multiply by `1e-4` so that the expression of expected portfolio return, $\alpha^T \mathbf{h}$, is in dollar units.
###Code
def get_alpha_vec(B_alpha):
"""
    Create an alpha vector
Parameters
----------
B_alpha : patsy.design_info.DesignMatrix
Matrix of Alpha Factors
Returns
-------
alpha_vec : patsy.design_info.DesignMatrix
        alpha vector
"""
# TODO: Implement
alpha_vec = 1e-4 * np.sum(B_alpha, axis=1)
return alpha_vec
alpha_vec = get_alpha_vec(B_alpha)
###Output
_____no_output_____
###Markdown
Optional ChallengeYou can also try a more sophisticated method of alpha combination, by choosing the holding for each alpha based on the same metric of its performance, such as the factor returns, or sharpe ratio. To make this more realistic, you can calculate a rolling average of the sharpe ratio, which is updated for each day. Remember to only use data that occurs prior to the date of each optimization, and not data that occurs in the future. Also, since factor returns and sharpe ratios may be negative, consider using a `max` function to give the holdings a lower bound of zero. Objective function (TODO)The objective function is given by:$$f(\mathbf{h}) = \frac{1}{2}\kappa \mathbf{h}_t^T\mathbf{Q}^T\mathbf{Q}\mathbf{h}_t + \frac{1}{2} \kappa \mathbf{h}_t^T \mathbf{S} \mathbf{h}_t - \mathbf{\alpha}^T \mathbf{h}_t + (\mathbf{h}_{t} - \mathbf{h}_{t-1})^T \mathbf{\Lambda} (\mathbf{h}_{t} - \mathbf{h}_{t-1})$$Where the terms correspond to: factor risk + idiosyncratic risk - expected portfolio return + transaction costs, respectively. We should also note that $\textbf{Q}^T\textbf{Q}$ is defined to be the same as $\textbf{BFB}^T$. Review the lessons if you need a refresher of how we get $\textbf{Q}$.Our objective is to minimize this objective function. To do this, we will use Scipy's optimization function:`scipy.optimize.fmin_l_bfgs_b(func, initial_guess, func_gradient)`where:* **func** : is the function we want to minimize* **initial_guess** : is our initial guess* **func_gradient** : is the gradient of the function we want to minimizeSo, in order to use the `scipy.optimize.fmin_l_bfgs_b` function we first need to define its parameters.In the code below, implement the function `obj_func(h)` that corresponds to the objective function above that we want to minimize. We will set the risk aversion to be `1.0e-6`.
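For the optional challenge described above, one possible (and deliberately rough) sketch of performance-weighted alpha combination is to weight each alpha by a trailing Sharpe ratio of its factor returns, shifted by one day to avoid look-ahead and floored at zero. The names in the commented usage lines (`facret_df`, `alpha_factors`, `B_alpha`) refer to objects built earlier in this notebook, and this weighting scheme is just one choice:

```python
import pandas as pd

def rolling_sharpe_weights(factor_returns, window=60):
    """Trailing-Sharpe weights for each alpha column, using only past data."""
    mean = factor_returns.rolling(window).mean()
    std = factor_returns.rolling(window).std()
    sharpe = (mean / std).shift(1)     # shift(1): only information prior to each date
    return sharpe.clip(lower=0.0)      # floor negative-performing alphas at zero

# usage sketch:
# weights = rolling_sharpe_weights(facret_df[alpha_factors])
# alpha_vec = 1e-4 * (B_alpha * weights.loc[current_date].values).sum(axis=1)
```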
###Code
risk_aversion = 1.0e-6
def get_obj_func(h0, risk_aversion, Q, specVar, alpha_vec, Lambda):
def obj_func(h):
# print(f'h: {h.shape}'
# f'h0: {h0.shape}'
# f'Q: {Q.shape}'
# f'risk_aversion: {risk_aversion}'
# f'specVar: {np.diag(specVar).shape}'
# f'alpha_vec: {alpha_vec.shape}'
# f'Lambda: {np.diag(Lambda).shape}')
# h: (2265,)h0: (2265,)Q: (77, 2265)risk_aversion: 1e-06specVar: (2265, 2265)alpha_vec: (2265,)Lambda: (2265, 2265)
# TODO: Implement
factor_risk = 0.5 * risk_aversion * scipy.linalg.norm(Q @ h) ** 2
idiosyncratic_risk = 0.5 * risk_aversion * np.dot(h ** 2, specVar)
portfolio_return = np.dot(h, alpha_vec)
trans_cost = np.dot((h - h0) ** 2, Lambda)
f = factor_risk + idiosyncratic_risk - portfolio_return + trans_cost
return f
return obj_func
###Output
_____no_output_____
###Markdown
Gradient (TODO)Now that we can generate the objective function using `get_obj_func`, we can now create a similar function with its gradient. The reason we're interested in calculating the gradient is so that we can tell the optimizer in which direction, and how much, it should shift the portfolio holdings in order to improve the objective function (minimize variance, minimize transaction cost, and maximize expected portfolio return).Before we implement the function we first need to know what the gradient looks like. The gradient, or derivative of the objective function, with respect to the portfolio holdings h, is given by: $$f'(\mathbf{h}) = \frac{1}{2}\kappa (2\mathbf{Q}^T\mathbf{Qh}) + \frac{1}{2}\kappa (2\mathbf{Sh}) - \mathbf{\alpha} + 2(\mathbf{h}_{t} - \mathbf{h}_{t-1}) \mathbf{\Lambda}$$In the code below, implement the function `grad(h)` that corresponds to the function of the gradient given above.
###Code
def get_grad_func(h0, risk_aversion, Q, QT, specVar, alpha_vec, Lambda):
def grad_func(h):
# TODO: Implement
g = risk_aversion * (QT @ (Q @ h)) + risk_aversion * specVar * h - alpha_vec + 2 * (h - h0) * Lambda
return np.asarray(g)
return grad_func
###Output
_____no_output_____
###Markdown
Optimize (TODO)Now that we can generate the objective function using `get_obj_func`, and its corresponding gradient using `get_grad_func`, we are ready to minimize the objective function using Scipy's optimization function. For this, we will use our initial holdings as our `initial_guess` parameter.In the cell below, implement the function `get_h_star` that optimizes the objective function. Use the objective function (`obj_func`) and gradient function (`grad_func`) provided within `get_h_star` to optimize the objective function using the `scipy.optimize.fmin_l_bfgs_b` function.
###Code
risk_aversion = 1.0e-6
Q = np.matmul(scipy.linalg.sqrtm(Fvar), BT)
QT = Q.transpose()
def get_h_star(risk_aversion, Q, QT, specVar, alpha_vec, h0, Lambda):
"""
Optimize the objective function
Parameters
----------
risk_aversion : int or float
Trader's risk aversion
Q : patsy.design_info.DesignMatrix
Q Matrix
QT : patsy.design_info.DesignMatrix
Transpose of the Q Matrix
specVar: Pandas Series
Specific Variance
alpha_vec: patsy.design_info.DesignMatrix
alpha vector
h0 : Pandas Series
initial holdings
Lambda : Pandas Series
Lambda
Returns
-------
optimizer_result[0]: Numpy ndarray
optimized holdings
"""
obj_func = get_obj_func(h0, risk_aversion, Q, specVar, alpha_vec, Lambda)
grad_func = get_grad_func(h0, risk_aversion, Q, QT, specVar, alpha_vec, Lambda)
# TODO: Implement
optimizer_result = scipy.optimize.fmin_l_bfgs_b(obj_func, h0, fprime=grad_func)
return optimizer_result[0]
h_star = get_h_star(risk_aversion, Q, QT, specVar, alpha_vec, h0, Lambda)
###Output
_____no_output_____
###Markdown
After we have optimized our objective function we can now use, `h_star` to create our optimal portfolio:
###Code
opt_portfolio = pd.DataFrame(data = {"Barrid" : universe['Barrid'], "h.opt" : h_star})
###Output
_____no_output_____
###Markdown
Risk Exposures (TODO)We can also use `h_star` to calculate our portfolio's risk and alpha exposures.In the cells below implement the functions `get_risk_exposures` and `get_portfolio_alpha_exposure` that calculate the portfolio's risk and alpha exposures, respectively.
###Code
def get_risk_exposures(B, BT, h_star):
"""
Calculate portfolio's Risk Exposure
Parameters
----------
B : patsy.design_info.DesignMatrix
Matrix of Risk Factors
BT : patsy.design_info.DesignMatrix
Transpose of Matrix of Risk Factors
h_star: Numpy ndarray
optimized holdings
Returns
-------
risk_exposures : Pandas Series
Risk Exposures
"""
# TODO: Implement
risk_exposures = pd.Series(BT @ h_star, index=colnames(B))
return risk_exposures
risk_exposures = get_risk_exposures(B, BT, h_star)
def get_portfolio_alpha_exposure(B_alpha, h_star):
"""
Calculate portfolio's Alpha Exposure
Parameters
----------
B_alpha : patsy.design_info.DesignMatrix
Matrix of Alpha Factors
h_star: Numpy ndarray
optimized holdings
Returns
-------
alpha_exposures : Pandas Series
Alpha Exposures
"""
# TODO: Implement
return pd.Series(B_alpha.T @ h_star, index = colnames(B_alpha))
portfolio_alpha_exposure = get_portfolio_alpha_exposure(B_alpha, h_star)
###Output
_____no_output_____
###Markdown
Transaction Costs (TODO)We can also use `h_star` to calculate our total transaction costs:$$\mbox{tcost} = \sum_i^{N} \lambda_{i} (h_{i,t} - h_{i,t-1})^2$$In the cell below, implement the function `get_total_transaction_costs` that calculates the total transaction costs according to the equation above:
###Code
def get_total_transaction_costs(h0, h_star, Lambda):
"""
Calculate Total Transaction Costs
Parameters
----------
h0 : Pandas Series
initial holdings (before optimization)
h_star: Numpy ndarray
optimized holdings
Lambda : Pandas Series
Lambda
Returns
-------
total_transaction_costs : float
Total Transaction Costs
"""
# TODO: Implement
return np.dot((h_star - h0) ** 2, Lambda)
total_transaction_costs = get_total_transaction_costs(h0, h_star, Lambda)
###Output
_____no_output_____
###Markdown
Putting It All TogetherWe can now take all the above functions we created above and use them to create a single function, `form_optimal_portfolio` that returns the optimal portfolio, the risk and alpha exposures, and the total transactions costs.
###Code
def form_optimal_portfolio(df, previous, risk_aversion):
df = df.merge(previous, how = 'left', on = 'Barrid')
df = clean_nas(df)
    df.loc[df['SpecRisk'] == 0, 'SpecRisk'] = median(df['SpecRisk'])
universe = get_universe(df)
date = str(int(universe['DataDate'][1]))
all_factors = factors_from_names(list(universe))
risk_factors = setdiff(all_factors, alpha_factors)
h0 = universe['h.opt.previous']
B = model_matrix(get_formula(risk_factors, "SpecRisk"), universe)
BT = B.transpose()
specVar = (0.01 * universe['SpecRisk']) ** 2
Fvar = diagonal_factor_cov(date, B)
Lambda = get_lambda(universe)
B_alpha = get_B_alpha(alpha_factors, universe)
alpha_vec = get_alpha_vec(B_alpha)
Q = np.matmul(scipy.linalg.sqrtm(Fvar), BT)
QT = Q.transpose()
h_star = get_h_star(risk_aversion, Q, QT, specVar, alpha_vec, h0, Lambda)
opt_portfolio = pd.DataFrame(data = {"Barrid" : universe['Barrid'], "h.opt" : h_star})
risk_exposures = get_risk_exposures(B, BT, h_star)
portfolio_alpha_exposure = get_portfolio_alpha_exposure(B_alpha, h_star)
total_transaction_costs = get_total_transaction_costs(h0, h_star, Lambda)
return {
"opt.portfolio" : opt_portfolio,
"risk.exposures" : risk_exposures,
"alpha.exposures" : portfolio_alpha_exposure,
"total.cost" : total_transaction_costs}
###Output
_____no_output_____
###Markdown
Build tradelistThe trade list is the most recent optimal asset holdings minus the previous day's optimal holdings.
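`build_tradelist` below only aligns the previous and new holdings; the trade per asset is then simply the difference of the two columns. A small standalone illustration with made-up holdings:

```python
import pandas as pd

prev = pd.DataFrame({'Barrid': ['A', 'B'], 'h.opt.previous': [100.0, -50.0]})
curr = pd.DataFrame({'Barrid': ['B', 'C'], 'h.opt': [25.0, 40.0]})

tmp = prev.merge(curr, how='outer', on='Barrid')
tmp[['h.opt.previous', 'h.opt']] = tmp[['h.opt.previous', 'h.opt']].fillna(0.0)
tmp['trade'] = tmp['h.opt'] - tmp['h.opt.previous']  # buy (+) / sell (-) per asset
print(tmp)   # A: -100 (close out), B: +75, C: +40
```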
###Code
def build_tradelist(prev_holdings, opt_result):
tmp = prev_holdings.merge(opt_result['opt.portfolio'], how='outer', on = 'Barrid')
tmp['h.opt.previous'] = np.nan_to_num(tmp['h.opt.previous'])
tmp['h.opt'] = np.nan_to_num(tmp['h.opt'])
return tmp
###Output
_____no_output_____
###Markdown
Save optimal holdings as previous optimal holdings.As we walk through each day, we'll re-use the column for previous holdings by storing the "current" optimal holdings as the "previous" optimal holdings.
###Code
def convert_to_previous(result):
prev = result['opt.portfolio']
prev = prev.rename(index=str, columns={"h.opt": "h.opt.previous"}, copy=True, inplace=False)
return prev
###Output
_____no_output_____
###Markdown
Run the backtestWalk through each day, calculating the optimal portfolio holdings and trade list. This may take some time, but should finish sooner if you've chosen all the optimizations you learned in the lessons.
###Code
trades = {}
port = {}
for dt in tqdm(my_dates, desc='Optimizing Portfolio', unit='day'):
date = dt.strftime('%Y%m%d')
result = form_optimal_portfolio(frames[date], previous_holdings, risk_aversion)
trades[date] = build_tradelist(previous_holdings, result)
port[date] = result
previous_holdings = convert_to_previous(result)
###Output
Optimizing Portfolio: 100%|██████████| 252/252 [21:29<00:00, 5.12s/day]
###Markdown
Profit-and-Loss (PnL) attribution (TODO)Profit and Loss is the aggregate realized daily returns of the assets, weighted by the optimal portfolio holdings chosen, and summed up to get the portfolio's profit and loss.The PnL attributed to the alpha factors equals the factor returns times factor exposures for the alpha factors. $$\mbox{PnL}_{alpha}= f \times b_{alpha}$$Similarly, the PnL attributed to the risk factors equals the factor returns times factor exposures of the risk factors.$$\mbox{PnL}_{risk} = f \times b_{risk}$$In the code below, in the function `build_pnl_attribution` calculate the PnL attributed to the alpha factors, the PnL attributed to the risk factors, and attribution to cost.
###Code
## assumes v, w are pandas Series
def partial_dot_product(v, w):
common = v.index.intersection(w.index)
return np.sum(v[common] * w[common])
def build_pnl_attribution():
df = pd.DataFrame(index = my_dates)
for dt in my_dates:
date = dt.strftime('%Y%m%d')
p = port[date]
fr = facret[date]
mf = p['opt.portfolio'].merge(frames[date], how = 'left', on = "Barrid")
mf['DlyReturn'] = wins(mf['DlyReturn'], -0.5, 0.5)
df.at[dt,"daily.pnl"] = np.sum(mf['h.opt'] * mf['DlyReturn'])
# TODO: Implement
df.at[dt,"attribution.alpha.pnl"] = partial_dot_product(fr,p['alpha.exposures'])
df.at[dt,"attribution.risk.pnl"] = partial_dot_product(fr,p['risk.exposures'])
df.at[dt,"attribution.cost"] = p['total.cost']
return df
attr = build_pnl_attribution()
for column in attr.columns:
plt.plot(attr[column].cumsum(), label=column)
plt.legend(loc='upper left')
plt.xlabel('Date')
plt.ylabel('PnL Attribution')
plt.show()
###Output
_____no_output_____
###Markdown
Build portfolio characteristics (TODO)Calculate the sum of long positions, short positions, net positions, gross market value, and amount of dollars traded.In the code below, implement the function `build_portfolio_characteristics` to compute these quantities.
###Code
def build_portfolio_characteristics():
df = pd.DataFrame(index = my_dates)
for dt in my_dates:
date = dt.strftime('%Y%m%d')
p = port[date]
tradelist = trades[date]
h = p['opt.portfolio']['h.opt']
# TODO: Implement
long = np.sum(h[h>0])
short = np.sum(h[h<0])
df.at[dt,"long"] = long
df.at[dt,"short"] = short
df.at[dt,"net"] = long + short
df.at[dt,"gmv"] = np.abs(long) + np.abs(short)
df.at[dt,"traded"] = np.sum(np.abs(tradelist['h.opt'] - tradelist['h.opt.previous']))
return df
pchar = build_portfolio_characteristics()
for column in pchar.columns:
plt.plot(pchar[column], label=column)
plt.legend(loc='upper left')
plt.xlabel('Date')
plt.ylabel('Portfolio')
plt.show()
###Output
_____no_output_____ |
examples/00_quick_start/fastai_movielens.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
from tempfile import TemporaryDirectory
import sys
import os
import itertools
import pandas as pd
import numpy as np
import scrapbook as sb
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from recommenders.utils.timer import Timer
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_stratified_split
from recommenders.models.fastai.fastai_utils import cartesian_product, score
from recommenders.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from recommenders.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Pandas version: 0.25.3
Fast AI version: 1.0.46
Torch version: 1.4.0
Cuda Available: False
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
with Timer() as preprocess_time:
data = CollabDataBunch.from_df(train_valid_df,
user_name=USER,
item_name=ITEM,
rating_name=RATING,
valid_pct=0)
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.htmlEmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model.Although ratings can only range from 1-5, we are setting the range of possible ratings to a range from 0 to 5.5 -- that will allow the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight-decay for regularization.
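For intuition, `EmbeddingDotBias` scores a (user, item) pair as the dot product of the two embedding vectors plus a per-user and per-item bias, and, when `y_range` is given, squashes the result into that range with a scaled sigmoid. A rough standalone sketch of that scoring rule (the real fastai implementation differs in details such as batching and padding indices):

```python
import torch

def dot_bias_score(u_emb, i_emb, u_bias, i_bias, y_range=(0.0, 5.5)):
    # dot product of the latent factors plus both bias terms
    raw = (u_emb * i_emb).sum() + u_bias + i_bias
    lo, hi = y_range
    return torch.sigmoid(raw) * (hi - lo) + lo   # squashed into y_range

u, i = torch.randn(40), torch.randn(40)          # 40 latent factors each
print(dot_bias_score(u, i, torch.tensor(0.1), torch.tensor(-0.2)))
```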
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs setting the maximal learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
with Timer() as train_time:
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inferencing / generating recommendations
###Code
tmp = TemporaryDirectory()
model_path = os.path.join(tmp.name, "movielens_model.pkl")
learn.export(model_path)
###Output
_____no_output_____
###Markdown
Generating RecommendationsLoad the learner from disk.
###Code
learner = load_learner(tmp.name, "movielens_model.pkl")
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not known in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/items combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendation
###Code
with Timer() as test_time:
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.9734 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026115
NDCG: 0.155065
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate the movie. We need to score the `test_df` user-item pairs only.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902379
MAE: 0.712163
Explained variance: 0.346523
R squared: 0.345672
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
tmp.cleanup()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import os
import itertools
import pandas as pd
import numpy as np
import papermill as pm
import scrapbook as sb
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.common.timer import Timer
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Pandas version: 0.25.3
Fast AI version: 1.0.46
Torch version: 1.4.0
Cuda Available: False
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
with Timer() as preprocess_time:
data = CollabDataBunch.from_df(train_valid_df,
user_name=USER,
item_name=ITEM,
rating_name=RATING,
valid_pct=0)
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats, as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model. Although ratings can only range from 1 to 5, we are setting the range of possible ratings to 0 to 5.5 -- this allows the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs, setting the maximum learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
with Timer() as train_time:
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inference and generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating Recommendations: load the learner from disk.
###Code
learner = load_learner(path=".", fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not seen in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendations
###Code
with Timer() as test_time:
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.9734 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026115
NDCG: 0.155065
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate a movie; for that we only need to score the user-item pairs in `test_df`.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902379
MAE: 0.712163
Explained variance: 0.346523
R squared: 0.345672
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. FastAI Recommender: this notebook shows how to use the [FastAI](https://fast.ai) recommender, which uses [PyTorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
from tempfile import TemporaryDirectory
import sys
import os
import itertools
import pandas as pd
import numpy as np
import scrapbook as sb
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from recommenders.utils.timer import Timer
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_stratified_split
from recommenders.models.fastai.fastai_utils import cartesian_product, score
from recommenders.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from recommenders.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Pandas version: 0.25.3
Fast AI version: 1.0.46
Torch version: 1.4.0
Cuda Available: False
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
with Timer() as preprocess_time:
data = CollabDataBunch.from_df(train_valid_df,
user_name=USER,
item_name=ITEM,
rating_name=RATING,
valid_pct=0)
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats, as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model. Although ratings can only range from 1 to 5, we are setting the range of possible ratings to 0 to 5.5 -- this allows the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs, setting the maximum learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
with Timer() as train_time:
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inference and generating recommendations
###Code
tmp = TemporaryDirectory()
model_path = os.path.join(tmp.name, "movielens_model.pkl")
learn.export(model_path)
###Output
_____no_output_____
###Markdown
Generating Recommendations: load the learner from disk.
###Code
learner = load_learner(tmp.name, "movielens_model.pkl")
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not seen in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendations
###Code
with Timer() as test_time:
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.9734 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026115
NDCG: 0.155065
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate a movie; for that we only need to score the user-item pairs in `test_df`.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902379
MAE: 0.712163
Explained variance: 0.346523
R squared: 0.345672
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results for tests with scrapbook
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
tmp.cleanup()
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs, setting the maximum learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
with Timer() as train_time:
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inference and generating recommendations
###Code
tmp = TemporaryDirectory()
model_path = os.path.join(tmp.name, "movielens_model.pkl")
learn.export(model_path)
###Output
_____no_output_____
###Markdown
Generating Recommendations: load the learner from disk.
###Code
learner = load_learner(tmp.name, "movielens_model.pkl")
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not seen in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendations
###Code
with Timer() as test_time:
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.9734 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026115
NDCG: 0.155065
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate a movie; for that we only need to score the user-item pairs in `test_df`.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902379
MAE: 0.712163
Explained variance: 0.346523
R squared: 0.345672
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
tmp.cleanup()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. FastAI Recommender: this notebook shows how to use the [FastAI](https://fast.ai) recommender, which uses [PyTorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
from tempfile import TemporaryDirectory
import sys
import os
import itertools
import pandas as pd
import numpy as np
import scrapbook as sb
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.common.timer import Timer
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Pandas version: 0.25.3
Fast AI version: 1.0.46
Torch version: 1.4.0
Cuda Available: False
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
with Timer() as preprocess_time:
data = CollabDataBunch.from_df(train_valid_df,
user_name=USER,
item_name=ITEM,
rating_name=RATING,
valid_pct=0)
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats, as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model. Although ratings can only range from 1 to 5, we are setting the range of possible ratings to 0 to 5.5 -- this allows the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs, setting the maximum learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
with Timer() as train_time:
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inference and generating recommendations
###Code
tmp = TemporaryDirectory()
model_path = os.path.join(tmp.name, "movielens_model.pkl")
learn.export(model_path)
###Output
_____no_output_____
###Markdown
Generating Recommendations: load the learner from disk.
###Code
learner = load_learner(tmp.name, "movielens_model.pkl")
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not seen in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendations
###Code
with Timer() as test_time:
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.9734 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026115
NDCG: 0.155065
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate a movie; for that we only need to score the user-item pairs in `test_df`.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902379
MAE: 0.712163
Explained variance: 0.346523
R squared: 0.345672
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results for tests with scrapbook
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
tmp.cleanup()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. FastAI Recommender: this notebook shows how to use the [FastAI](https://fast.ai) recommender, which uses [PyTorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
from tempfile import TemporaryDirectory
import sys
sys.path.append("../../")
import os
import itertools
import pandas as pd
import numpy as np
import scrapbook as sb
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.common.timer import Timer
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Pandas version: 0.25.3
Fast AI version: 1.0.46
Torch version: 1.4.0
Cuda Available: False
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
with Timer() as preprocess_time:
data = CollabDataBunch.from_df(train_valid_df,
user_name=USER,
item_name=ITEM,
rating_name=RATING,
valid_pct=0)
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats, as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model. Although ratings can only range from 1 to 5, we are setting the range of possible ratings to 0 to 5.5 -- this allows the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. FastAI Recommender: this notebook shows how to use the [FastAI](https://fast.ai) recommender, which uses [PyTorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
from tempfile import TemporaryDirectory
import sys
import os
import pandas as pd
import numpy as np
import scrapbook as sb
import torch, fastai
from fastai.collab import collab_learner, CollabDataBunch, load_learner
from recommenders.utils.constants import (
DEFAULT_USER_COL as USER,
DEFAULT_ITEM_COL as ITEM,
DEFAULT_RATING_COL as RATING,
DEFAULT_TIMESTAMP_COL as TIMESTAMP,
DEFAULT_PREDICTION_COL as PREDICTION
)
from recommenders.utils.timer import Timer
from recommenders.datasets import movielens
from recommenders.datasets.python_splitters import python_stratified_split
from recommenders.models.fastai.fastai_utils import cartesian_product, score
from recommenders.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from recommenders.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.11 | packaged by conda-forge | (default, Aug 5 2020, 20:09:42)
[GCC 7.5.0]
Pandas version: 0.25.3
Fast AI version: 1.0.46
Torch version: 1.4.0
Cuda Available: False
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
# Remove "cold" users from test set
test_df = test_df[test_df.userID.isin(train_valid_df.userID)]
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
with Timer() as preprocess_time:
data = CollabDataBunch.from_df(train_valid_df,
user_name=USER,
item_name=ITEM,
rating_name=RATING,
valid_pct=0)
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats, as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model. Although ratings can only range from 1 to 5, we are setting the range of possible ratings to 0 to 5.5 -- this allows the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs, setting the maximum learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
with Timer() as train_time:
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inference and generating recommendations
###Code
tmp = TemporaryDirectory()
model_path = os.path.join(tmp.name, "movielens_model.pkl")
learn.export(model_path)
###Output
_____no_output_____
###Markdown
Generating Recommendations: load the learner from disk.
###Code
learner = load_learner(tmp.name, "movielens_model.pkl")
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not seen in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendations
###Code
with Timer() as test_time:
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.9734 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026115
NDCG: 0.155065
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate a movie; for that we only need to score the user-item pairs in `test_df`.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902379
MAE: 0.712163
Explained variance: 0.346523
R squared: 0.345672
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results for tests with scrapbook
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time.interval)
sb.glue("test_time", test_time.interval)
tmp.cleanup()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. FastAI Recommender: this notebook shows how to use the [FastAI](https://fast.ai) recommender, which uses [PyTorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import time
import os
import itertools
import pandas as pd
import numpy as np
import papermill as pm
import scrapbook as sb
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
Pandas version: 0.24.1
Fast AI version: 1.0.46
Torch version: 1.0.1.post2
Cuda Available: True
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
start_time = time.time()
data = CollabDataBunch.from_df(train_valid_df, user_name=USER, item_name=ITEM, rating_name=RATING, valid_pct=0)
preprocess_time = time.time() - start_time
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats, as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model. Although ratings can only range from 1 to 5, we are setting the range of possible ratings to 0 to 5.5 -- this allows the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs, setting the maximum learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
start_time = time.time()
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
train_time = time.time() - start_time + preprocess_time
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inference and generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating Recommendations: load the learner from disk.
###Code
learner = load_learner(path=".", fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not seen in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendations
###Code
start_time = time.time()
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
test_time = time.time() - start_time
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.928511142730713 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026112
NDCG: 0.155062
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but this is expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next, look at how well the model predicts how the user would rate a movie; for that we only need to score the user-item pairs in `test_df`.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902386
MAE: 0.712164
Explained variance: 0.346513
R squared: 0.345662
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
sb.glue("map", eval_map)
sb.glue("ndcg", eval_ndcg)
sb.glue("precision", eval_precision)
sb.glue("recall", eval_recall)
sb.glue("rmse", eval_rmse)
sb.glue("mae", eval_mae)
sb.glue("exp_var", eval_exp_var)
sb.glue("rsquared", eval_r2)
sb.glue("train_time", train_time)
sb.glue("test_time", test_time)
###Output
/data/anaconda/envs/reco_gpu/lib/python3.6/site-packages/ipykernel_launcher.py:2: DeprecationWarning: Function record is deprecated and will be removed in verison 1.0.0 (current version 0.19.0). Please see `scrapbook.glue` (nteract-scrapbook) as a replacement for this functionality.
|
NoSQL/Cassandra_working_03_2020.ipynb | ###Markdown
* http://cassandra.apache.org/doc/latest/getting_started/ * https://help.aiven.io/en/articles/1803299-getting-started-with-aiven-for-cassandra
###Code
# https://github.com/datastax/python-driver
!pip install cassandra-driver
!pip install --user cassandra-driver
from cassandra.cluster import Cluster
cluster = Cluster()
class Config:
ca_path='ca.pem'
host='cassandra-3630668e-valdis-c169.aivencloud.com'
password='realpwneeded'
port=23114
username='avnadmin'
# Copyright (c) 2018 Aiven, Helsinki, Finland. https://aiven.io/
import ssl
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy
def cassandra_example(args):
auth_provider = PlainTextAuthProvider(args.username, args.password)
ssl_options = {"ca_certs": args.ca_path, "cert_reqs": ssl.CERT_REQUIRED}
with Cluster([args.host], port=args.port, ssl_options=ssl_options, auth_provider=auth_provider,
load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='aiven')) as cluster:
with cluster.connect() as session:
# Create a keyspace
session.execute("""
CREATE KEYSPACE IF NOT EXISTS example_keyspace
WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'aiven': 3}
""")
# Create a table
session.execute("""
CREATE TABLE IF NOT EXISTS example_keyspace.example_python (
id int PRIMARY KEY,
message text
)
""")
# Insert some data
for i in range(10):
session.execute("""
INSERT INTO example_keyspace.example_python (id, message)
VALUES (%s, %s)
""", (i, "Hello from Python!"))
# Read it back
for row in session.execute("SELECT id, message FROM example_keyspace.example_python"):
print("Row: id = {}, message = {}".format(row.id, row.message))
def cassandra_qry(args, qry):
auth_provider = PlainTextAuthProvider(args.username, args.password)
ssl_options = {"ca_certs": args.ca_path, "cert_reqs": ssl.CERT_REQUIRED}
with Cluster([args.host], port=args.port, ssl_options=ssl_options, auth_provider=auth_provider,
load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='aiven')) as cluster:
with cluster.connect() as session:
for row in session.execute(qry):
print(f"Row: id = {row.id}")
for key,value in row._asdict().items():
print(f"Column {key} - Value {value}")
auth_provider = PlainTextAuthProvider(args.username, args.password)
ssl_options = {"ca_certs": args.ca_path, "cert_reqs": ssl.CERT_REQUIRED}
cluster = Cluster([args.host], port=args.port, ssl_options=ssl_options, auth_provider=auth_provider,\
load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='aiven'))
session = cluster.connect()
session.execute("""
CREATE KEYSPACE IF NOT EXISTS mydb
WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'aiven': 3}
""")
sess = session  # one more alias
sess.execute("""
CREATE TABLE IF NOT EXISTS mydb.tasks(
id int PRIMARY KEY,
task text,
created timestamp,
finished boolean,
cost float
)
""")
r = sess.execute("""
INSERT INTO mydb.tasks (id, task) VALUES (101, 'Buy Milk')
""")
print(r)
id = 10
import random
id += 1
cost = 4 + random.random()*2
r = sess.execute(f"""
INSERT INTO mydb.tasks (id, task, created, finished, cost)
VALUES ({id}, 'Buy Dinner', toTimeStamp(now()), False, {cost})
""")
print(r)
r = sess.execute("""
SELECT * FROM mydb.tasks
""")
reslist = list(r)
print(len(reslist))
dinnerlist = [row for row in reslist if row.task == 'Buy Dinner']
dinnerlist
cheapfood = [row for row in dinnerlist if row.cost < 5]
cheapfood
id += 1
r = sess.execute(f"""
INSERT INTO mydb.tasks (id, task, created)
VALUES ({id}, 'Get Dinner', toTimeStamp(now()))
""")
print(r)
r = sess.execute("""
SELECT * FROM mydb.tasks
""")
results = list(r)
len(results)
r = sess.execute("""
SELECT * FROM mydb.tasks
WHERE id = 101
""")
results = list(r)
len(results)
print(results)
row = results[0]
row
type(row)
dir(row)
for value in row:
print(value)
row._fields
row._asdict()
for key,value in row._asdict().items():
print(f"Column {key} - Value {value}")
# qry = "SELECT id, message FROM example_keyspace.example_python"
qry = "INSERT INTO example_keyspace.example_python (id, message) VALUES (15, 'Valdis')"
qry = "SELECT id, message FROM example_keyspace.example_python"
cassandra_qry(args, qry)
args = Config()
args.ca_path
args.username
cassandra_example(args)
# Create new table mydb.users in your Cassandra DB
# EXTRACT - Read ALL data from JSON API at Mockaroo (could use my at https://my.api.mockaroo.com/mar07.json?key=58227cb0)
# TRANSFORM
# LOAD Insert ALL data into mydb.users
## For extra challenge add timestamp into users table
# SELECT ALL from users
# filter all users from Italy (with .it)
import requests
url = "https://my.api.mockaroo.com/mar07.json?key=58227cb0"
req = requests.get(url)
req.status_code
data = req.json() #requests has json decoding built in
len(data)
data[:5]
# https://docs.datastax.com/en/dse/6.0/cql/cql/cql_reference/cql_commands/cqlDropTable.html
r = sess.execute("""
DROP TABLE IF EXISTS mydb.users ;
""")
# https://docs.datastax.com/en/dse/6.0/cql/cql/cql_reference/cql_commands/cqlCreateTable.html#cqlCreateTable
r = sess.execute("""
CREATE TABLE IF NOT EXISTS mydb.users(
id int PRIMARY KEY,
first_name text,
created timestamp,
last_name text,
ip_address inet,
gender text,
email text
)
""")
# https://docs.datastax.com/en/dse/6.0/cql/cql/cql_reference/cql_commands/cqlAlterTable.html
sess.execute("""
ALTER TABLE mydb.users
ADD passcode int
""")
list(r.all())
r = sess.execute("""SELECT *
FROM system_schema.keyspaces""")
rlist = list(r)
len(rlist)
print(rlist)
tinfo = sess.execute("""
SELECT *
FROM system_schema.columns
WHERE keyspace_name = 'mydb'
AND table_name = 'users';""")
tlist = list(tinfo)
tlist
len(data)
frow = data[0]
frow
frow['id']
frow.get('id')
columns = [r.column_name for r in tlist]
columns
# https://docs.datastax.com/en/dse/6.0/cql/cql/cql_reference/cql_commands/cqlInsert.html
sess.execute("""
INSERT INTO mydb.users
(created, email, first_name, gender, id, ip_address, last_name, passcode)
VALUES (toTimeStamp(now()), %s, %s, %s, %s, %s, %s, %s)
""", (frow.get('email'), frow.get('first_name'), frow.get('gender'),
frow.get('id'), frow.get('ip_address'), frow.get('last_name'), 9000
))
res = sess.execute("""
SELECT * FROM mydb.users
""")
rlist = list(res)
rlist
for frow in data:
sess.execute("""
INSERT INTO mydb.users
(created, email, first_name, gender, id, ip_address, last_name, passcode)
VALUES (toTimeStamp(now()), %s, %s, %s, %s, %s, %s, %s)
""", (frow.get('email'), frow.get('first_name'), frow.get('gender'),
frow.get('id'), frow.get('ip_address'), frow.get('last_name'), 9000
))
res = sess.execute("""
SELECT * FROM mydb.users
""")
rlist = list(res)
len(rlist)
rlist[0]
rlist[0].email
japanese = [row for row in rlist if row.email.endswith('.jp')]
japanese
# https://docs.datastax.com/en/dse/5.1/cql/cql/cql_reference/cql_commands/cqlDropIndex.html
sess.execute("""
DROP INDEX mydb.users_email_idx
""")
# We need to create secondary index for filtering by WHERE with LIKE
# https://docs.datastax.com/en/cql-oss/3.3/cql/cql_using/useSecondaryIndex.html
sess.execute("""
CREATE INDEX ON mydb.users (email) ;""")
# TURNS OUT WE need a special SASI index
# http://www.tsoft.se/wp/2016/08/12/sql-like-operation-in-cassandra-is-possible-in-v3-4/
sess.execute("""
CREATE CUSTOM INDEX ON mydb.users (email)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {'mode': 'CONTAINS',
'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.StandardAnalyzer',
'case_sensitive': 'false'};
""")
# https://docs.datastax.com/en/dse/6.7/cql/cql/cql_using/search_index/nativeCqlQueryExamples.html
res = sess.execute("""
SELECT * FROM mydb.users
WHERE email LIKE '%.jp';
""")
rlist = list(res)
len(rlist)
rlist
###Output
_____no_output_____ |
notebooks/Data_Creation_from_Sample_Adult_and_Family.ipynb | ###Markdown
First some environment variables. We now use the files that are stored in the RAW directory. If we decide to change the data format by changing names, adding features, creating summary data frames, etc., we will save those files in the INTERIM directory.
###Code
PROJECT_DIR = os.path.dirname(dotenv_path)
RAW_DATA_DIR = PROJECT_DIR + os.environ.get("RAW_DATA_DIR")
INTERIM_DATA_DIR = PROJECT_DIR + os.environ.get("INTERIM_DATA_DIR")
files=os.environ.get("FILES").split()
print("Project directory is : {0}".format(PROJECT_DIR))
print("Raw data directory is : {0}".format(RAW_DATA_DIR))
print("Interim directory is : {0}".format(INTERIM_DATA_DIR))
###Output
Project directory is : /home/gsentveld/lunch_and_learn
Raw data directory is : /home/gsentveld/lunch_and_learn/data/raw
Interim directory is : /home/gsentveld/lunch_and_learn/data/interim
###Markdown
Importing pandas and matplotlib.pyplot
###Code
# The following jupyter notebook magic makes the plots appear in the notebook.
# If you run in batch mode, you have to save your plots as images.
%matplotlib inline
# matplotlib.pyplot is traditionally imported as plt
import matplotlib.pyplot as plt
# numpy is imported as np
import numpy as np
# Pandas is traditionaly imported as pd.
import pandas as pd
from pylab import rcParams
# some display options to size the figures. feel free to experiment
pd.set_option('display.max_columns', 25)
rcParams['figure.figsize'] = (17, 7)
###Output
_____no_output_____
###Markdown
Reading a file in Pandas. Reading a CSV file is really easy in Pandas. There are several formats that Pandas can deal with.

|Format Type|Data Description|Reader|Writer|
|---|---|---|---|
|text|CSV|read_csv|to_csv|
|text|JSON|read_json|to_json|
|text|HTML|read_html|to_html|
|text|Local clipboard|read_clipboard|to_clipboard|
|binary|MS Excel|read_excel|to_excel|
|binary|HDF5 Format|read_hdf|to_hdf|
|binary|Feather Format|read_feather|to_feather|
|binary|Msgpack|read_msgpack|to_msgpack|
|binary|Stata|read_stata|to_stata|
|binary|SAS|read_sas ||
|binary|Python Pickle Format|read_pickle|to_pickle|
|SQL|SQL|read_sql|to_sql|
|SQL|Google Big Query|read_gbq|to_gbq|

Psychological well-being among US adults with arthritis and the unmet need for mental health care: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5436776/pdf/oarrr-9-101.pdf. This article suggests a relationship between arthritis and serious psychological distress (SPD). First we will look at the article to recreate the data set from the NHIS data we got in session 2. We will use pd.read_csv(). As you will see, the Jupyter notebook prints out a very nice rendition of the DataFrame object that is the result.
###Code
family=pd.read_csv(RAW_DATA_DIR+'/familyxx.csv')
samadult=pd.read_csv(RAW_DATA_DIR+'/samadult.csv')
# Start with a data frame to collect all the data in
df = pd.DataFrame()
###Output
_____no_output_____
###Markdown
Mental health conditions. Individuals were determined to have SPD using the validated Kessler 6 (K6) scale [31, 32]. K6 scores are derived from responses to six questions asking how often in the past 30 days the individual felt "nervous", "restless", "hopeless", "worthless", "everything feels like an effort", and "so sad that nothing cheers them up", with responses ranging from 0 (none of the time) to 4 (all of the time). The responses for these six variables are summed to obtain the K6 score (maximum possible score of 24), and individuals with a score of ≥13 are considered to have SPD. Corresponding columns: ASINERV, ASIRSTLS, ASIHOPLS, ASIWTHLS, ASIEFFRT, ASISAD
###Code
# Calculate Kessler 6
# How often did you feel:
# nervous, restless, hopeless, worthless, everything is an effort, so sad nothing mattered.
# ASINERV, ASIRSTLS, ASIHOPLS, ASIWTHLS, ASIEFFRT, ASISAD
kessler_6_questions=['ASINERV', 'ASIRSTLS', 'ASIHOPLS', 'ASIWTHLS', 'ASIEFFRT', 'ASISAD']
# 1 ALL of the time
# 2 MOST of the time
# 3 SOME of the time
# 4 A LITTLE of the time
# 5 NONE of the time
# 7 Refused
# 8 Not ascertained
# 9 Don't know
# These have to be encoded as:
# 7, 8, 9 -> NaN
# 5 -> 0
# 4 -> 1
# 3 -> 2
# 2 -> 3
# 1 -> 4
kessler_6_map = { 1:4, 2:3, 3:2, 4:1, 5:0}
kessler_6=pd.DataFrame()
for col in kessler_6_questions:
kessler_6[col]=[ kessler_6_map.get(x, None) for x in samadult[col]]
df['SPD']= kessler_6.sum(axis=1)>=13
df['SPD'] = np.where(df['SPD'], 'Yes', 'No')
del kessler_6
df.head(5)
###Output
_____no_output_____
###Markdown
Arthritis indicator itself is very simple
###Code
# Arthritis Status
arth_map= {1:'Yes', 2:'No'}
df['ARTH1']=[ arth_map.get(x, None) for x in samadult['ARTH1']]
###Output
_____no_output_____
###Markdown
Chronic condition count. From the article: We created a chronic condition count based on the following eight nonarthritis chronic conditions: cancer (except nonmelanoma skin); heart condition (including coronary heart disease, angina, myocardial infarction, or any other heart condition); diabetes; hepatitis or liver condition; hypertension (on at least two different visits); respiratory conditions (current asthma, emphysema, or chronic bronchitis); stroke; and weak or failing kidneys, defined similar to the recommendations of Goodman et al.

From the NHIS file:
- CANEV, CNKIND22: cancer (except nonmelanoma skin)
- CHDEV: heart condition (including coronary heart disease, angina, myocardial infarction, or any other heart condition)
- DIBEV: diabetes
- AHEP, LIVEV: hepatitis or liver condition
- HYPDIFV: hypertension (on at least two different visits)
- AASMEV, EPHEV, CBRCHYR: respiratory conditions (current asthma, emphysema, or chronic bronchitis)
- STREV, ALCHRC8: stroke
- KIDWKYR: weak or failing kidneys
###Code
# the following variables are used for the chronic condition count
straight_chronic_condition_questions = ['CHDEV','DIBEV','HYPDIFV', 'KIDWKYR']
cancer_nonmelanoma_skin= ['CANEV','CNKIND22'] # CANEV minus CNKIND22
hep_liver=['AHEP','LIVEV']
respiratory=['AASMEV','EPHEV', 'CBRCHYR']
stroke=['STREV','ALCHRC8']
# Create a temporary dataframe and collect the straight forward conditions
chronic_ind=pd.DataFrame()
# this could be a bit too liberal with the Unknown and Refused to answer values
for col in straight_chronic_condition_questions:
chronic_ind[col]=samadult[col]==1
# Assume CANCER is false. Set to True for those diagnosed, and reset a few that were CNKIND22
chronic_ind['CANCER']=False
chronic_ind.loc[samadult['CANEV']==1,'CANCER'] = True
# override a few that have nonmelanoma skin
chronic_ind.loc[samadult['CNKIND22']==1, 'CANCER'] = False
# Assume Hepatitis or Liver condition is false and then set to True if either is reported
chronic_ind['HEPLIVER']=False
chronic_ind.loc[(samadult['AHEP']==1) | (samadult['LIVEV']==1), 'HEPLIVER'] = True
# Assume Respiratory condition is False and set to True if either of the three is reported
chronic_ind['RESPIRATORY']=False
chronic_ind.loc[(samadult['AASMEV']==1) | (samadult['EPHEV']==1) | (samadult['CBRCHYR']==1), 'RESPIRATORY'] = True
# Assume Stroke condition is false and then set to True if either flag is reported
chronic_ind['STROKE']=False
chronic_ind.loc[(samadult['STREV']==1) | (samadult['ALCHRC8']==1), 'STROKE'] = True
chronic_ind.head()
###Output
_____no_output_____
###Markdown
Now count the TRUE values over this dataframe. Keep the values for 0, 1 and 2, and call everything else >=3.
###Code
# Now count the chronic conditions and assign to df
chronic_ind['CHRONIC_CT']=np.array(np.sum(chronic_ind, axis=1))
chron_map = {0:'0',1:'1', 2:'2'}
df['CHRONIC_CT']=[chron_map.get(x, '>=3') for x in chronic_ind['CHRONIC_CT']]
del chronic_ind
df.head(10)
# General Health Status, does not exist as in study Very Good/Excellent, Good, Poor/Fair.
# Only and indicator if it was worse, same, better
# we will use it as a proxy.
status_map={1:"Very Good", 2:"Poor", 3: "Good"}
df['GENERAL_HEALTH_STATUS']=[status_map.get(x, None) for x in samadult['AHSTATYR']]
###Output
_____no_output_____
###Markdown
Another Pandas manipulation trick. Here we have a numerical range that we want to transform into 3 different categories. We could write a loop, but Pandas allows for a more Pythonic way to do this.
###Code
# BMI
bmi=pd.DataFrame()
bmi['BMI']=samadult['BMI']
bmi.loc[bmi['BMI'] < 2500, 'BMI_C'] = '<25'
bmi.loc[(bmi['BMI'] >= 2500)&(bmi['BMI'] < 3000), 'BMI_C'] = '25<30'
bmi.loc[(bmi['BMI'] >= 3000)&(bmi['BMI'] < 9999), 'BMI_C'] = '>30'
df['BMI_C']=bmi['BMI_C']
del bmi
###Output
_____no_output_____
###Markdown
Physical Activity Level. At least 150 moderate or 75 vigorous minutes per week. The questions are answered per day, per week, per month, or per year, but the file also has the answers recoded to units per week. Those units are either minutes or hours, so we have to do some math to figure out whether we get more than 150 moderate-equivalent minutes. This is another interesting way to manipulate data, this time using `apply` and a user-defined function.
###Code
def determine_activity(x):
minutes = 0
if x['VIGLNGTP']==1:
minutes = minutes + x['VIGLNGNO']*2
elif x['VIGLNGTP']==2:
minutes = minutes + x['VIGLNGNO']*120
if x['MODLNGTP']==1:
minutes = minutes + x['MODLNGNO']
elif x['MODLNGTP']==2:
minutes = minutes + x['MODLNGNO']*60
return 'Meets' if minutes >= 150 else 'Does not meet'
physical_activity=pd.DataFrame()
physical_activity=samadult[['VIGLNGNO','VIGLNGTP', 'MODLNGNO', 'MODLNGTP']].copy()
physical_activity['ACTIVITY']=physical_activity.apply(determine_activity, axis=1)
df['ACTIVITY']=physical_activity['ACTIVITY']
del physical_activity
df.head(20)
###Output
_____no_output_____
###Markdown
Similar activities for Age, Sex, and Race. Here we do similar transformations for Age, Sex and Race, and we start to see that the coding of the data is slightly different than is suggested in the article for some fields. This is interesting, as it will skew the category probabilities.
###Code
# Age
age=pd.DataFrame()
age['AGE_P']=samadult['AGE_P']
age.loc[age['AGE_P'] < 45, 'AGE_C'] = '18-44'
age.loc[(age['AGE_P'] >= 45)&(age['AGE_P'] < 65), 'AGE_C'] = '45-64'
age.loc[age['AGE_P'] >= 65, 'AGE_C'] = '65-'
df['AGE_C']=age['AGE_C']
del age
# Sex
df['SEX']=[ 'Male' if x == 1 else 'Female' for x in samadult['SEX']]
# Race. Not exactly a match with the study. Not sure why.
# RACERPI2
race_map= {1: 'White', 2: 'Black/African American', 3:'AIAN', 4: 'Asian',5: 'not releasable',6: 'Multiple'}
df['RACE']=[ race_map.get(x, None) for x in samadult['RACERPI2']]
###Output
_____no_output_____
###Markdown
Some fields are not found or are hard to reconstruct. Education Level and Employment status are not encoded as expected. Education level can't be found at all, and employment status is a mix between work status and "why did you not work last week", which is an odd way to determine whether someone is retired or a student.
###Code
# Educational level:
# Less than high school
# High school diploma
# Some college or Associates degree
# College or greater
# Can't find it in data?
# Employment status: complex between workstatus and why not worked last week, logic is not described
# Maybe at least get "Out of Work", "Retired", "Other"?
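# A hedged sketch of the "at least get Out of Work / Retired / Other" idea.
# The work-status column name and its codes below are assumptions for
# illustration only -- they must be checked against the NHIS codebook.
# employment_map = {2: 'Out of Work', 5: 'Retired'}
# df['EMPLOYMENT'] = [employment_map.get(x, 'Other') for x in samadult['WRKSTATUS']]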
###Output
_____no_output_____
###Markdown
Do the same for the other fields
###Code
# marital status
# R_MARITL
# 0 Under 14 years -> will combine that with Never Married
# 1 Married - spouse in household \
# 2 Married - spouse not in household > -- will combine these
# 3 Married - spouse in household unknown /
# 4 Widowed
# 5 Divorced \ will combine these
# 6 Separated /
# 7 Never married
# 8 Living with partner
# 9 Unknown marital status -> will combine with 7
marital_map = { 0: "Never Married"
, 1: "Married"
, 2: "Married"
, 3: "Married"
, 4: "Widowed"
, 5: "Divorced/Separated"
, 6: "Divorced/Separated"
, 7: "Never Married"
, 8: "Living with Partner"
, 9: "Never Married"}
df['MARITAL_STATUS']=[ marital_map.get(x, "Never Married") for x in samadult['R_MARITL']]
# Functional limitation score
fl_columns=['FLWALK','FLCLIMB','FLSTAND','FLSIT','FLSTOOP','FLREACH','FLGRASP','FLCARRY','FLPUSH']
fl_cols=samadult[fl_columns].copy()
for col in fl_columns:
    # codes >= 6 mean not applicable / refused / unknown; zero out only that column
    fl_cols.loc[fl_cols[col] >= 6, col] = 0
fl_cols['FL_AVG']=fl_cols.mean(axis=1)
fl_cols.loc[fl_cols['FL_AVG'] == 0,'FUNC_LIMIT'] = 'None'
fl_cols.loc[(fl_cols['FL_AVG'] > 0)&(fl_cols['FL_AVG'] <=1),'FUNC_LIMIT'] = 'Low'
fl_cols.loc[(fl_cols['FL_AVG'] > 1)&(fl_cols['FL_AVG'] <=2),'FUNC_LIMIT'] = 'Medium'
fl_cols.loc[fl_cols['FL_AVG'] > 2,'FUNC_LIMIT'] = 'High'
df['FUNC_LIMIT']=fl_cols['FUNC_LIMIT']
del fl_cols
# Social participation restriction
# We defined social participation restriction as
# difficulty or inability to shop, go to events, or participate in
# social activities without special equipment, per previously
# published analyses.
# FLSHOP and FLSOCL
restr_map={1:"Yes", 2:"Yes", 3: "Yes", 4: "Yes"}
social_cols=pd.DataFrame()
social_cols['FLSHOP']=[restr_map.get(x, 'No') for x in samadult['FLSHOP']]
social_cols['FLSOCL']=[restr_map.get(x, 'No') for x in samadult['FLSOCL']]
social_cols.loc[(social_cols['FLSHOP']=='Yes')|(social_cols['FLSOCL']=='Yes'), 'SOC_RESTR']='Yes'
social_cols.loc[(social_cols['FLSHOP']=='No')&(social_cols['FLSOCL']=='No'), 'SOC_RESTR']='No'
df['SOC_RESTR']=social_cols['SOC_RESTR']
#Could not afford mental health care, past 12 months
# AHCAFYR2
# No = 2
# Yes = 1
df['NOT_AFFORD']=[ 'Yes' if x == 1 else 'No' for x in samadult['AHCAFYR2']]
#Seen a mental health professional, past 12 months
# AHCSYR1
#No = 2
#Yes = 1
df['SEEN_MENTAL_DR']=[ 'Yes' if x == 1 else 'No' for x in samadult['AHCSYR1']]
###Output
_____no_output_____
###Markdown
What do we have so far?
###Code
df.head(34)
###Output
_____no_output_____
###Markdown
Now get the Insurance and Poverty Ratio fields from the Family file.
###Code
#From Familyxx get poverty ratio
fam_df=pd.DataFrame()
ratio_map={
1: '<1' # Under 0.50
,2: '<1' # 0.50 - 0.74
,3: '<1' # 0.75 - 0.99
,4: '1 to <2' # 1.00 - 1.24
,5: '1 to <2' # 1.25 - 1.49
,6: '1 to <2' # 1.50 - 1.74
,7: '1 to <2' # 1.75 - 1.99
,8: '>=2' # 2.00 - 2.49
,9: '>=2' # 2.50 - 2.99
,10: '>=2' # 3.00 - 3.49
,11: '>=2' # 3.50 - 3.99
,12: '>=2' # 4.00 - 4.49
,13: '>=2' # 4.50 - 4.99
,14: '>=2' # 5.00 and over
,15: '<1' # Less than 1.00 (no further detail)
,16: '1 to <2' # 1.00 - 1.99 (no further detail)
,17: '>=2' # 2.00 and over (no further detail)
,96: '1 to <2' # Undefinable
,99: '1 to <2' # Unknown
}
fam_df['POV_RATIO']=[ratio_map.get(x, None) for x in family['RAT_CAT4']]
# Just going to go for Yes and No and any unknown/refused as No
# Health insurance
#Any private
#Public only
# Not covered
# FHICOVYN
fam_df['INSURANCE']=['Yes' if x == 1 else 'No' for x in family['FHICOVYN']]
###Output
_____no_output_____
###Markdown
This is how you join two datasets in Pandas. To join two data sets in Pandas, you can merge based on key fields. In the NHIS datasets, the key fields that link a person to the family are the Household Key (HHX) and the Family Key (FMX).
###Code
df['HHX']=samadult['HHX']
df['FMX']=samadult['FMX']
fam_df['HHX']=family['HHX']
fam_df['FMX']=family['FMX']
###Output
_____no_output_____
###Markdown
And you then do a merge, with the key columns indicated in the on= parameter and, very importantly, specify that it is a left join so that you don't lose any people if you can't find the family.
###Code
joined_df=pd.merge(df, fam_df, on=['HHX','FMX'],how='left', sort=False)
joined_df.drop(['HHX','FMX'], axis=1,inplace=True )
joined_df.head()
###Output
_____no_output_____
###Markdown
Save the result in the INTERIM data directory
###Code
df=joined_df
df.to_csv(INTERIM_DATA_DIR+'/arthritis_study.csv')
###Output
_____no_output_____ |
landcover_change_application/level3_test.ipynb | ###Markdown
An environmental layers testing framework for the FAO land cover classification system

The purpose of this notebook is to provide an easy-to-use method for testing environmental layers to use for classification and seeing how changes to particular layers affect the final Land Cover Classification. You can easily test with different environmental layer inputs, and different locations. This code defines 5 variables to contain the binary layers required to reach a level 3 classification:

1. **vegetat_veg_cat:** Vegetated / Non-Vegetated
2. **aquatic_wat_cat:** Water / Terrestrial
3. **cultman_agr_cat:** Natural Veg / Crop or Managed Veg
4. **artific_urb_cat:** Natural Surfaces / Artificial Surfaces (bare soil/urban)
5. **artwatr_wat_cat:** Natural water / Artificial water

Whilst this example uses the Open Data Cube to load the required data, the layers can be loaded from anywhere, so long as all input layers cover the same geographic region and are defined in a correctly labelled dataset before being passed to the classification code.
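For data that does not come from an Open Data Cube instance, the sketch below illustrates the expected input structure (this is an illustration only, not part of the original workflow; the grid size, coordinates and values are assumptions):

```python
import numpy as np
import xarray

# Illustrative 2D boolean layers on a shared grid; real layers would be
# derived from your own data and share the same shape and coordinates.
ny, nx = 400, 400
dims = ("y", "x")
classification_data = xarray.Dataset(
    {
        "vegetat_veg_cat": (dims, np.ones((ny, nx), dtype=bool)),
        "aquatic_wat_cat": (dims, np.zeros((ny, nx), dtype=bool)),
        "cultman_agr_cat": (dims, np.zeros((ny, nx), dtype=bool)),
        "artific_urb_cat": (dims, np.zeros((ny, nx), dtype=bool)),
        "artwatr_wat_cat": (dims, np.zeros((ny, nx), dtype=bool)),
    }
)
```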
###Code
import numpy
import xarray
import scipy
from matplotlib import pyplot
from matplotlib import cm
import datacube
from datacube.storage import masking
dc = datacube.Datacube(app="le_lccs")
#import classification script
import lccs_l3
###Output
_____no_output_____
###Markdown
Define details of data to be loaded - area, resolution, crs..
###Code
# Define area of interest
# Ayr
x = (1500000, 1600000)
y = (-2200000, -2100000)
# # Diamentina
#x = (800000, 900000)
#y = (-2800000, -2700000)
# # Gwydir
#x = (1600000, 1700000)
#y = (-3400000, -3300000)
# Leichhardt
#x = (800000, 900000)
#y = (-2000000, -1900000)
# # Barmah-Millewa
#x = (1100000, 1200000)
#y = (-4000000, -3900000)
# # Forescue marshes
#x = (-1200000, -1300000)
#y = (-2500000, -2400000)
# # Snowy
#x = (1400000, 1500000)
#y = (-4100000, -4000000)
res = (-25, 25)
crs = "EPSG:3577"
time = ("2010-01-01", "2010-12-15")
sensor= 'ls5'
query=({'x':x,
'y':y,
'crs':crs,
'resolution':res})
out_filename = "Townsville-2010.tif"
###Output
_____no_output_____
###Markdown
Create environmental layers Presence/Absence of Vegetation INITIAL-LEVEL DISTINCTION * *Primarily Vegetated Areas*: This class applies to areas that have a vegetative cover of at least 4% for at least two months of the year, consisting of Woody (Trees, Shrubs) and/or Herbaceous (Forbs, Graminoids) lifeforms, or at least 25% cover of Lichens/Mosses when other life forms are absent. * *Primarily Non-Vegetated Areas*: Areas which are not primarily vegetated. Here we're using Fractional cover annual percentiles to distinguish between vegetated and not. http://data.auscover.org.au/xwiki/bin/view/Product+pages/Landsat+Fractional+Cover **Creating your own layer**: To use a different veg/non-veg layer, replace the following two cells with code to create a binary layer with vegetated (1) and non-vegetated (0), using the method of choice, and save into `vegetat_veg_cat_ds`
###Code
# Load data from datacube
fc_ann = dc.load(product="fc_percentile_albers_annual",
measurements=["PV_PC_50", "NPV_PC_50", "BS_PC_50"],
time=time, **query)
fc_ann = masking.mask_invalid_data(fc_ann)
# Create binary layer representing vegetated (1) and non-vegetated (0)
#vegetat = ((fc_ann["PV_PC_50"] >= 55) | (fc_ann["NPV_PC_50"] >= 55))
vegetat = (fc_ann["BS_PC_50"] < 40)
# Convert to Dataset and add name
vegetat_veg_cat_ds = vegetat.to_dataset(name="vegetat_veg_cat").squeeze().drop('time')
# # Plot output
# vegetat_veg_cat_ds["vegetat_veg_cat"].plot(figsize=(6, 5))
###Output
_____no_output_____
###Markdown
Aquatic or regularly flooded / Terrestrial SECOND-LEVEL DISTINCTION

This layer breaks the initial veg/non-veg classes into 4 classes based on the presence or absence of water:
* *Primarily vegetated, Terrestrial*: The vegetation is influenced by the edaphic substratum
* *Primarily Non-Vegetated, Terrestrial*: The cover is influenced by the edaphic substratum
* *Primarily vegetated, Aquatic or regularly flooded*: The environment is significantly influenced by the presence of water over extensive periods of time. The water is the dominant factor determining natural soil development and the type of plant communities living on its surface
* *Primarily Non-Vegetated, Aquatic or regularly flooded*

Here we're using a Water Observations from Space (WOfS) annual summary to separate terrestrial areas from aquatic. We're using a threshold of 20% to rule out one-off flood events. [WOfS](https://doi.org/10.1016/j.rse.2015.11.003)

**Creating your own layer**: To use a different aquatic/terrestrial layer, replace the following two cells with code to create a binary layer with aquatic (1) and terrestrial (0), using the method of choice, and save it into `aquatic_wat_cat_ds`
###Code
# Load data from datacube
wofs_ann = dc.load(product="wofs_annual_summary", measurements=["frequency"],
time=time, **query)
wofs_ann = masking.mask_invalid_data(wofs_ann)
# Create binary layer representing aquatic (1) and terrestrial (0)
aquatic_wat = ((wofs_ann["frequency"] >= 0.2))
# Convert to Dataset and add name
aquatic_wat_cat_ds = aquatic_wat.to_dataset(name="aquatic_wat_cat").squeeze().drop('time')
# # Plot output
# aquatic_wat_cat_ds["aquatic_wat_cat"].plot(figsize=(6, 5))
###Output
_____no_output_____
###Markdown
cultman_agr_cat TERTIARY-LEVEL DISTINCTION

This layer breaks the initial terrestrial and aquatic, vegetated categories into either cultivated/managed or (semi-)natural:
* *Primarily vegetated, Terrestrial, Artificial/Managed*: Cultivated and Managed Terrestrial Areas
* *Primarily vegetated, Terrestrial, (Semi-)natural*: Natural and Semi-Natural Vegetation
* *Primarily vegetated, Aquatic or Regularly Flooded, Artificial/Managed*: Cultivated Aquatic or Regularly Flooded Areas
* *Primarily vegetated, Aquatic or Regularly Flooded, (Semi-)natural*: Natural and Semi-Natural Aquatic or Regularly Flooded Vegetation

Here we're using the Median Absolute Deviation (MAD) to distinguish between natural and cultivated areas. This looks to be an interesting option, but more investigation is required to see if we can get a reliable, robust layer using this.
###Code
# Load data from datacube
ls8_mads = dc.load(product=sensor +"_nbart_tmad_annual", time=time, **query)
ls8_mads = masking.mask_invalid_data(ls8_mads)
# Create binary layer representing cultivated (1) and natural (0)
cultman = ((ls8_mads["edev"] > 0.115))
# Convert to Dataset and add name
cultman_agr_cat_ds = cultman.to_dataset(name="cultman_agr_cat").squeeze().drop('time')
# # Plot output
# cultman_agr_cat_ds["cultman_agr_cat"].plot(figsize=(6, 5))
###Output
_____no_output_____
###Markdown
artific_urb_cat

This layer breaks the initial terrestrial, non-vegetated category into either artificial surfaces or bare areas:
* *Primarily non-vegetated, Terrestrial, Artificial/managed*: Areas that have an artificial cover as a result of human activities such as construction, extraction or waste disposal
* *Primarily non-vegetated, Terrestrial, (Semi-)natural*: Bare areas that do not have an artificial cover as a result of human activities. These areas include areas with less than 4% vegetative cover. Included are bare rock areas, sands and deserts

Here we've used the Normalized Difference Built-up Index (NDBI) to distinguish urban from bare soil. It doesn't do a great job and has issues classifying correctly in bright bare areas.
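For reference, the built-up index is usually written as $NDBI = (SWIR - NIR) / (SWIR + NIR)$. Note that the cell below computes the negated form, $(NIR - SWIR) / (NIR + SWIR)$, so the `< 0` threshold used there corresponds to built-up (urban) pixels.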
###Code
# Load data
ls8_gm = dc.load(product= sensor + "_nbart_geomedian_annual", time=time, **query)
ls8_gm = masking.mask_invalid_data(ls8_gm).squeeze().drop('time')
# Calculate ndvi
ndvi = ((ls8_gm.nir - ls8_gm.red) / (ls8_gm.nir + ls8_gm.red))
# Calculate NDBI
NDBI = ((ls8_gm.nir - ls8_gm.swir1) / (ls8_gm.nir + ls8_gm.swir1))
# Create binary layer representing urban (1) and baresoil (0)
urban = (NDBI.where(ndvi<0.15) < 0)
# Convert to Dataset and add name
artific_urb_cat = urban.to_dataset(name="artific_urb_cat")
# # Plot output
# artific_urb_cat["artific_urb_cat"].plot(figsize=(6, 5))
###Output
_____no_output_____
###Markdown
artwatr_wat_cat

This layer breaks the initial aquatic, non-vegetated category into either artificial water bodies or natural ones:
* *Primarily non-vegetated, Aquatic or Regularly Flooded, Artificial/managed*: areas that are covered by water due to the construction of artefacts such as reservoirs, canals, artificial lakes, etc.
* *Primarily non-vegetated, Aquatic or Regularly Flooded, (Semi-)natural*: areas that are naturally covered by water, such as lakes, rivers, snow or ice

As differentiating between natural and artificial waterbodies using only satellite imagery is extremely difficult, here we use a static layer. The Australian Hydrological Geospatial Fabric (Geofabric) is a dataset of hydrological features derived from manually interpreted topographic map grids. It classifies the land in terms of: 0: Unclassified, 1: CanalArea, 2: Flat, 3: ForeshoreFlat, 4: PondageArea, 5: RapidArea, 6: WatercourseArea, 7: Lake, 8: Reservoir, 9: Swamp. Here, CanalArea & Reservoir are used to define artificial water.
###Code
# Load data
geofab = dc.load(product="geofabric",measurements=["band1"], **query)
geofab = geofab.squeeze().drop('time')
# # Plot data
# geofab.band1.plot.imshow(cmap="nipy_spectral")
# Create binary layer representing artificial water (1) and natural water (0)
artwatr_wat_cat_ds = ((geofab["band1"] == 1) | (geofab["band1"] == 8))
# Convert to Dataset and add name
artwatr_wat_cat_ds = artwatr_wat_cat_ds.to_dataset(name="artwatr_wat_cat")
# # Plot output
# artwatr_wat_cat_ds["artwatr_wat_cat"].plot(figsize=(5, 5))
###Output
_____no_output_____
###Markdown
Collect environmental variables into array for passing to classification system
###Code
variables_xarray_list = []
variables_xarray_list.append(artwatr_wat_cat_ds)
variables_xarray_list.append(aquatic_wat_cat_ds)
variables_xarray_list.append(vegetat_veg_cat_ds)
variables_xarray_list.append(cultman_agr_cat)
variables_xarray_list.append(artific_urb_cat)
###Output
_____no_output_____
###Markdown
Classification The LCCS classification is hierarchical. The 8 classes are shown below.

| Class name | Code | Numeric code |
|-----|-----|-----|
| Cultivated Terrestrial Vegetated | A11 | 111 |
| Natural Terrestrial Vegetated | A12 | 112 |
| Cultivated Aquatic Vegetated | A23 | 123 |
| Natural Aquatic Vegetated | A24 | 124 |
| Artificial Surface | B15 | 215 |
| Natural Surface | B16 | 216 |
| Artificial Water | B27 | 227 |
| Natural Water | B28 | 228 |
###Code
# Merge to a single dataframe
classification_data = xarray.merge(variables_xarray_list)
#classification_data
# Apply Level 3 classification using separate function. Works through in three stages
level1, level2, level3 = lccs_l3.classify_lccs_level3(classification_data)
# Save classification values back to xarray
out_class_xarray = xarray.Dataset(
{"level1" : (classification_data["vegetat_veg_cat"].dims, level1),
"level2" : (classification_data["vegetat_veg_cat"].dims, level2),
"level3" : (classification_data["vegetat_veg_cat"].dims, level3)})
classification_data = xarray.merge([classification_data, out_class_xarray])
col_level2 = cm.get_cmap("Set1", 2)
# classification_data.level2.plot(cmap=(col_level2))
# print("level 1:",numpy.unique(classification_data.level1))
# print("level 2:",numpy.unique(classification_data.level2))
# print("level 3:",numpy.unique(classification_data.level3))
#To check the results for level 3 use colour_lccs_level3 to get the colour scheme.
pyplot.figure(figsize=(10, 10))
red, green, blue, alpha = lccs_l3.colour_lccs_level3(level3)
pyplot.imshow(numpy.dstack([red, green, blue, alpha]))
###Output
_____no_output_____
###Markdown
Save results to geotiff
###Code
import gdal
def array_to_geotiff(fname, data, geo_transform, projection,
nodata_val=0, dtype=gdal.GDT_Int16):
# Set up driver
driver = gdal.GetDriverByName('GTiff')
# Create raster of given size and projection
rows, cols = data.shape
dataset = driver.Create(fname, cols, rows, 1, dtype)
dataset.SetGeoTransform(geo_transform)
dataset.SetProjection(projection)
# Write data to array and set nodata values
band = dataset.GetRasterBand(1)
band.WriteArray(data)
band.SetNoDataValue(nodata_val)
# Close file
dataset = None
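# Hedged usage sketch (not executed in the original notebook): writing the
# level 3 classification to `out_filename`. It assumes the datacube-loaded
# data expose a geobox with an affine transform and CRS WKT, which is usually
# true for Open Data Cube loads but should be checked for other sources.
# geo_transform = fc_ann.geobox.transform.to_gdal()
# projection = fc_ann.geobox.crs.wkt
# array_to_geotiff(out_filename, level3, geo_transform, projection)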
###Output
_____no_output_____ |
motif/final_project.ipynb | ###Markdown
Introduction

Our work seeks to curate audio features to train a music genre classifier. Such a classifier would be able to take in a set of audio features for a song and accurately determine the genre of that song--a task that is accomplished by most humans with minimal background in music. There are a number of difficulties in such a problem, not limited to the definition of "genre" and selecting appropriate audio to train the model.

Motivation

It is a somewhat simple task for a trained musician or musicologist to listen to a work of music and label its genre. What do we need to help a computer complete the same task? Questions we want to answer:
1. What features of music make it a part of its genre?
2. Is genre classification a problem well-suited to classical machine learning?

We hypothesize that the MFCC coefficients will be important, because others doing genre classification have found them important, at least in deep learning models. We think that taking the mean and variance of the coefficients for each song will retain the most important information while making the problem tractable. We would note that one difficulty related to this task relates to how we define genres. It is a very abstract and subjective question, and the lines between genres are blurry at best. Thus, any machine learning genre classifier will be subject to the issue of vague class divisions depending on who labelled the data and what metric they used.

Related Work

There have been many studies in the area of genre classification in machine learning. Traditionally, models have used learning algorithms such as SVM and KNN and have relied heavily on common spectral features including the MFCCs (1). The state of the art has improved over time, with most classical machine learning classifiers managing 60-70% accuracy. This is similar to human capabilities with short song intervals, according to some human trials (2). In more recent years, neural networks have been able to make more accurate predictions, near 80-90% accuracy in some cases.

Data

Our data comes from the Free Music Archive (https://github.com/mdeff/fma) created by Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, Xavier Bresson. International Society for Music Information Retrieval Conference (ISMIR), 2017. We use the audio files and genre tags, but build our own features. We also use the small data set composed of 8000 30-second songs (8 GB in `.mp3` format). We convert each file to a `.wav` for simplicity. Each song is designated by a `track_id` and labeled with one of eight genres: Hip-Hop, Pop, Folk, Experimental, Rock, International, Electronic, and Instrumental. The songs are distributed evenly across genres with 1000 songs per genre.

Potential Issues

One potential issue with our data is that the dataset is composed entirely of free music (Creative Commons), and therefore our model may have difficulty analyzing other kinds of music, which may be quite different. Specifically, we have reason to believe that the genre definitions, quality, and style of a free music database may differ from commercial music, so a future step could be finding a way to evaluate how well a model trained on a free music database can generalize to samples of commercial music.

Missing Data

The dataset is fairly robust, but of the 8000 tracks, there are 6 that are only a few seconds long. We ignore these tracks in our analysis, since our algorithms for feature extraction depend on the songs being of a certain length in order to be accurate.
Ethical Concerns and Implications

The music used in our work comes from the Creative Commons and is licensed for this kind of use. We see no privacy concerns with the collection of this data. As music genre does not make a serious impact on the commercialization of music or the daily lives of non-musicians, we do not anticipate any negative repercussions from our work. The lines around genre are vague enough to ensure that professors of music theory and music history need not worry that they shall be out of a job.

Feature Engineering

Since our original data was made up only of track IDs corresponding to wav files, and their genre labels, our feature extraction makes up all of our useful data. We created a dataframe that has the following features as its columns. In the next section, we discuss the meaning of each added feature column.

Feature Descriptions and Reasoning

**Track ID**: Each wav file corresponds to a number, and we have a function that generates the file path to access each track if needed.

**Genre Code**: We have encoded our eight genres by a 1:1 mapping to integers 0-7.

**Mel Frequency Cepstral Coefficients (MFCCs)**: Represent the short term power spectrum of the sound and align closely with the human auditory system's reception of sound. These 30 coefficients describe the sound of a song in a human way. MFCCs are being used more and more in Music Information Retrieval, specifically for genre tasks, because they encapsulate the human experience of sound. We feel this will improve accuracy.

**Zero Crossing Rate**: Indicates the average rate at which the sign of the signal changes. Higher zero crossing rates match with higher percussiveness in the song. We added this feature because genres often have a certain feel relative to beat and percussive sound.

**Frequency Range**: The max and min frequency of the audio, ignoring the top 20% and bottom 20%. Clipping the top and bottom was important because almost all of our audio files go from 10 Hz to 10000 Hz, but seeing the range where most of the sound of a song sits seems to be connected to genre. Some genres have greater ranges while others stay in a small range.

**Key and Tonality**: We used the Krumhansl-Schmuckler algorithm to estimate the most likely key that the audio sample is in, and whether the key is major or minor. We chose this because even though most genres have songs in different keys, knowing the key will aid in normalizing pitch information for other features.

**Spectral Rolloff**: The frequency below which a certain percent of the total spectral energy (pitches) is contained. When audio signals are noisy, the highest and lowest pitches present do not convey much information. What is more useful is knowing the frequency range that 99% of the signal is contained in, which is what the spectral rolloff represents.

**The Three Highest Tempo Autocorrelation Peaks**: Indicative of what we would guess the average BPM will be for this audio file (3 columns). This is a way of summing up the entire tempogram array in just a few numbers so that comparing tempo features for each track is tractable.

**Average Tonnetz over all Time**: The mean and variance of the x and y dimensions of the tonal centers for the major and minor thirds, as well as the fifths (this ends up being 6 means and 6 variances for a total of 12 columns). Here we take the means and variances to reduce the information down from a 6xt matrix (where t is the number of time values, about 1200) to just 12 numbers that sum up that matrix for each track.
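As an illustration of how several of the features above can be computed, here is a hedged sketch using librosa. This is not the project's actual extraction code; the summary statistics and parameter choices are assumptions for illustration:

```python
import librosa
import numpy as np

def sketch_features(path):
    # load the audio as a mono time series
    y, sr = librosa.load(path, mono=True)
    # 30 MFCCs, zero crossing rate, and 99% spectral rolloff per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=30)
    zcr = librosa.feature.zero_crossing_rate(y)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.99)
    # summarise each frame-wise feature over time, as described above
    return {
        "mfcc_mean": mfcc.mean(axis=1),
        "mfcc_var": mfcc.var(axis=1),
        "zcr": zcr.mean(),
        "rolloff_mean": rolloff.mean(),
    }
```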
We have included the following code as an example of our feature engineering; we defined a lot of functions for our feature engineering that we don't have space here to include. The full code can be found at https://github.com/clarkedb/motif and in our supplementary files.

```python
# coefficients from: http://rnhart.net/articles/key-finding/
major_coeffs = la.circulant(
    stats.zscore(
        np.array(
            [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
        )
    )
).T
minor_coeffs = la.circulant(
    stats.zscore(
        np.array(
            [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
        )
    )
).T

def find_key(y: np.ndarray, sr: int) -> Tuple[bool, int]:
    """
    Estimate the major or minor key of the input audio sample
    :param y: np.ndarray [shape=(n,)] Audio time series
    :param sr: number > 0 Sampling rate of y
    :return: (bool, int) Whether the sample is in a major key (as opposed to a minor key), and the key of the audio sample
    """
    # compute the chromagram of the audio sample
    chroma_cq = librosa.feature.chroma_cqt(y=y, sr=sr)
    # find the average of each pitch over the entire audio sample
    average_pitch = chroma_cq.mean(axis=1)
    # Krumhansl-Schmuckler algorithm (key estimation)
    x = stats.zscore(average_pitch)
    major_corr, minor_corr = major_coeffs.dot(x), minor_coeffs.dot(x)
    major_key, minor_key = major_corr.argmax(), minor_corr.argmax()
    # determine if the key is major or minor
    is_major = major_corr[major_key] > minor_corr[minor_key]
    return is_major, major_key if is_major else minor_key
```

Visualization and Analysis

Visualization
###Code
genres = [
"Hip-Hop",
"Pop",
"Folk",
"Experimental",
"Rock",
"International",
"Electronic",
"Instrumental",
]
df = pd.read_csv('../data/features.csv', header=0)
df['genre'] = df.genre_code.apply(lambda x : genres[x])
df[df.genre.isin(['Electronic', 'Experimental', 'Folk'])][['zcr', 'genre']].groupby('genre').boxplot(column='zcr', grid=False, layout=(1,3), figsize=(11,3))
plt.suptitle('Zero Crossing Rate Distribution by Genre', y=1.1)
plt.show()
###Output
_____no_output_____
###Markdown
These boxplots show the Zero Crossing Rate distribution by genre. ZCR is usually thought of as a good measure to include when doing a genre analysis because it conveys something of the percussiveness of the song. We see that the distributions differ enough to justify including it, but the differences are more drastic for some genres than for others.
###Code
fig, ax = plt.subplots(1, 2)
df.plot(ax=ax[0], kind='hexbin', x='max_freq', y='rolloff_mean', gridsize=25, figsize=(7, 5), cmap='Blues', sharex=False)
ax[0].set_title('Max Frequency and Spectral Rolloff Mean')
rolloff_mean = df["rolloff_mean"]
ax[1].boxplot(np.array([
rolloff_mean[df["genre_code"] == i] for i in range(len(genres))
], dtype=object), labels=genres, showfliers=False)
ax[1].set_title("Mean of Spectral Rolloff")
ax[1].set_ylabel("Mean")
ax[1].set_xticklabels(labels=genres, rotation=45)
fig.set_size_inches((10, 4))
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The hexbin plot (left) compares the max frequency and the spectral rolloff mean. Because the spectral rolloff mean is the average of the frequency below which 99% of each time frame's spectral energy lies, it makes sense that it may be redundant information or collinear with max_frequency. A couple of things to note from the mean of spectral rolloff plot (right) are the distributions of the mean spectral rolloff of experimental and instrumental music, which tend to be skewed lower than for other genres. Note that we omitted outliers from the boxplot.
###Code
mfcc_cols = [f'mfcc{i}' for i in range(1,4)]
mfcc_by_genre = df[mfcc_cols + ['genre']].groupby('genre')
fig, axes = plt.subplots(1, 2, figsize=(10, 3))
mfcc_by_genre.mean().transpose().plot(ax=axes[0])
axes[0].set_title('Mean of First 3 MFCCs by Genre')
axes[0].get_legend().remove()
mfcc_by_genre.var().transpose().plot(ax=axes[1])
axes[1].set_title('Variance of First 3 MFCCs by Genre')
axes[1].legend(title='Genre', loc='center left', bbox_to_anchor=(1.0, 0.5))
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Above, we plot only the first three MFCCs by genre. The first MFCC was fairly distinct for each genre with a high variance. However, the higher MFCCs have almost no variance and a very similar mean for each genre. We conclude that the earlier MFCCs are more important for classification.
###Code
# Load the data and get the labels
data = pd.read_csv('./../data/features.csv', index_col=0)
# Save the genre labels
genre_labels = ["Hip-Hop", "Pop", "Folk", "Experimental", "Rock", "International", "Electronic", "Instrumental"]
tonnetz_labels = ['Fifth x-axis', 'Fifth y-axis', 'Minor Third x-axis', 'Minor Third y-axis', 'Major Third x-axis', 'Major Third y-axis']
# Get the tonnetz features in their own dataframe and group by genre
tonnetz_features = data[['genre_code', 'tonnetz1', 'tonnetz2', 'tonnetz3', 'tonnetz4', 'tonnetz5', 'tonnetz6', 'tonnetz7', 'tonnetz8', 'tonnetz9', 'tonnetz10', 'tonnetz11', 'tonnetz12']]
group = tonnetz_features.groupby('genre_code')
# Make some bar plots
fig, ax = plt.subplots(2, 1)
group.mean()['tonnetz' + str(5)].plot(kind='barh', ax=ax.reshape(-1)[0])
ax.reshape(-1)[0].set_yticklabels(genre_labels)
ax.reshape(-1)[0].set_xlabel('Mean Tonal Center')
ax.reshape(-1)[0].set_ylabel('')
ax.reshape(-1)[0].set_title(str(tonnetz_labels[2]))
group.mean()['tonnetz' + str(9)].plot(kind='barh', ax=ax.reshape(-1)[1])
ax.reshape(-1)[1].set_yticklabels(genre_labels)
ax.reshape(-1)[1].set_xlabel('Mean Tonal Center')
ax.reshape(-1)[1].set_ylabel('')
ax.reshape(-1)[1].set_title(str(tonnetz_labels[4]))
plt.suptitle('Mean of Tonnetz Data by Genre\n')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
For each tonnetz dimension, we calculated the mean and variance of the x and y directions for that tonal center for each song. Above are the plots of the averages of two of those means across each genre. We show plots of the major and minor third x-axis means, and much of the other data behaves similarly. Which genres come out positive and which negative changes from one tonal center to the next, indicating that the mean tonal center data could be useful in making decisions between genres.
###Code
genre_labels = ["Hip-Hop", "Pop", "Folk", "Experimental", "Rock", "International", "Electronic", "Instrumental"]
data = pd.read_csv('./../data/features.csv', index_col=0)
tempo_features = data['tempo1']
plt.boxplot(np.array([
tempo_features[data['genre_code'] == i] for i in range(len(genre_labels))
], dtype=object), labels=genre_labels, showfliers=False)
plt.xticks(rotation=45)
plt.title('Tempo Estimates by Genre')
plt.show()
###Output
_____no_output_____
###Markdown
The tempo estimates are all somewhat similar in shape, in that all are skewed toward the lower end of the tempo ranges and all have outliers in the higher tempo ranges. We do see, however, that electronic and hip-hop songs appear to have a stronger clustering of tempo estimates at the lower/slower end of the spectrum, which could indicate that the tempo data may be useful for classification. We note that we are ignoring the outliers to focus more on the distribution of the tempo estimates; some of the outliers had values as high as 1200. That may indicate that the algorithm failed to pick out a tempo for these songs, or that some of the experimental music doesn't have a tempo.
###Code
scree_plot()
###Output
_____no_output_____
###Markdown
Using principal component analysis, we see that most of the variation in our features (90%) is explained by about 20 components. There is a strong dropoff in the amount of variance explained by each individual component after about the fourth component, seen in the scree plot (orange). Because we only had about 30 features, we decided to use the original features in our models, rather than the principal components.

Models

We trained each of the models we learned in class on our engineered features; the results are below. We have also included the code for our random forest model, which we found performed the best.

```python
def random_forest(
    filename="../data/features.csv",
    test_size=0.3,
    plot_matrix=False,
    normalize=True,
    print_feature_importance=False,
):
    df = pd.read_csv(filename, index_col=0)
    x = preprocessing.scale(df.drop(["track_id", "genre_code"], axis=1))
    y = df["genre_code"]
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size, stratify=y)
    params = {"n_estimators": 1000}
    clf = RandomForestClassifier()
    clf.set_params(**params)
    clf.fit(x_train, y_train)
    if print_feature_importance:
        # get feature importance
        features = df.drop(["track_id", "genre_code"], axis=1).columns
        imp = clf.feature_importances_
        sorted_features = np.argsort(imp)
        print("Most-Important:", [features[i] for i in sorted_features[-3:]])
        print("Least-Important:", [features[i] for i in sorted_features[:3]])
    predictions = clf.predict(x_test)
    print(
        "RF Accuracy:",
        (len(y_test) - np.count_nonzero(predictions - y_test)) / len(y_test),
    )
    if plot_matrix:
        plot_confusion_matrix(y_test, predictions, genres, normalize=normalize, title="Random Forest Confusion Matrix")
    return clf
```

Table of Accuracy

| Model | Accuracy |
|-------|----------|
|Logistic Regression |44% |
|XGBoost |49% |
|Random Forest |53% |
|Multilayer Perceptron|43% |
|K-nearest Neighbors |40% |

Among the models we trained on the features, XGBoost and random forests (with around 1000 trees) had the highest accuracy. The confusion matrix below tells us that pop is misidentified most of the time, whereas hip-hop is classified correctly the majority of the time. We can conclude that even though the overall accuracy is low, this is largely due to a couple of genres.
###Code
# random forest
plt.rcParams['figure.figsize'] = 8, 5
a = random_forest(plot_matrix=True);
###Output
_____no_output_____ |
AKosir-OPvTK-Lec08_ProbabilityTeoryStats_SLO.ipynb | ###Markdown
8. Elements of Probability Theory and Statistics. Andrej Košir, Lucami, FE. Contact: prof. dr. Andrej Košir, [email protected], skype=akosir_sid 1

8. Elements of Probability Theory and Statistics. Objectives ■ Objective, contents
- Objective:
  - Learn / review the basics of probability theory as needed for optimization in telecommunications (TC)
  - Learn the basics of modelling with random variables
- We need this for:
  - Experiments with users
  - Markov chains
  - Time series – models for TC traffic
  - Queueing systems
2

8. Elements of Probability Theory and Statistics. Objectives ■ Chapters
8.1 Introduction ■ History of the description of probability ■ Intuitive introduction – a voltage-measurement example ■ Different ways of introducing probability
8.2 Probability space, random variables ■ Probability space and probability ■ Random variables ■ Distribution and probability density ■ Independence of events, operations with events ■ Conditional probability and Bayes' formula ■ Moments – expectation and variance ■ Sequences of random variables ■ Important distributions ■ The central limit theorem
8.3 Hypothesis testing ■ The problem: is a difference due to chance ■ Null hypothesis, p-value ■ Significance level $\alpha$ ■ Errors ■ Determining the sample size
8.4 Relations among data, correlation and data dimensionality ■ The problem: when are two data series related ■ Correlation ■ Dimensionality of data
3

8. Elements of Probability Theory and Statistics. 8.1 Introduction ■ History of the description of probability ■ Intuitive introduction – a voltage-measurement example ■ Different ways of introducing probability 4

8. Elements of Probability Theory and Statistics. 8.1 Introduction ■ History of the description of probability
- 17th century: dice do not fall as expected (B. Pascal, P. Fermat, C. de Méré)
- Three dice: how likely is a total of 11 and how likely is a total of 12:
  - the same number of combinations
  - but experiments say otherwise

| $S=11$ | $S=12$ |
|---|---|
| $146$ | $156$ |
| $236$ | $246$ |
| $155$ | $336$ |
| $245$ | $246$ |
| $335$ | $345$ |
| $443$ | $354$ |

- Conclusion: independence of events:
  - $444$ falls fewer times than $156$
  - Definition: two events are independent if $P[AB]=P[A]P[B]$
  - C. de Méré discovered the statistical definition of probability
5

8. Elements of Probability Theory and Statistics. 8.1 Introduction ■ Intuitive introduction – a voltage-measurement example
- We measure a constant voltage; noise is added to each measurement
- Steps:
  1. Repeated measurements: $\{1.92, 2.03, \ldots\}$;
  2. Measurement model: $$ u_i = u_0 + \varepsilon_i $$
  3. Histogram, relative frequencies
  4. Random variable $U$ and its realization $u_i$
  5. Probability density: how the random variable behaves
  6. Event: a condition on the realizations of the random variable: $$ u \in [1.93, 2.081] $$
  7. Probability of the event: $$ P(U \in [1.93, 2.081])=0.61; $$
  8. Distribution function $$ F_U (u) = P[U \leq u], $$ probability density: $$ p_U(u) = \frac{d F_U(u)}{d u}, $$ which fits the histogram;
6

8. Elements of Probability Theory and Statistics. 8.1 Introduction ■ Different ways of introducing probability
- Statistical definition of probability: $$ P[A] = \frac{n_k}{n} $$
  - behind it is the law of large numbers
- Geometric definition of probability: $$ P[A] = \frac{m(A)}{m(G)} $$
  - a suitable basis for the mathematical definition
  - the Monte Carlo method
- The mathematical (axiomatic) introduction is universal: events are subsets
7

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables
8.2 Probability space, random variables ■ Probability space and probability ■ Random variables ■ Distribution and probability density ■ Independence of events, operations with events ■ Conditional probability and Bayes' formula ■ Moments – expectation and variance ■ Sequences of random variables ■ Important distributions ■ The central limit theorem 8

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Probability space and probability
- Probability (sample) space $G$
- An event $A\subset G$, $A\in \cal G$
- The family of events $\cal G$:
  - the certain event: $$ G\in \cal G $$
  - complement: $$ A\in {\cal G} \Rightarrow A^c \in \cal G $$
  - union: $$ A_i\in\cal G \Rightarrow \cup_{i=1}^n A_i\in \cal G $$
- Probability: $P: \cal G \to [0,1]$
  - additivity (for pairwise disjoint events): $$ P\left(\cup_{i=1}^n A_i\right) = \sum_{i=1}^n P(A_i) $$
  - it holds that $$ P(G)=1, P(\emptyset) = 0 $$
  - it holds that $$ P(A^c) = 1 - P(A) $$
9

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Random variables
- A random variable is a map $$ X:\cal G \to \mathbb{R} $$
  - requirement (measurability): $$ X^{-1}([a,b)) = [X\in [a,b)] \in\cal G $$
- Continuous and discrete random variables
  - continuous: a measured voltage
  - discrete: a user event
- Realizations of a random variable and their histogram
10

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Distribution and probability density
- A random variable, its histogram, and the generalization of the histogram
- Distribution function: $$ F_X(x) = P[X\leq x] $$
  - it holds that $$ P[a \leq X \leq b] = F_X(b) - F_X(a) $$
- Probability density:
  - continuous: $$ P[a \leq X \leq b] = \int_a^b p_X(x) dx $$
  - discrete: $$ P[a \leq X \leq b] = \sum_{k\in\{a,\ldots, b\}} p_k $$
- Examples of continuous distributions: normal (Gaussian), chi-squared
- Examples of discrete distributions: Bernoulli, Poisson
11

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Independence of events, operations with events
- Events $A,B\in \cal G$ are independent if $$ P[A B] = P[A] P[B] $$
- This matches the intuitive definition:
  - the events occur independently, so their probabilities do not "disturb" each other even when both occur at the same time
- Operations with events:
  - $A$ or $B$ is $A\cup B$
  - $A$ and $B$ is $A\cap B = A B$
  - it holds that $$ P[A\cup B] = P[A] + P[B] - P[A B] $$
12

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Conditional probability and Bayes' formula
- Conditional probability: if $P[B] > 0$, then $$ P[A|_B] = \frac{P[AB]}{P[B]} $$
- It holds that $$ P[A_1 A_2 \cdots A_n] = P[A_1] P[A_2|_{A_1}] P[A_3|_{A_1 A_2}] \cdots P[A_n|_{A_1 \cdots A_{n-1}}] $$
- The law of total probability:
  - a complete system of events (hypotheses): $\{H_1, \ldots H_n\}$
  - the formula: $$ P[A] = \sum_{i=1}^n P[A|_{H_i}] P[H_i] $$
- Bayes' formula: $$ P[H_{k}|_{A}] = \frac{P[A|_{H_k}] P[H_k]}{\sum_{i=1}^n P[A|_{H_i}] P[H_i]} $$
13
###Code
# -*- coding: utf-8 -*-
"""
@author: andrejk
"""
"""
A = User will churn (leave the provider)
Hypotheses
H1 = Costs
H2 = Service Quality
H3 = Other
"""
# Hypotheses and conditionals
# Probabilities of hypothese - based on real data
Pr_H1 = 0.6
Pr_H2 = 0.3
Pr_H3 = 0.1
# Conditional probabilities
Pr_AH1 = 0.03
Pr_AH2 = 0.01
Pr_AH3 = 0.02
# Total probability of event A
Pr_A = Pr_AH1*Pr_H1 + Pr_AH2*Pr_H2 + Pr_AH3*Pr_H3
# Conditionals - aposteriories
Pr_H1A = Pr_AH1*Pr_H1/Pr_A
Pr_H2A = Pr_AH2*Pr_H2/Pr_A
Pr_H3A = Pr_AH3*Pr_H3/Pr_A
# Report
print ('Probability of A:', Pr_A)
print ('Probability of H1 at A:', Pr_H1A)
print ('Probability of H2 at A:', Pr_H2A)
print ('Probability of H3 at A:', Pr_H3A)
###Output
Probability of A: 0.023
Probability of H1 at A: 0.7826086956521738
Probability of H2 at A: 0.13043478260869565
Probability of H3 at A: 0.08695652173913043
###Markdown
8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Moments – expectation and variance
- Expectation:
  - continuous distribution: $$ E(X) = \int_{-\infty}^\infty x p_X(x) dx $$
  - discrete distribution: $$ E(X) = \sum_{k} k p_k $$
- Moments: the $k$-th moment about $a$: $$ a_k = E((X-a)^k) $$
- Expectation: the first moment about $0$
- Variance and standard deviation: the variance is the second moment about the expectation, i.e. the second central moment: $$ D(X) = \sigma^2(X) = E((X - E(X))^2) $$
14

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Sequences of random variables
- A sequence of random variables $$ X_1, X_2, \ldots, X_n, \ldots $$
- A random (stochastic) process: the index is time
- Purpose in TC: traffic analysis, traffic prediction, optimization of queueing systems, analysis of user behaviour, ...
15

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ Important distributions
- Bernoulli (binomial): a sequence of discrete events $$ p_k = {n\choose k} p^k (1-p)^{n-k} $$
- Normal: a sum of independent contributions $$ p(x; a, \sigma) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x-a)^2}{2\sigma^2}} $$
- Chi-squared ($\chi^2$): analysis of the independence of events $$ p(x; k) = \begin{cases} \frac{1}{2^{k/2}\Gamma(k/2)} x^{\frac{k}{2}-1} e^{-\frac{x}{2}}, & x\geq 0 \\ 0, & x < 0 \end{cases} $$
- Poisson: the number of independent events per unit of time: $$ p(k; \lambda) = \frac{\lambda^k}{k!} e^{-\lambda} $$
- Exponential: $$ p(t; \lambda) = \begin{cases} \lambda e^{-\lambda t}, & t \geq 0 \\ 0, & t < 0 \end{cases} $$
16

8. Elements of Probability Theory and Statistics. 8.2 Probability space, random variables ■ The central limit theorem
- For a sequence of random variables $X_1, X_2, \ldots$ with equal finite variances $D(X_n) = d$ and partial sums $$ S_n = X_1 + \cdots + X_n, $$ it holds that $$ \frac{S_n - E(S_n)}{\sigma(S_n)} \quad \underset{n\to\infty}{\longrightarrow} \quad N(0,1), $$ where $N(0,1)$ is the standard normal distribution.
- This is the origin of the normal distribution.
- This is how "nature hides the distributions".
17

8. Elements of Probability Theory and Statistics. 8.3 Hypothesis testing
8.3 Hypothesis testing ■ The problem: is a difference due to chance ■ The null hypothesis $H_0$, the p-value ■ The significance level $\alpha$, the decision about $H_0$ ■ Determining the sample size 18

8. Elements of Probability Theory and Statistics. 8.3 Hypothesis testing ■ The problem and the solution
- Problem:
  - the experimental results for the baseline and the improved variant are $0.61$ and $0.63$.
  - Is the difference **due to chance**, or is the **improvement real**?
- Solution:
  - statistical hypothesis testing
19

8. Elements of Probability Theory and Statistics. 8.3 Hypothesis testing ■ The null hypothesis, the p-value
- Hypotheses:
  - the null hypothesis $H_0$ is the default assumption of "no effect"
  - the alternative hypothesis is either its negation or a part of its negation
- The p-value is the probability of obtaining an experimental result at least as far from the null hypothesis as the one observed: $$ p = P[x\;\mbox{at least this far from a valid $H_0$}|_{H_0}] $$ The p-value is thus the probability of **the obtained experimental results (or more extreme ones) under the null hypothesis**
- the decision is based on this probability
- how to compute it: there are statistical tests, which come as packages:
  - a null hypothesis
  - a derived equation for the p-value
  - assumptions / conditions for using the test
  - implementations of the p-value equations are available in several different languages
20

8. Elements of Probability Theory and Statistics. 8.3 Hypothesis testing ■ The significance level $\alpha$ and the decision
- The basic approach to the decision: if the probability of the null hypothesis (the p-value) is too small, we reject it
- We choose a significance level $\alpha$, and $$ p \geq \alpha \qquad\Rightarrow\qquad H_0\;\mbox{is accepted} $$ $$ p < \alpha \qquad\Rightarrow\qquad H_0\;\mbox{is rejected} $$
  - the conclusion may be wrong, and this cannot be avoided
  - the significance level cannot be set to $0$
- The outcome of the decision is analysed as follows:

|  | $\hat{H_0}$ | $\neg\hat{H_0}$ |
|---|---|---|
| $H_0$ | OK | Err. Type I. |
| $\neg H_0$ | Err. Type II. | OK |

- Type I error:
  - we reject the null hypothesis when it actually holds
  - the probability of this error is, perhaps surprisingly, independent of the sample size $n$ and equals the significance level: $$ P(\mbox{Err. Type I.}) = \alpha $$
- Type II error:
  - we accept the null hypothesis when it does not hold
  - the probability of this error depends on the sample size; we denote it $$ P(\mbox{Err. Type II.}) = \beta $$
- The power of the test:
  - the power of the test equals $$ pw = 1 - \beta $$
  - it expresses the sensitivity of the test
21

8. Elements of Probability Theory and Statistics. 8.3 Hypothesis testing ■ Determining the sample size
- The required sample size for a significance test is determined from the fact that the power of the test $pw$ depends on the sample size $n$.
- We also need the **effect size**:
  - a normalized measure of the size of the deviation from the null hypothesis, i.e. of the size of the difference between the tested alternatives
  - it is defined separately for each type of statistical test
- The power of the test $pw\in [0, 1]$ increases with the sample size. The required sample size is therefore determined for
  - a given effect size
  - a required power of the test
- The sample size and the achieved power can, among other tools, be determined with GPower
  - link to the tool: http://www.gpower.hhu.de/en.html
  - an example of the relation between the achieved statistical power and the sample size, computed with GPower, is shown in the following figure
22
###Code
## An example of t-test
import numpy as np
from scipy import stats
## Define random samples: one reference sample and several mean-shifted samples
#Sample Size
N = 30
# Standard deviations
s1 = 1
s2 = 1
s = 1
# Random samples
x1 = s1*np.random.randn(N)
x21 = s2*np.random.randn(N)
x22 = s2*np.random.randn(N) + 0.1*s
x23 = s2*np.random.randn(N) + 0.2*s
x24 = s2*np.random.randn(N) + 0.3*s
x25 = s2*np.random.randn(N) + 0.5*s
x26 = s2*np.random.randn(N) + 0.8*s
x27 = s2*np.random.randn(N) + 1.0*s
x28 = s2*np.random.randn(N) + 2.0*s
## Do the testing
t1, p1 = stats.ttest_ind(x1, x21)
print("P value is: " + str(p1))
t2, p2 = stats.ttest_ind(x1, x22)
print("P value is: " + str(p2))
t3, p3 = stats.ttest_ind(x1, x23)
print("P value is: " + str(p3))
t4, p4 = stats.ttest_ind(x1, x24)
print("P value is: " + str(p4))
t5, p5 = stats.ttest_ind(x1, x25)
print("P value is: " + str(p5))
t6, p6 = stats.ttest_ind(x1, x26)
print("P value is: " + str(p6))
t7, p7 = stats.ttest_ind(x1, x27)
print("P value is: " + str(p7))
t8, p8 = stats.ttest_ind(x1, x28)
print("P value is: " + str(p8))
###Output
P value is: 0.5985137835938157
P value is: 0.5731531797683456
P value is: 0.06090209382704392
P value is: 0.21041376280071786
P value is: 0.004189927025963031
P value is: 0.0020099139043966135
P value is: 0.00011539599918335764
P value is: 8.407636950440684e-11
|
DAY 401 ~ 500/DAY471_[BaekJoon] 내 학점을 구해줘 (Python).ipynb | ###Markdown
Thursday, 2 September 2021 BaekJoon - 내 학점을 구해줘 ("Compute My GPA", Python) Problem: https://www.acmicpc.net/problem/10984 Blog: https://somjang.tistory.com/entry/BaekJoon-10984%EB%B2%88-%EB%82%B4-%ED%95%99%EC%A0%90%EC%9D%84-%EA%B5%AC%ED%95%B4%EC%A4%98-Python Solution
###Code
def get_my_score(grade_score_list):
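    # Each input line holds a credit value followed by a grade point;
    # total_grade accumulates the credits, total_score the credit-weighted grade points.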
total_grade, total_score = 0, 0
for grade_score in grade_score_list:
grade, score = map(float, grade_score.split())
total_grade += grade
total_score += score * grade
total_score = total_score / total_grade
return f"{int(total_grade)} {round(total_score, 2)}"
if __name__ == "__main__":
for _ in range(int(input())):
grade_score_list = []
for _ in range(int(input())):
grade_score = input()
grade_score_list.append(grade_score)
print(get_my_score(grade_score_list))
###Output
_____no_output_____ |
3_find_best_kernel-logged_predictors.ipynb | ###Markdown
Import data
###Code
df = pd.read_csv('outputs/ala1_trials_clean.csv')
df = df.rename(columns={'project_name': 'basis', 'cluster__n_clusters': 'n', 'test_mean': 'y'}).\
loc[:, ['basis', 'y', 'n']]
###Output
_____no_output_____
###Markdown
Scale predictors
###Code
to_log = ['n']
for col in to_log:
df.loc[:, col] = np.log(df[col])
to_scale = ['n']
scaler = preprocessing.MinMaxScaler()
vars_scaled = pd.DataFrame(scaler.fit_transform(df.loc[:, to_scale]), columns=[x+'_s' for x in to_scale])
df = df.join(vars_scaled)
df.T
x = df.loc[df['basis']=='phipsi', 'n_s']
y = df.loc[df['basis']=='phipsi', 'y']
plt.scatter(x, y)
###Output
_____no_output_____
###Markdown
Create design matrix
###Code
y = df.loc[:, 'y']
X = df.loc[:, df.columns.difference(['y'])]
X_c = pt.dmatrix('~ 0 + n_s + C(basis)', data=df, return_type='dataframe')
X_c = X_c.rename(columns=lambda x: re.sub('C|\\(|\\)|\\[|\\]','',x))
###Output
_____no_output_____
###Markdown
Model fitting functions
###Code
def gamma(alpha, beta):
def g(x):
return pm.Gamma(x, alpha=alpha, beta=beta)
return g
def hcauchy(beta):
def g(x):
return pm.HalfCauchy(x, beta=beta)
return g
def fit_model_1(y, X, kernel_type='RBF'):
    """
    function to return a pymc3 model
    y : dependent variable (series; column values are used)
    X : independent variables (dataframe; column names are used)
    kernel_type : one of 'RBF', 'Exponential', 'M52', 'M32'
    The proportion of observations used as inducing variables (prop_Xu) is set inside.
    """
with pm.Model() as model:
# Covert arrays
X_a = X.values
y_a = y.values
X_cols = list(X.columns)
# Globals
prop_Xu = 0.1 # proportion of observations to use as inducing variables
l_prior = gamma(1, 0.05)
eta_prior = hcauchy(2)
sigma_prior = hcauchy(2)
# Kernels
# 3 way interaction
eta = eta_prior('eta')
cov = eta**2
for i in range(X_a.shape[1]):
var_lab = 'l_'+X_cols[i]
if kernel_type=='RBF':
cov = cov*pm.gp.cov.ExpQuad(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
if kernel_type=='Exponential':
cov = cov*pm.gp.cov.Exponential(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
if kernel_type=='M52':
cov = cov*pm.gp.cov.Matern52(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
if kernel_type=='M32':
cov = cov*pm.gp.cov.Matern32(X_a.shape[1], ls=l_prior(var_lab), active_dims=[i])
# Covariance model
cov_tot = cov
# Model
gp = pm.gp.MarginalSparse(cov_func=cov_tot, approx="FITC")
# Noise model
sigma_n =sigma_prior('sigma_n')
# Inducing variables
num_Xu = int(X_a.shape[0]*prop_Xu)
Xu = pm.gp.util.kmeans_inducing_points(num_Xu, X_a)
# Marginal likelihood
y_ = gp.marginal_likelihood('y_', X=X_a, y=y_a,Xu=Xu, noise=sigma_n)
mp = pm.find_MAP()
return gp, mp, model
###Output
_____no_output_____
###Markdown
Main testing loop This will loop through the kernels to get cross-validated MSLL and SMSE. Occasionally a fold won't converge, so the algorithm gets up to three attempts to restart before moving on.
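For reference, the two metrics as implemented in the evaluation cell at the end of this notebook are the standardized mean squared error $$ \mathrm{SMSE} = \frac{1}{N}\sum_{i=1}^{N} \frac{(\hat f_i - y_i)^2}{\operatorname{Var}(y)} $$ and the mean standardized log loss $$ \mathrm{MSLL} = \frac{1}{N}\sum_{i=1}^{N}\Big[ -\log \mathcal{N}\big(y_i \mid \hat f_i, \hat\sigma_i^2\big) + \log \mathcal{N}\big(y_i \mid \bar y, s_y^2\big) \Big], $$ where $\bar y$ and $s_y^2$ are the mean and variance of the targets. Lower is better for both, and a negative MSLL means the model beats the trivial Gaussian predictor.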
###Code
# Inputs
kernels = ['RBF', 'M52', 'M32', 'Exponential' ]
# Outputs
pred_dfs = []
# iterator
max_restarts = 3
for i in range(len(kernels)):
print(kernels[i])
converged = False
n_restarts = 0
while (not converged) and (n_restarts < max_restarts):
# instantiate a new cv-er to ensure folds are different each loop through.
kf = StratifiedKFold(n_splits=10)
# loop through folds
for idx, (train_idx, test_idx) in enumerate(kf.split(X.values, X['basis'])):
print('\tfold: {}'.format(idx))
            # subset dataframes for training and testing
y_train = y.iloc[train_idx]
X_train = X_c.iloc[train_idx, :]
y_test = y.iloc[test_idx]
X_test = X_c.iloc[test_idx, :]
try:
# Fit gp model
gp, mp, model = fit_model_1(y=y_train, X=X_train, kernel_type=kernels[i])
# Get predictions
with model:
# predict latent
mu, var = gp.predict(X_test.values, point=mp, diag=True,pred_noise=False)
sd_f = np.sqrt(var)
# predict target (includes noise)
_, var = gp.predict(X_test.values, point=mp, diag=True,pred_noise=True)
sd_y = np.sqrt(var)
# log results
res = pd.DataFrame({'f_pred': mu, 'sd_f': sd_f, 'sd_y': sd_y, 'y': y_test.values})
res.loc[:, 'kernel'] = kernels[i]
res.loc[:, 'fold_num'] = idx
pred_dfs.append(pd.concat([X_test.reset_index(), res.reset_index()], axis=1))
except:
# break without possibility of reaching convergence
n_restarts += 1
break
# convergence criterion - must have got this far on the last fold:
if idx == kf.n_splits-1:
converged = True
pred_dfs = pd.concat(pred_dfs)
###Output
RBF
fold: 0
###Markdown
Evaluate kernels
###Code
def ll(f_pred, sigma_pred, y_true):
    # negative log predictive density of y_true under a Gaussian N(f_pred, sigma_pred^2)
    tmp = 0.5*np.log(2*np.pi*sigma_pred**2)
    tmp += (f_pred-y_true)**2/(2*sigma_pred**2)
    return tmp
null_mu = np.mean(y)
null_sd = np.std(y)
sll = ll(pred_dfs['f_pred'], pred_dfs['sd_y'], pred_dfs['y'])
sll = sll - ll(null_mu, null_sd, pred_dfs['y'])
pred_dfs['msll'] = sll
pred_dfs['smse'] = (pred_dfs['f_pred']-pred_dfs['y'])**2/np.var(y)
pred_dfs.to_pickle('outputs/kernel_cv_fits_logged.p')
msll = pred_dfs.groupby(['kernel'])['msll'].mean()
smse = pred_dfs.groupby(['kernel'])['smse'].mean()
summary = pd.DataFrame(smse).join(other=pd.DataFrame(msll), on=['kernel'], how='left')
summary.to_csv('outputs/kernel_cv_fits_logged_summary.csv')
summary
###Output
_____no_output_____ |
exercises_pytorch.ipynb | ###Markdown
PAISS Practical Deep-RL by Criteo Research (Pytorch version)
###Code
%pylab inline
from utils import RLEnvironment, RLDebugger
import random
import torch
import torch.optim as optim
import torch.nn as nn
import torch.nn.functional as F
env = RLEnvironment()
print(env.observation_space, env.action_space)
###Output
_____no_output_____
###Markdown
Random agent
###Code
class RandomAgent:
"""The world's simplest agent!"""
def __init__(self, action_space):
self.action_space = action_space
def get_action(self, state):
return self.action_space.sample()
###Output
_____no_output_____
###Markdown
Play loopNote that this Gym environment is considered as solved as soon as you find a policy which scores 200 on average.
###Code
env.run(RandomAgent(env.action_space), episodes=20, display_policy=True)
###Output
_____no_output_____
###Markdown
DQN Agent - OnlineHere is a keras code for training a simple DQN. It is presented first for the sake of clarity. Nevertheless, the trained network is immediatly used to collect the new training data, unless you are lucky you won't be able to find a way to solve the task. Just replace the `???` by some parameters which seems reasonnable to you ($\gamma>1$ is not reasonnable and big steps are prone to numerical instability) and watch the failure of the policy training.
###Code
class Model(nn.Module):
def __init__(self, input_dim, output_dim):
super(Model, self).__init__()
self.fc1 = nn.Linear(input_dim, ???)
self.fc2 = nn.Linear(???, output_dim)
def forward(self, x):
x = F.???(self.fc1(x)) # non-linear activation
return self.fc2(x)
class DQNAgent(RLDebugger):
def __init__(self, observation_space, action_space):
RLDebugger.__init__(self)
# get size of state and action
self.state_size = observation_space.shape[0]
self.action_size = action_space.n
# hyper parameters
self.gamma = ???
self.learning_rate = ???
self.build_model()
self.target_model = self.model
# approximate Q function using Neural Network
# state is input and Q Value of each action is output of network
def build_model(self):
self.model = Model(input_dim=self.state_size, output_dim=self.action_size)
self.optimizer = optim.Adam(self.model.parameters(), lr=self.learning_rate)
self.loss = nn.???Loss()
# 1/ You can try different losses. As an logcosh loss is a twice differenciable approximation of Huber loss
# 2/ From a theoretical perspective Learning rate should decay with time to guarantee convergence
def get_action(self, state):
state = torch.from_numpy(state).float()
q_value = self.model(state).detach().numpy()
best_action = np.argmax(q_value[0]) #The [0] is because keras outputs a set of predictions of size 1
return int(best_action)
# train the target network on the selected action and transition
def train_model(self, action, state, next_state, reward, done):
state = torch.from_numpy(state).float()
next_state = torch.from_numpy(next_state).float()
val = self.model(state)[0][action]
target = self.model(state)
target_val = self.target_model(next_state)
if done: #We are on a terminal state
target[0][action] = reward
else:
target[0][action] = reward + self.gamma * (torch.max(target_val))
# and do the model fit!
self.model.zero_grad()
loss = self.loss(val, target.detach()[0][action])
loss.backward()
self.optimizer.step()
self.record(action, state, target, target_val, loss, reward)
agent = DQNAgent(env.observation_space, env.action_space)
env.run(agent, episodes=500)
agent.plot_loss()
###Output
_____no_output_____
###Markdown
Let's try with a fixed initial position
###Code
agent = DQNAgent(env.observation_space, env.action_space)
env.run(agent, episodes=300, seed=0)
agent.plot_loss()
###Output
_____no_output_____
###Markdown
DQN Agent with ExplorationThis is our first agent which is going to solve the task. It will typically require to run a few hundred of episodes to collect the data. The difference with the previous agent is that you are going to add an exploration mechanism in order to take care of the data collection for the training. We advise to use an $\varepsilon_n$-greedy, meaning that the value of $\varepsilon$ is going to decay over time. Several kind of decays can be found in the litterature, a simple one is to use a mutiplicative update of $\varepsilon$ by a constant smaller than 1 as long as $\varepsilon$ is smaller than a small minimal rate (typically in the range 1%-5%).You need to:* Code your exploration (area are tagged in the code by some TODOs).* Tune the hyperparameters (including the ones from the previous section) in order to solve the task. This may be not so easy and will likely require more than 500 episodes and a final small value of epsilon. Next sessions will be about techniques to increase sample efficiency (i.e require less episodes).
###Code
class DQNAgentWithExploration(DQNAgent):
def __init__(self, observation_space, action_space):
super(DQNAgentWithExploration, self).__init__(observation_space, action_space)
# exploration schedule parameters
self.t = 0
self.epsilon = ??? # Designs the probability of taking a random action.
# Should be in range [0,1]. The closer to 0 the greedier.
# Hint: start close to 1 (exploration) and end close to zero (exploitation).
# decay epsilon
def update_epsilon(self):
# TODO write the code for your decay
self.t += 1
self.epsilon = ???
# get action from model using greedy policy
def get_action(self, state):
# exploration
if random.random() < self.epsilon:
return random.randrange(self.action_size)
state = torch.from_numpy(state).float()
q_value = self.model(state).detach().numpy()
best_action = np.argmax(q_value[0])
return int(best_action)
agent = DQNAgentWithExploration(env.observation_space, env.action_space)
env.run(agent, episodes=500, print_delay=50, seed=0)
agent.plot_state()
###Output
_____no_output_____
###Markdown
DQN Agent with Exploration and Experience ReplayWe are now going to save some samples in a limited memory in order to build minibatches during the training. The exploration policy remains the same than in the previous section. Storage is already coded you just need to modify the tagged section which is about building the mini-batch sent to the optimizer.
###Code
from collections import deque
class DQNAgentWithExplorationAndReplay(DQNAgentWithExploration):
def __init__(self, observation_space, action_space):
super(DQNAgentWithExplorationAndReplay, self).__init__(observation_space, action_space)
self.batch_size = ??? # Recommended value range [10, 1000]
# create replay memory using deque
self.memory = deque(maxlen=10000) # Recommended value range [10, 20000]
def create_minibatch(self):
# pick samples randomly from replay memory (using batch_size)
batch_size = min(self.batch_size, len(self.memory))
samples = random.sample(self.memory, batch_size)
states = np.array([_[0][0] for _ in samples])
actions = np.array([_[1] for _ in samples])
rewards = np.array([_[2] for _ in samples])
next_states = np.array([_[3][0] for _ in samples])
dones = np.array([_[4] for _ in samples])
return (states, actions, rewards, next_states, dones)
def train_model(self, action, state, next_state, reward, done):
# save sample <s,a,r,s'> to the replay memory
self.memory.append((state, action, reward, next_state, done))
if len(self.memory) >= self.batch_size:
states, actions, rewards, next_states, dones = self.create_minibatch()
states = torch.from_numpy(states).float()
next_states = torch.from_numpy(next_states).float()
vals = self.model(states).gather(1,torch.from_numpy(actions).view(-1,1))
targets = self.model(states)
targets_val = self.target_model(next_states).detach()
for i in range(self.batch_size):
# Approx Q Learning
if dones[i]:
targets[i][actions[i]] = rewards[i]
else:
targets[i][actions[i]] = rewards[i] + self.gamma * (torch.max(targets_val[i]))
# and do the model fit!
self.model.zero_grad()
loss = self.loss(vals, targets.detach().gather(1,torch.from_numpy(actions).view(-1,1)))
loss.backward()
self.optimizer.step()
for i in range(self.batch_size):
self.record(actions[i], states[i], targets[i], targets_val[i], loss / self.batch_size, rewards[i])
agent = DQNAgentWithExplorationAndReplay(env.observation_space, env.action_space)
env.run(agent, episodes=300, print_delay=50)
agent.plot_state()
agent.plot_bellman_residual()
###Output
_____no_output_____
###Markdown
Double DQN Agent with Exploration and Experience ReplayNow we want to have two identical networks and keep frozen for some timesteps the one which is in charge of the evaluation (*i.e* which is used to compute the targets).Note that you can find some variants where the target network is updated at each timestep but with a small fraction of the difference with the policy network.
###Code
class DoubleDQNAgentWithExplorationAndReplay(DQNAgentWithExplorationAndReplay):
def __init__(self, observation_space, action_space):
super(DoubleDQNAgentWithExplorationAndReplay, self).__init__(observation_space, action_space)
# TODO: initialize a second model
self.target_model = Model(input_dim=self.state_size, output_dim=self.action_size)
def update_target_model(self):
# copy weights from the model used for action selection to the model used for computing targets
self.target_model.load_state_dict(self.model.state_dict())
agent = DoubleDQNAgentWithExplorationAndReplay(env.observation_space, env.action_space)
env.run(agent, episodes=300, print_delay=10)
agent.plot_diagnostics()
###Output
_____no_output_____
###Markdown
To observe actual performance of the policy we should set $\varepsilon=0$
###Code
agent.epsilon = 0
agent.memory = deque(maxlen=1)
agent.batch_size = 1
env.run(agent, episodes=300, print_delay=33)
agent.plot_diagnostics()
###Output
_____no_output_____
###Markdown
Duelling DQN If time allows, adapt the description from http://torch.ch/blog/2016/04/30/dueling_dqn.html to our setting
###Code
class DuelingModel(nn.Module):
def __init__(self, input_dim, output_dim):
super(DuelingModel, self).__init__()
self.action_dim = output_dim
self.value_fc1 = nn.Linear(input_dim, ???)
self.value_fc2 = nn.Linear(???, 1)
self.advantage_fc1 = nn.Linear(input_dim, ???)
self.advantage_fc2 = nn.Linear(???, output_dim)
def forward(self, x):
latent_values = F.tanh(self.value_fc1(x))
value = self.value_fc2(latent_values)
value_repeat = torch.cat([value]*self.action_dim, 1)
latent_advantages = F.tanh(self.advantage_fc1(x))
advantage = self.advantage_fc2(latent_advantages)
q_values = advantage + value_repeat
return q_values
class DoubleDuelingDQNAgentWithExplorationAndReplay(DoubleDQNAgentWithExplorationAndReplay):
def __init__(self, observation_space, action_space):
super(DoubleDuelingDQNAgentWithExplorationAndReplay, self).__init__(observation_space, action_space)
self.target_model = DuelingModel(input_dim=self.state_size, output_dim=self.action_size)
def build_model(self):
self.model = DuelingModel(input_dim=self.state_size, output_dim=self.action_size)
self.optimizer = optim.???(self.model.parameters(), lr=self.learning_rate)
self.loss = nn.???Loss()
agent = DoubleDuelingDQNAgentWithExplorationAndReplay(env.observation_space, env.action_space)
env.run(agent, episodes=300, print_delay=50)
agent.plot_diagnostics()
###Output
_____no_output_____ |
Homework6_2.ipynb | ###Markdown
Homework 6 Problem 2 1. How to divide the 24 galaxies into groupsIn Hubble's orginal paper "A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae" 1929, table 1 lists 24 nebulae that are used to plot his famous diagram. Looking at table 1, one realizes that a few nebulae have almost the same distance r, maybe it will be easier to simply group them together as a single data point. This would make sense if the nebulae actually belong to the same cluster of galaxies. I should mention that Hubble called the galaxies, nebulae, since people weren't sure exactly what they were. Nowadays we know they're galaxies, so from this point on, we will just call them galaxies. So let's group the galaxies broadly based on distance r fist. Then we check the positions of these galaxies in the sky. If they are not in roughly the same position in the sky, we can not group them together, since they must be separate galaxies but just happen to have similar distances to Earth. The positions of the galaxies can be found on this website: http://spider.seds.org/ngc/ngc.html. The positions of the galaxies are given by angular measurements in the sky. The right ascension tells the longitudinal position: how far to the East or West in the sky. The Declination tells how high up in the sky is the object located.(Image from wikipedia, https://en.wikipedia.org/wiki/Right_ascension. Attribution: Tfr000 (talk) 15:34, 15 June 2012 (UTC), CC BY-SA 3.0 , via Wikimedia Commons)Looking at table 1, all the galaxies have a catalogue number that we can put into http://spider.seds.org/ngc/ngc.html and find out their positions, except the first two. S.Mag. and L.Mag., what are they? I'm totally guessing here, they're probably the same group, given their similar name and similar distances. S.Mag. might have to do with Andromeda Galaxy, based on google search.Here is my grouping. The group index is the last column of the table: --- As could be seen, I divided the data into 13 groups. Some groups could be merged together, but not being an astronomer, I don't know how big a variation in the angular position is accpetable for grouping the data points together into a single cluster... So I guess I will just go with my 13 groups rather than Hubble's 9 groups. 2. Fitting the dataIn the previous section, we divided the data points into 13 groups, based on their distances and angular positions. I calculated average distance and velocity in each group. Thus we have the following 13 data points:---The following is my attempt to try to read in the data from the CSV file I generated named "Hubble2.csv", and then fit it. I used the code from the Jupyter notebook quake.ipynb to import the data. The code are as follows:
###Code
import numpy as np
import matplotlib.pyplot as plt
from least_squares import least_squares
# Make the plots a bit bigger to see
# NOTE: Must be done in a separate cell
plt.rcParams['figure.dpi'] = 100
# Import the distance data from Hubble's original paper
r = np.genfromtxt(fname='Hubble2.csv', usecols=(0),skip_header=1, delimiter=',')
# Let us check if we indeed imported the distances correctly
r
# Import the velocity data from Hubble's original paper
v = np.genfromtxt(fname='Hubble2.csv', usecols=(1),skip_header=1, delimiter=',')
# Let us check if we indeed imported the velocity correctly
v
###Output
_____no_output_____
###Markdown
---Now that we have correctly imported the data, we can fit the data. The python code least_square.py however is not my own :( I used the one already inside the data analysis folder. I think I understand the idea how the code works. Essentially, we had to minimized the least square function, and so we need to do differentiation and equate the differentials to zero. This results in a set of simultaneous equations, which we can solve to find the gradient and the intercept of our linear fit line. All these steps are done on paper, the code does not differentiate or solve the system of equations! The python code just calculates the results of our solutions. In Problem 1, I have done on paper the minimization of the chi square function. I arrived at the stage where we have a system of equations, but the solutions to the system of equations are incredibly hard to find. It involves a lot of algebra. If I could find those solutions I could then put them in a python code and let it calculate for me.
###Code
# Here I uses the the least_square.py code from the data analysis folder.
# It is the not the code I have written myself though :(
[a, b, sigma, sigma_a, sigma_b] = least_squares(r,v)
n = len(r) # number of galaxies
if n <= 2 :
print ('Error! Need at least two data points!')
exit()
# If we want to check our fitting result against numpy's fitting result, we can add the following line.
# p,cov = np.polyfit( r, v, 1, cov=True)
# Print out results
print (' Least squares fit of', n, 'data points')
print (' -----------------------------------')
print (" Hubble's constant slope b = {0:6.2f} +- {1:6.2f} km/s/Mpc".format( b, sigma_b))
print (" Intercept with r axis a = {0:6.2f} +- {1:6.2f} km/s".format( a, sigma_a))
print (' Estimated v error bar sigma =', round(sigma, 1), 'km/s')
# Again, If we want to check our fitting result against numpy's fitting result, we can add these line.
# print (" numpy's values: b = {0:6.2f} +- {1:6.2f} km/s/Mpc".format( p[0], np.sqrt(cov[0,0])))
# print (" a = {0:6.2f} +- {1:6.2f} km/s/Mpc".format( p[1], np.sqrt(cov[1,1])))
rvals = np.linspace(0., 2.0, 21)
f = a + b * rvals
fnp = p[1] + p[0] * rvals
plt.figure(1)
plt.scatter( r, v, label = "Data" )
plt.plot( rvals, f , label="Our fit")
# If we want to compare to numpy fitting result we can add the following line.
# plt.plot( rvals, fnp, label = "numpy fit")
plt.xlabel("Distance (Mpc)")
plt.ylabel("Velocity (km/s)")
plt.legend()
plt.show()
###Output
Least squares fit of 13 data points
-----------------------------------
Hubble's constant slope b = 386.59 +- 99.55 km/s/Mpc
Intercept with r axis a = 80.39 +- 98.69 km/s
Estimated v error bar sigma = 193.7 km/s
|
Identify Customer Segments.ipynb | ###Markdown
Project: Identify Customer SegmentsIn this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. **Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks.** The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it.It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. **There will be times in the project where you will need to make and justify your own decisions on how to treat the data.** These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.At the end of most sections, there will be a Markdown cell labeled **Discussion**. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. **Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.**
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# magic word for producing visualizations in notebook
%matplotlib inline
'''
Import note: The classroom currently uses sklearn version 0.19.
If you need to use an imputer, it is available in sklearn.preprocessing.Imputer,
instead of sklearn.impute as in newer versions of sklearn.
'''
###Output
_____no_output_____
###Markdown
Step 0: Load the DataThere are four files associated with this project (not including this one):- `Udacity_AZDIAS_Subset.csv`: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).- `Udacity_CUSTOMERS_Subset.csv`: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).- `Data_Dictionary.md`: Detailed information file about the features in the provided datasets.- `AZDIAS_Feature_Summary.csv`: Summary of feature attributes for demographics data; 85 features (rows) x 4 columnsEach row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the `.csv` data files in this project: they're semicolon (`;`) delimited, so you'll need an additional argument in your [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.
###Code
# Load in the general demographics data.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv',sep=';')
# Load in the feature summary file.
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv',sep=';')
np.shape(azdias)
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
azdias.head()
feat_info.head()
###Output
_____no_output_____
###Markdown
> **Tip**: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut `esc --> a` (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and `esc --> b` adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use `esc --> m` and to convert to a code cell, use `esc --> y`. Step 1: Preprocessing Step 1.1: Assess Missing DataThe feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the **Discussion** cell with your findings and decisions at the end of each step that has one! Step 1.1.1: Convert Missing Value Codes to NaNsThe fourth column of the feature attributes summary (loaded in above as `feat_info`) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. `[-1,0]`), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.**As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.**
###Code
def unknown_col_correction(unknown_col):
unknown_values_corr=pd.DataFrame()
for x in unknown_col:
x=list(x)
x.remove('[')
x.remove(']')
while ',' in x:
x.remove(',')
unknown_values=[]
flag=0
for i in range(len(x)):
if flag==1:
flag=0
continue
if x[i]=='-':
unknown_values.append(int(x[i+1])*-1)
flag=1
elif x[i]=='X':
unknown_values.append("XX")
flag=1
else:
unknown_values.append(int(x[i]))
unknown_values_corr=unknown_values_corr.append(pd.Series([unknown_values]),ignore_index=True)
return unknown_values_corr
corrected_unknow_values=unknown_col_correction(feat_info.iloc[:,3])
feat_info['corrected_unknown']=corrected_unknow_values
feat_info.head()
# Identify missing or unknown data values and convert them to NaNs.
def missing_values_convert(df_input):
for i in range(len(feat_info)):
for unknown_value in feat_info.iloc[i,4]:
df_input.iloc[:,i][df_input.iloc[:,i]== unknown_value]=np.nan
return df_input
azdias=missing_values_convert(azdias)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""
###Markdown
Step 1.1.2: Assess Missing Data in Each ColumnHow much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's [`hist()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)For the remaining features, are there any patterns in which columns have, or share, missing data?
###Code
# Perform an assessment of how much missing data there is in each column of the
# dataset.
missing_data = pd.DataFrame(azdias.isnull().sum().reset_index())
missing_data.columns = ['Column_name','Count_missing_value']
missing_data.hist(bins=85)
# features with no missing or unknow values
complete_features=missing_data[(missing_data['Count_missing_value']==0)]['Column_name']
complete_features
# features with missing values but not outliers
features_missing_data=missing_data[(missing_data['Count_missing_value']>0) & (missing_data['Count_missing_value']<200000) ]
features_missing_data
missing_data[ (missing_data['Count_missing_value']>200000) ]
outliers_features=missing_data[(missing_data['Count_missing_value']>200000)]['Column_name']
outliers_features
# Investigate patterns in the amount of missing data in each column.
missing_data[(missing_data['Count_missing_value']>0) & (missing_data['Count_missing_value']<200000) ].hist()
# Remove the outlier columns from the dataset. (You'll perform other data
# engineering tasks such as re-encoding and imputation later.)
azdias.drop(labels=outliers_features ,axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Discussion 1.1.2: Assess Missing Data in Each Column(Double click this cell and replace this text with your own text, reporting your observations regarding the amount of missing data in each column. Are there any patterns in missing values? Which columns were removed from the dataset?)My observation regarding the number of missing data:The columns which have no missing or unknown are the Personality typology and the Financial typology and both are the core features, so they cannot be missing.Most of the missing values are in the range between 0 and 200,000 and they are centered around 100,000.The columns removed from the dataset are the following:* AGER_TYP* GEBURTSJAHR* TITEL_KZ* ALTER_HH* KK_KUNDENTYP* KBA05_BAUMAX Step 1.1.3: Assess Missing Data in Each RowNow, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have a very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.- You can use seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) function to create a bar chart of code frequencies and matplotlib's [`subplot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplot.html) function to put bar charts for the two subplots side by side.- To reduce repeated code, you might want to write a function that can perform this comparison, taking as one of its arguments a column to be compared.Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. **Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.**
###Code
# How much data is missing in each row of the dataset?
azdias['number_missing_values']=azdias.isnull().sum(axis=1)
azdias['number_missing_values'].hist(bins=20)
# Write code to divide the data into two subsets based on the number of missing
# values in each row.
azdias_below_threshold=azdias[azdias['number_missing_values']<30]
azdias_above_threshold=azdias[azdias['number_missing_values']>30]
def distrubition_compare(features,input1,input2):
fig, axes = plt.subplots(5, 2, figsize=(10, 30))
for i,feature in enumerate(features):
sns.countplot(ax=axes[i,0],data=input1[feature],x=input1[feature])
axes[i,0].set_title(feature)
sns.countplot(ax=axes[i,1],data=input2[feature],x=input2[feature])
axes[i,1].set_title(feature)
# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.
no_missing_values_features=['SEMIO_SOZ','SEMIO_MAT','FINANZTYP','FINANZ_HAUSBAUER', 'SEMIO_TRADV']
distrubition_compare(no_missing_values_features,azdias_above_threshold,azdias_below_threshold)
###Output
_____no_output_____
###Markdown
Discussion 1.1.3: Assess Missing Data in Each Row(Double-click this cell and replace this text with your own text, reporting your observations regarding missing data in rows. Are the data with lots of missing values are qualitatively different from data with few or no missing values?)yes, the data with lots of missing values are concetated in one value, while the data with few missing values are evenly distrubeted around all the values. Step 1.2: Select and Re-Encode FeaturesChecking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (`feat_info`) for a summary of types of measurement.- For numeric and interval data, these features can be kept without changes.- Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).- Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns.Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!
###Code
# How many features are there of each data type?
type_count=feat_info['type'].value_counts()
type_count
###Output
_____no_output_____
###Markdown
Step 1.2.1: Re-Encode Categorical FeaturesFor categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:- For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything.- There is one binary variable that takes on non-numeric values. For this one, you need to re-encode the values as numbers or create a dummy variable.- For multi-level categoricals (three or more values), you can choose to encode the values using multiple dummy variables (e.g. via [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)), or (to keep things straightforward) just drop them from the analysis. As always, document your choices in the Discussion section.
###Code
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?
cat_features=feat_info[(feat_info['type']== 'categorical')]
for i in range(len(outliers_features.values)):
cat_features=cat_features[cat_features['attribute']!=outliers_features.values[i]]
cat_feat_unique_values=pd.DataFrame(azdias_below_threshold[cat_features['attribute']].nunique())
cat_feat_unique_values=cat_feat_unique_values.reset_index()
cat_feat_unique_values.rename(columns={'index':'features', 0:'number_unique_values'},inplace=True)
binary_features= cat_feat_unique_values[cat_feat_unique_values['number_unique_values']==2]
multi_level_features= cat_feat_unique_values[cat_feat_unique_values['number_unique_values']>=3]
azdias_below_threshold.drop(labels=multi_level_features['features'],axis=1,inplace=True)
# Re-encode categorical variable(s) to be kept in the analysis.
feature_reencoded=azdias_below_threshold[binary_features['features']].select_dtypes(include=['object']).columns
azdias_below_threshold[feature_reencoded]=azdias_below_threshold[feature_reencoded].replace({'W': 1,'O':0})
###Output
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3140: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self[k1] = value[k2]
###Markdown
Discussion 1.2.1: Re-Encode Categorical Features(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding categorical features. Which ones did you keep, which did you drop, and what engineering steps did you perform?)I kept the following features :* ANREDE_KZ* GREEN_AVANTGARDE* SOHO_KZ* VERS_TYP* OST_WEST_KZRemoved the following:* CJT_GESAMTTYP* FINANZTYP* GFK_URLAUBERTYP* LP_FAMILIE_FEIN* LP_FAMILIE_GROB* LP_STATUS_FEIN* LP_STATUS_GROB* NATIONALITAET_KZ* SHOPPER_TYP* ZABEOTYP* GEBAEUDETYP* CAMEO_DEUG_2015* CAMEO_DEU_2015For the feature OST_WEST_KZ , it has non numeric value "W" and it was replaced with numeric value 1 instead and "O" was replaced with 0 Step 1.2.2: Engineer Mixed-Type FeaturesThere are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest are up to your own choices:- "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.- "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).- If you decide to keep or engineer new features around the other mixed-type features, make sure you note your steps in the Discussion section.Be sure to check `Data_Dictionary.md` for the details needed to finish these tasks.
###Code
mixed_features=feat_info[(feat_info['type']== 'mixed') ]
for i in range(len(outliers_features.values)):
mixed_features=mixed_features[mixed_features['attribute']!=outliers_features.values[i]]
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
azdias_below_threshold['PRAEGENDE_JUGENDJAHRE_age']= azdias_below_threshold['PRAEGENDE_JUGENDJAHRE'].replace({1:1,2:1,3:2,4:2,5:3,6:3,7:3,8:4,9:4,10:5,11:5,12:5,13:5,14:6,15:6})
azdias_below_threshold['PRAEGENDE_JUGENDJAHRE_movment']=azdias_below_threshold['PRAEGENDE_JUGENDJAHRE'].replace({1:0,3:0,5:0,8:0,10:0,12:0,14:0,2:1,4:1,6:1,7:1,9:1,11:1,13:1,15:1})
# Investigate "CAMEO_INTL_2015" and engineer two new variables.
unique_values=azdias_below_threshold['CAMEO_INTL_2015'].unique()
unique_values=unique_values.astype(float)
unique_values=np.delete(unique_values,np.argwhere(np.isnan(unique_values)))
unique_values=unique_values.astype(int)
azdias_below_threshold['CAMEO_INTL_2015_wealth']=azdias_below_threshold['CAMEO_INTL_2015']
azdias_below_threshold['CAMEO_INTL_2015_family_stage']=azdias_below_threshold['CAMEO_INTL_2015']
for unique_value in unique_values:
azdias_below_threshold['CAMEO_INTL_2015_wealth']=azdias_below_threshold['CAMEO_INTL_2015_wealth'].replace({str(unique_value):int(unique_value/10)})
azdias_below_threshold['CAMEO_INTL_2015_family_stage']=azdias_below_threshold['CAMEO_INTL_2015_family_stage'].replace({str(unique_value):int(str(unique_value)[1])})
azdias_below_threshold.drop(labels=mixed_features['attribute'],axis=1,inplace=True)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
import sys
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:8: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:11: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
# This is added back by InteractiveShellApp.init_path()
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if sys.path[0] == '':
/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py:3697: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
errors=errors)
###Markdown
Discussion 1.2.2: Engineer Mixed-Type Features(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding mixed-value features. Which ones did you keep, which did you drop, and what engineering steps did you perform?)The features that were kept PRAEGENDE_JUGENDJAHRE and CAMEO_INTL_2015, but after being splitted into 4 features.The following were removed: LP_LEBENSPHASE_FEIN* LP_LEBENSPHASE_GROB* WOHNLAGE* PLZ8_BAUMAX Step 1.2.3: Complete Feature SelectionIn order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:- All numeric, interval, and ordinal type columns from the original dataset.- Binary categorical features (all numerically-encoded).- Engineered features from other multi-level categorical features and mixed features.Make sure that for any new columns that you have engineered, that you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from **the subset with few or no missing values**.
###Code
# If there are other re-engineering tasks you need to perform, make sure you
# take care of them here. (Dealing with missing data will come in step 2.1.)
# Do whatever you need to in order to ensure that the dataframe only contains
# the columns that should be passed to the algorithm functions.
###Output
_____no_output_____
###Markdown
Step 1.3: Create a Cleaning FunctionEven though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.
###Code
def removing_columns_rows(df,outliers_features ,row_threshold):
missing_data = pd.DataFrame(df.isnull().sum().reset_index())
missing_data.columns = ['Column_name','Count_missing_value']
#outliers_features=missing_data[(missing_data['Count_missing_value']>column_threshold)]['Column_name']
df.drop(labels=outliers_features ,axis=1,inplace=True)
df['number_missing_values']=df.isnull().sum(axis=1)
df=df[df['number_missing_values']<row_threshold]
return df
def removing_rencoding_features(df,outliers_features):
# categorical features re-encoding
cat_features=feat_info[(feat_info['type']== 'categorical')]
for i in range(len(outliers_features.values)):
cat_features=cat_features[cat_features['attribute']!=outliers_features.values[i]]
cat_feat_unique_values=pd.DataFrame(df[cat_features['attribute']].nunique())
cat_feat_unique_values=cat_feat_unique_values.reset_index()
cat_feat_unique_values.rename(columns={'index':'features', 0:'number_unique_values'},inplace=True)
binary_features= cat_feat_unique_values[cat_feat_unique_values['number_unique_values']==2]
multi_level_features= cat_feat_unique_values[cat_feat_unique_values['number_unique_values']>=3]
df=df.drop(labels=multi_level_features['features'],axis=1)
feature_reencoded=df[binary_features['features']].select_dtypes(include=['object']).columns
df[feature_reencoded]=df[feature_reencoded].replace({'W':1,'O':0})
# mixed features re-encoding
mixed_features=feat_info[(feat_info['type']== 'mixed') ]
for i in range(len(outliers_features.values)):
mixed_features=mixed_features[mixed_features['attribute']!=outliers_features.values[i]]
df['PRAEGENDE_JUGENDJAHRE_age']= df['PRAEGENDE_JUGENDJAHRE'].replace({1:1,2:1,3:2,4:2,5:3,6:3,7:3,8:4,9:4,10:5,11:5,12:5,13:5,14:6,15:6})
df['PRAEGENDE_JUGENDJAHRE_movment']=df['PRAEGENDE_JUGENDJAHRE'].replace({1:0,3:0,5:0,8:0,10:0,12:0,14:0,2:1,4:1,6:1,7:1,9:1,11:1,13:1,15:1})
unique_values=df['CAMEO_INTL_2015'].unique()
unique_values=unique_values.astype(float)
unique_values=np.delete(unique_values,np.argwhere(np.isnan(unique_values)))
unique_values=unique_values.astype(int)
df['CAMEO_INTL_2015_wealth']=df['CAMEO_INTL_2015']
df['CAMEO_INTL_2015_family_stage']=df['CAMEO_INTL_2015']
for unique_value in unique_values:
df['CAMEO_INTL_2015_wealth']=df['CAMEO_INTL_2015_wealth'].replace({str(unique_value):int(unique_value/10)})
df['CAMEO_INTL_2015_family_stage']=df['CAMEO_INTL_2015_family_stage'].replace({str(unique_value):int(str(unique_value)[1])})
df=df.drop(labels=mixed_features['attribute'],axis=1)
return df
def clean_data(df,row_threshold):
"""
Perform feature trimming, re-encoding, and engineering for demographics
data
INPUT: Demographics DataFrame
OUTPUT: Trimmed and cleaned demographics DataFrame
"""
# Put in code here to execute all main cleaning steps:
# convert missing value codes into NaNs, ...
df=missing_values_convert(df)
# remove selected columns and rows, ...
df_col_row_removed=removing_columns_rows(df,outliers_features ,row_threshold)
# select, re-encode, and engineer column values.
df_rencoded=removing_rencoding_features(df_col_row_removed,outliers_features)
# Return the cleaned dataframe.
return df_rencoded
###Output
_____no_output_____
###Markdown
Step 2: Feature Transformation Step 2.1: Apply Feature ScalingBefore we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the [API reference page for sklearn](http://scikit-learn.org/stable/modules/classes.html) to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:- sklearn requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to your data, make sure that you've cleaned the DataFrame of the remaining missing values. This can be as simple as just removing all data points with missing data, or applying an [Imputer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html) to replace all missing values. You might also try a more complicated procedure where you temporarily remove missing values in order to compute the scaling parameters before re-introducing those missing values and applying imputation. Think about how much missing data you have and what possible effects each approach might have on your analysis, and justify your decision in the discussion section below.- For the actual scaling function, a [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) instance is suggested, scaling each feature to mean 0 and standard deviation 1.- For these classes, you can make use of the `.fit_transform()` method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.
###Code
# If you've not yet cleaned the dataset of all NaN values, then investigate and
# do that now.
column_miising=pd.DataFrame(azdias_below_threshold.isna()).sum(axis=0)
rows_miising=pd.DataFrame(azdias_below_threshold.isna()).sum(axis=1)
column_miising.hist()
plt.figure()
rows_miising.hist()
from sklearn.preprocessing import Imputer
simple_imp=Imputer(missing_values=np.nan, strategy='most_frequent')
simple_imp_model=simple_imp.fit(azdias_below_threshold)
azdias_imputed=simple_imp_model.transform(azdias_below_threshold)
# Apply feature scaling to the general population demographics data.
from sklearn.preprocessing import StandardScaler
stand=StandardScaler()
stand.fit(azdias_imputed)
azdias_scaled=stand.transform(azdias_imputed)
###Output
_____no_output_____
###Markdown
Discussion 2.1: Apply Feature Scaling(Double-click this cell and replace this text with your own text, reporting your decisions regarding feature scaling.)* The missing value was replaced witht the most frequent value for each column and then standarization was applied on the data Step 2.2: Perform Dimensionality ReductionOn your scaled data, you are now ready to apply dimensionality reduction techniques.- Use sklearn's [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) class to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, you should not set any parameters (so all components are computed) or set a number of components that is at least half the number of features (so there's enough features to see the general trend in variability).- Check out the ratio of variance explained by each principal component as well as the cumulative variance explained. Try plotting the cumulative or sequential values using matplotlib's [`plot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.- Once you've made a choice for the number of components to keep, make sure you re-fit a PCA instance to perform the decided-on transformation.
###Code
# Apply PCA to the data.
from sklearn.decomposition import PCA
pca=PCA(n_components=50)
pca.fit_transform(azdias_scaled)
# Investigate the variance accounted for by each principal component.
num_components=len(pca.explained_variance_ratio_)
ind = np.arange(num_components)
vals = pca.explained_variance_ratio_
plt.figure(figsize=(25, 6))
ax = plt.subplot(111)
cumvals = np.cumsum(vals)
ax.bar(ind, vals)
ax.plot(ind, cumvals)
for i in range(num_components):
ax.annotate(r"%s%%" % ((str(vals[i]*100)[:4])), (ind[i]+0.15, vals[i]), va="bottom", ha="center", fontsize=8)
ax.xaxis.set_tick_params(width=0)
ax.yaxis.set_tick_params(width=2, length=12)
ax.set_xlabel("Principal Component")
ax.set_ylabel("Variance Explained (%)")
plt.title('Explained Variance Per Principal Component')
# Re-apply PCA to the data while selecting for number of components to retain.
pca=PCA(n_components=10)
pca_model=pca.fit(azdias_scaled)
azdias_pca=pca_model.transform(azdias_scaled)
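# Optional sketch (assumption: the ~85% variance target is only an illustrative choice):
# the number of components can also be chosen programmatically from the cumulative
# explained variance of the 50-component fit computed above (stored in `cumvals`).
variance_target = 0.85
if cumvals[-1] >= variance_target:
    n_needed = int(np.argmax(cumvals >= variance_target) + 1)
    print("Components needed to explain {:.0%} of the variance: {}".format(variance_target, n_needed))
else:
    print("The 50 components above explain only {:.0%} of the variance in total.".format(cumvals[-1]))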
###Output
_____no_output_____
###Markdown
Discussion 2.2: Perform Dimensionality Reduction(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding dimensionality reduction. How many principal components / transformed features are you retaining for the next step of the analysis?)* The number of principal components retained for the next step is 10. Step 2.3: Interpret Principal ComponentsNow that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. In contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.- To investigate the features, you should map each weight to its corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. Use the data dictionary document to help you understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.- You should investigate and interpret feature associations from the first three principal components in this substep. To help facilitate this, you should write a function that you can call at any time to print the sorted list of feature weights, for the *i*-th principal component. This might come in handy in the next step of the project, when you interpret the tendencies of the discovered clusters.
###Code
def get_feature_importance(pca_model, i):
    # Return the feature weights of the i-th principal component (1-indexed),
    # sorted from most positive to most negative.
    feature_names = azdias_below_threshold.columns.values
    components_df = pd.DataFrame(pca_model.components_)
    components_df.columns = feature_names
    components_df = components_df.transpose()
    component_names = []
    for j in range(len(pca_model.components_)):
        component_names = np.append(component_names, 'component_' + str(j))
    components_df.columns = component_names
    sorted_df_component = components_df.sort_values(by=['component_' + str(i - 1)], axis=0, ascending=False)
    return sorted_df_component['component_' + str(i - 1)]
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.
first_component_weights = get_feature_importance(pca_model, 1)
first_component_weights
# Map weights for the second principal component to corresponding feature names
# and then print the linked values, sorted by weight.
second_component_weights = get_feature_importance(pca_model, 2)
second_component_weights
# Map weights for the third principal component to corresponding feature names
# and then print the linked values, sorted by weight.
third_component_weights = get_feature_importance(pca_model, 3)
third_component_weights
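# Optional helper sketch (assumption): since the most interpretable features sit at the
# two ends of the sorted weight list, this prints only the strongest positive and
# negative weights for a given component.
def show_extreme_weights(pca_model, i, n=5):
    weights = get_feature_importance(pca_model, i)
    print("Component {}: top {} positive weights".format(i, n))
    print(weights.head(n))
    print("Component {}: top {} negative weights".format(i, n))
    print(weights.tail(n))

show_extreme_weights(pca_model, 1)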
###Output
_____no_output_____
###Markdown
Discussion 2.3: Interpret Principal Components(Double-click this cell and replace this text with your own text, reporting your observations from detailed investigation of the first few principal components generated. Can we interpret positive and negative values from them in a meaningful way?)* Yes, the positive and negative values are negatively correlated. For example, in the first component, the features with the highest positive weights relate to dreamful, culture- and family-oriented personalities, while the features with the highest negative weights relate to rational, critical-thinking personalities. This shows that they are negatively correlated: as one increases, the other decreases. Step 3: Clustering Step 3.1: Apply Clustering to General PopulationYou've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.- Use sklearn's [KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans) class to perform k-means clustering on the PCA-transformed data.- Then, compute the average difference from each point to its assigned cluster's center. **Hint**: The KMeans object's `.score()` method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. Try applying it to a small, toy dataset, or use an internet search to help your understanding.- Perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. However, each additional cluster provides a smaller net benefit. Use this fact to select a final number of clusters in which to group the data. **Warning**: because of the large size of the dataset, it can take a long time for the algorithm to resolve. The more clusters to fit, the longer the algorithm will take. You should test for cluster counts through at least 10 clusters to get the full picture, but you shouldn't need to test for a number of clusters above about 30.- Once you've selected a final number of clusters to use, re-fit a KMeans instance to perform the clustering operation. Make sure that you also obtain the cluster assignments for the general demographics data, since you'll be using them in the final Step 3.3.
###Code
from sklearn.cluster import KMeans
# Over a number of different cluster counts...
K=[10,15,20,25,30]
scores=[]
for k in K:
# run k-means clustering on the data and...
kmeans = KMeans(n_clusters=k)
model_k=kmeans.fit(azdias_pca)
labels=model_k.predict(azdias_pca)
# compute the average within-cluster distances.
score=model_k.score(azdias_pca)
scores=np.append(scores,score)
# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship.
plt.plot(K,-1*scores,linestyle='--', marker='o', color='b')
# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
from sklearn.cluster import KMeans
best_k=30
kmeans = KMeans(n_clusters=best_k)
model_k=kmeans.fit(azdias_pca)
labels_demo=model_k.predict(azdias_pca)
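# Optional sketch (interpretation note): KMeans.score returns the negative sum of squared
# distances to the closest centroid, so the average squared distance per point for the
# final k=30 model can be recovered as -score / number_of_rows.
avg_sq_distance = -model_k.score(azdias_pca) / azdias_pca.shape[0]
print("Average squared distance to the assigned centroid (k=30):", avg_sq_distance)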
###Output
_____no_output_____
###Markdown
Discussion 3.1: Apply Clustering to General Population(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding clustering. Into how many clusters have you decided to segment the population?)* The population was segmented into k = 30 clusters. Step 3.2: Apply All Steps to the Customer DataNow that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.- Don't forget when loading in the customers data, that it is semicolon (`;`) delimited.- Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the `clean_data()` function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)- Use the sklearn objects from the general demographics data, and apply their transformations to the customers data. That is, you should not be using a `.fit()` or `.fit_transform()` method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.
###Code
# Load in the customer demographics data.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv',sep=';')
# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
customers_cleared=clean_data(customers,30)
customer_imputed=simple_imp_model.transform(customers_cleared)
customers_stand=stand.transform(customer_imputed)
customers_pca=pca_model.transform(customers_stand)
labels_customers=model_k.predict(customers_pca)
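# Quick sanity check (added for illustration): after re-using the fitted imputer, scaler,
# and PCA objects, the customer data should have the same number of PCA features as the
# general population data.
print("General population PCA shape:", azdias_pca.shape)
print("Customer PCA shape:", customers_pca.shape)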
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
"""
###Markdown
Step 3.3: Compare Customer Data to Demographics DataAt this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.Take a look at the following points in this step:- Compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, but also to visualize the ratios in cluster representation between groups. Seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) or [`barplot()`](https://seaborn.pydata.org/generated/seaborn.barplot.html) function could be handy. - Recall the analysis you performed in step 1.1.3 of the project, where you separated out certain data points from the dataset if they had more than a specified threshold of missing values. If you found that this group was qualitatively different from the main bulk of the data, you should treat this as an additional data cluster in this analysis. Make sure that you account for the number of data points in this subset, for both the general population and customer datasets, when making your computations!- Which cluster or clusters are overrepresented in the customer dataset compared to the general population? Select at least one such cluster and infer what kind of people might be represented by that cluster. Use the principal component interpretations from step 2.3 or look at additional components to help you make this inference. Alternatively, you can use the `.inverse_transform()` method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly.- Perform a similar investigation for the underrepresented clusters. Which cluster or clusters are underrepresented in the customer dataset compared to the general population, and what kinds of people are typified by these clusters?
###Code
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
from collections import Counter
y=Counter(labels_demo)
x=Counter(labels_customers)
subjects_per_labels_demo=pd.DataFrame(index=list(y.keys()),data=list(y.values()),columns=['demo_data'])
subjects_per_labels_demo.sort_index(axis=0,inplace=True)
subjects_per_labels_customers=pd.DataFrame(index=list(x.keys()),data=list(x.values()),columns=['customers_data'])
subjects_per_labels_customers.sort_index(axis=0,inplace=True)
subjects_per_labels=pd.concat([subjects_per_labels_demo,subjects_per_labels_customers],axis=1)
subjects_per_labels['demo_data_prop']=100*(subjects_per_labels['demo_data']/sum(subjects_per_labels['demo_data']))
subjects_per_labels['customer_data_prop']=100*(subjects_per_labels['customers_data']/sum(subjects_per_labels['customers_data']))
subjects_per_labels.plot(kind='bar',figsize=(10,10),y=['demo_data_prop','customer_data_prop'])
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?
customers_overrepresented_labels=[2,4,15,16,21,25]
overrepresented_subjects_index=np.where(np.in1d(labels_customers,customers_overrepresented_labels))[0]
overrepresented_subject_index_25=np.where(np.in1d(labels_customers,customers_overrepresented_labels[5]))[0]
overrepresented_comp = customers_pca[overrepresented_subject_index_25, :]
average = overrepresented_comp.mean(axis=0)
first_component_weights = get_feature_importance(pca_model, 1)
first_component_weights
third_component_weights = get_feature_importance(pca_model, 3)
third_component_weights
fourth_component_weights = get_feature_importance(pca_model, 4)
fourth_component_weights
second_component_weights = get_feature_importance(pca_model, 2)
second_component_weights
# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?
customers_underrepresented_labels=[0,1,3,5,6,7,17,18,19,20,24,29]
underrepresented_subjects_index=np.where(np.in1d(labels_customers,customers_underrepresented_labels))[0]
underrepresented_subject_index_5=np.where(np.in1d(labels_customers,customers_underrepresented_labels[3]))[0]
underrepresented_comp=customers_pca[underrepresented_subject_index_5,:]
average=underrepresented_comp.mean(axis=0)
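# Optional sketch (assumptions: cluster 25 is used purely as an example of an
# overrepresented cluster, and the imputer kept every column of azdias_below_threshold):
# centroids can be mapped back to the original feature space with the inverse transforms
# of the PCA and StandardScaler objects, which makes the clusters easier to interpret.
centroid_25 = model_k.cluster_centers_[25].reshape(1, -1)
centroid_25_original = stand.inverse_transform(pca_model.inverse_transform(centroid_25))
centroid_25_series = pd.Series(centroid_25_original[0], index=azdias_below_threshold.columns)
print(centroid_25_series.sort_values(ascending=False).head(10))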
###Output
_____no_output_____ |
module2/262_assignment_kaggle_challenge_2.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Kaggle Challenge, Module 2 Assignment- [ ] Read [“Adopting a Hypothesis-Driven Workflow”](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.- [ ] Continue to participate in our Kaggle challenge.- [ ] Try Ordinal Encoding.- [ ] Try a Random Forest Classifier.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Doing- [ ] Add your own stretch goal(s) !- [ ] Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.- [ ] Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/).- [ ] Get and plot your feature importances.- [ ] Make visualizations and share on Slack. ReadingTop recommendations in _**bold italic:**_ Decision Trees- A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and _**[Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)**_- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.htmladvantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let’s Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) Random Forests- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 8: Tree-Based Methods- [Coloring with Random Forests](http://structuringtheunstructured.blogspot.com/2017/11/coloring-with-random-forests.html)- _**[Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)**_ Categorical encoding for trees- [Are categorical variables getting lost in your random forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)- [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/)- _**[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)**_- _**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)**_- [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study)- [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html) Imposter Syndrome- [Effort Shock and Reward Shock (How The Karate Kid Ruined The Modern World)](http://www.tempobook.com/2014/07/09/effort-shock-and-reward-shock/)- [How to manage impostor syndrome in data science](https://towardsdatascience.com/how-to-manage-impostor-syndrome-in-data-science-ad814809f068)- ["I am not a real data 
scientist"](https://brohrer.github.io/imposter_syndrome.html)- _**[Imposter Syndrome in Data Science](https://caitlinhudon.com/2018/01/19/imposter-syndrome-in-data-science/)**_
###Code
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
# Change into directory for module
os.chdir('module2')
import pandas as pd
from sklearn.model_selection import train_test_split
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
###Output
_____no_output_____
###Markdown
Assignment- [ ] Read [“Adopting a Hypothesis-Driven Workflow”](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.- [ ] Continue to participate in our Kaggle challenge.- [ ] Try Ordinal Encoding.- [ ] Try a Random Forest Classifier.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo.
###Code
# copied from the previous day's assignment
import numpy as np
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
train, validate = train_test_split(train, train_size=0.80, test_size=0.20,
stratify=train['status_group'], random_state=42)
train.shape, validate.shape, test.shape
def cleaner(X):
# stop SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'district_code']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
# quantity & quantity_group are duplicates, so drop one
# X = X.drop(columns='quantity_group')
X = X.drop(columns=['quantity_group', 'installer', 'extraction_type_group',
'extraction_type_class', 'payment_type', 'waterpoint_type_group'])
#removing columns negatively impacts validation accuracy
#convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X.date_recorded)
#create a new feature for pump_age
X['pump_age'] = X.date_recorded.dt.year - X.construction_year
# replace negative pump ages with nan
# which also decreased validation accuracy slightly
X['pump_age'] = X['pump_age'].replace([-7, -6, -5, -4, -3, -2, -1], np.nan)
# return the wrangled dataframe
return X
train = cleaner(train)
validate = cleaner(validate)
test = cleaner(test)
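# Quick check (added for illustration): confirm the engineered pump_age feature looks
# sensible after the negative ages were converted to NaN.
print(train['pump_age'].describe())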
#exclude the target column
target = 'status_group'
# remove target and id columns
train_features = train.drop(columns=[target, 'id'])
# list of only the numeric features
numeric_features = train_features.select_dtypes(include='number').columns.tolist()
# Get a series with the cardinality of the categorical features
cardinality = train_features.select_dtypes(exclude='number').nunique()
# all categorical features with cardinality <= 50
categorical_features = cardinality[cardinality <= 50].index.tolist()
# Combine the lists
features = numeric_features + categorical_features
print(features)
# Arrange data into X features matrices and y target vectors
X_train = train[features]
y_train = train[target]
X_validate = validate[features]
y_validate = validate[target]
X_test = test[features]
random_forest = make_pipeline(
# ce.OneHotEncoder(use_cat_names=True),
# DecisionTreeClassifier(random_state=42)
ce.OrdinalEncoder(),
# SimpleImputer(),
SimpleImputer(strategy="most_frequent"),
# SimpleImputer(strategy="median"),
# IterativeImputer(), # lowered validation accuracy
StandardScaler(),
# RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
# RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1, min_samples_split=3), # 80.538
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1, min_samples_split=4), # 80.93
# RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1, min_samples_split=5), # 80.77
# I get best validation accuracy with no max depth but that over fits the
# training data
)
# Fit on train
random_forest.fit(X_train, y_train)
print('Train Accuracy', random_forest.score(X_train, y_train))
print('Validation Accuracy', random_forest.score(X_validate, y_validate))
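# Optional sketch (assumption: this simple loop is just one way to compare the
# min_samples_split values that were tried by hand in the commented-out lines above).
for split in [2, 3, 4, 5]:
    candidate = make_pipeline(
        ce.OrdinalEncoder(),
        SimpleImputer(strategy='most_frequent'),
        StandardScaler(),
        RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1,
                               min_samples_split=split)
    )
    candidate.fit(X_train, y_train)
    print('min_samples_split={}: validation accuracy {:.4f}'.format(
        split, candidate.score(X_validate, y_validate)))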
model = random_forest.named_steps['randomforestclassifier']
encoder = random_forest.named_steps['ordinalencoder']
encoded_columns = encoder.fit_transform(X_train).columns
importances = pd.Series(model.feature_importances_, encoded_columns)
plt.figure(figsize=(10,30))
importances.sort_values().plot.barh();
test_pred = random_forest.predict(X_test)
submission = sample_submission.copy()
submission['status_group'] = test_pred
submission.to_csv('submission-03.csv', index=False)
random_forest.named_steps
X_train.population
###Output
_____no_output_____ |
5.2-(Colab)using-convnets-with-small-datasets.ipynb | ###Markdown
5.2 - Using convnets with small datasetsThis notebook contains the code sample found in Chapter 5, Section 2 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments. Training a convnet from scratch on a small datasetHaving to train an image classification model using only very little data is a common situation, which you likely encounter yourself in practice if you ever do computer vision in a professional context.Having "few" samples can mean anywhere from a few hundreds to a few tens of thousands of images. As a practical example, we will focus on classifying images as "dogs" or "cats", in a dataset containing 4000 pictures of cats and dogs (2000 cats, 2000 dogs). We will use 2000 pictures for training, 1000 for validation, and finally 1000 for testing.In this section, we will review one basic strategy to tackle this problem: training a new model from scratch on what little data we have. We will start by naively training a small convnet on our 2000 training samples, without any regularization, to set a baseline for what can be achieved. This will get us to a classification accuracy of 71%. At that point, our main issue will be overfitting. Then we will introduce *data augmentation*, a powerful technique for mitigating overfitting in computer vision. By leveraging data augmentation, we will improve our network to reach an accuracy of 82%.In the next section, we will review two more essential techniques for applying deep learning to small datasets: *doing feature extraction with a pre-trained network* (this will get us to an accuracy of 90% to 93%), and *fine-tuning a pre-trained network* (this will get us to our final accuracy of 95%). Together, these three strategies -- training a small model from scratch, doing feature extracting using a pre-trained model, and fine-tuning a pre-trained model -- will constitute your future toolbox for tackling the problem of doing computer vision with small datasets. The relevance of deep learning for small-data problemsYou will sometimes hear that deep learning only works when lots of data is available. This is in part a valid point: one fundamental characteristic of deep learning is that it is able to find interesting features in the training data on its own, without any need for manual feature engineering, and this can only be achieved when lots of training examples are available. This is especially true for problems where the input samples are very high-dimensional, like images.However, what constitutes "lots" of samples is relative -- relative to the size and depth of the network you are trying to train, for starters. It isn't possible to train a convnet to solve a complex problem with just a few tens of samples, but a few hundreds can potentially suffice if the model is small and well-regularized and if the task is simple. Because convnets learn local, translation-invariant features, they are very data-efficient on perceptual problems. Training a convnet from scratch on a very small image dataset will still yield reasonable results despite a relative lack of data, without the need for any custom feature engineering. 
You will see this in action in this section.But what's more, deep learning models are by nature highly repurposable: you can take, say, an image classification or speech-to-text model trained on a large-scale dataset then reuse it on a significantly different problem with only minor changes. Specifically, in the case of computer vision, many pre-trained models (usually trained on the ImageNet dataset) are now publicly available for download and can be used to bootstrap powerful vision models out of very little data. That's what we will do in the next section.For now, let's get started by getting our hands on the data. Downloading the dataThe cats vs. dogs dataset that we will use isn't packaged with Keras. It was made available by Kaggle.com as part of a computer vision competition in late 2013, back when convnets weren't quite mainstream. You can download the original dataset at: `https://www.kaggle.com/c/dogs-vs-cats/data` (you will need to create a Kaggle account if you don't already have one -- don't worry, the process is painless).The pictures are medium-resolution color JPEGs. They look like this: Unsurprisingly, the cats vs. dogs Kaggle competition in 2013 was won by entrants who used convnets. The best entries could achieve up to 95% accuracy. In our own example, we will get fairly close to this accuracy (in the next section), even though we will be training our models on less than 10% of the data that was available to the competitors.This original dataset contains 25,000 images of dogs and cats (12,500 from each class) and is 543MB large (compressed). After downloading and uncompressing it, we will create a new dataset containing three subsets: a training set with 1000 samples of each class, a validation set with 500 samples of each class, and finally a test set with 500 samples of each class.Here are a few lines of code to do this:
###Code
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
# original_dataset_dir = '/content/KerasBookApplication/kaggle_original_data/train'
# The directory where we will
# store our smaller dataset
base_dir = '/content/KerasBookApplication/cats_and_dogs_small'
#os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
#os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
#os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
#os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
#os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
#os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
#os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
#os.mkdir(validation_dogs_dir)
# Directory with our validation cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
#os.mkdir(test_cats_dir)
# Directory with our validation dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
#os.mkdir(test_dogs_dir)
'''
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
'''
###Output
_____no_output_____
###Markdown
As a sanity check, let's count how many pictures we have in each training split (train/validation/test):
###Code
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
###Output
_____no_output_____
###Markdown
So we have indeed 2000 training images, and then 1000 validation images and 1000 test images. In each split, there is the same number of samples from each class: this is a balanced binary classification problem, which means that classification accuracy will be an appropriate measure of success. Building our networkWe've already built a small convnet for MNIST in the previous example, so you should be familiar with them. We will reuse the same general structure: our convnet will be a stack of alternated `Conv2D` (with `relu` activation) and `MaxPooling2D` layers.However, since we are dealing with bigger images and a more complex problem, we will make our network accordingly larger: it will have one more `Conv2D` + `MaxPooling2D` stage. This serves both to augment the capacity of the network, and to further reduce the size of the feature maps, so that they aren't overly large when we reach the `Flatten` layer. Here, since we start from inputs of size 150x150 (a somewhat arbitrary choice), we end up with feature maps of size 7x7 right before the `Flatten` layer.Note that the depth of the feature maps is progressively increasing in the network (from 32 to 128), while the size of the feature maps is decreasing (from 148x148 to 7x7). This is a pattern that you will see in almost all convnets.Since we are attacking a binary classification problem, we are ending the network with a single unit (a `Dense` layer of size 1) and a `sigmoid` activation. This unit will encode the probability that the network is looking at one class or the other.
###Code
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Let's take a look at how the dimensions of the feature maps change with every successive layer:
###Code
model.summary()
###Output
_____no_output_____
###Markdown
For our compilation step, we'll go with the `RMSprop` optimizer as usual. Since we ended our network with a single sigmoid unit, we will use binary crossentropy as our loss (as a reminder, check out the table in Chapter 4, section 5 for a cheatsheet on what loss function to use in various situations).
###Code
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
###Output
_____no_output_____
###Markdown
Data preprocessingAs you already know by now, data should be formatted into appropriately pre-processed floating point tensors before being fed into our network. Currently, our data sits on a drive as JPEG files, so the steps for getting it into our network are roughly:* Read the picture files.* Decode the JPEG content to RGB grids of pixels.* Convert these into floating point tensors.* Rescale the pixel values (between 0 and 255) to the [0, 1] interval (as you know, neural networks prefer to deal with small input values).It may seem a bit daunting, but thankfully Keras has utilities to take care of these steps automatically. Keras has a module with image processing helper tools, located at `keras.preprocessing.image`. In particular, it contains the class `ImageDataGenerator` which allows you to quickly set up Python generators that can automatically turn image files on disk into batches of pre-processed tensors. This is what we will use here.
###Code
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
###Output
_____no_output_____
###Markdown
Let's take a look at the output of one of these generators: it yields batches of 150x150 RGB images (shape `(20, 150, 150, 3)`) and binary labels (shape `(20,)`). 20 is the number of samples in each batch (the batch size). Note that the generator yields these batches indefinitely: it just loops endlessly over the images present in the target folder. For this reason, we need to `break` the iteration loop at some point.
###Code
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
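# Quick check (added for illustration): the rescale=1./255 step should put all pixel
# values of the batch we just drew into the [0, 1] range.
print('pixel value range:', data_batch.min(), '-', data_batch.max())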
###Output
_____no_output_____
###Markdown
Let's fit our model to the data using the generator. We do it using the `fit_generator` method, the equivalent of `fit` for data generators like ours. It expects as its first argument a Python generator that will yield batches of inputs and targets indefinitely, like ours does. Because the data is being generated endlessly, the fitting process needs to know how many samples to draw from the generator before declaring an epoch over. This is the role of the `steps_per_epoch` argument: after having drawn `steps_per_epoch` batches from the generator, i.e. after having run for `steps_per_epoch` gradient descent steps, the fitting process will go to the next epoch. In our case, batches are 20-sample large, so it will take 100 batches until we see our target of 2000 samples.When using `fit_generator`, one may pass a `validation_data` argument, much like with the `fit` method. Importantly, this argument is allowed to be a data generator itself, but it could be a tuple of Numpy arrays as well. If you pass a generator as `validation_data`, then this generator is expected to yield batches of validation data endlessly, and thus you should also specify the `validation_steps` argument, which tells the process how many batches to draw from the validation generator for evaluation.
###Code
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
###Output
_____no_output_____
###Markdown
It is good practice to always save your models after training:
###Code
model.save('cats_and_dogs_small_1.h5')
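# Optional sketch (added for illustration): the saved file can be reloaded later with
# keras.models.load_model, e.g. to evaluate or resume training without refitting.
from keras.models import load_model
restored_model = load_model('cats_and_dogs_small_1.h5')
restored_model.summary()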
###Output
_____no_output_____
###Markdown
Let's plot the loss and accuracy of the model over the training and validation data during training:
###Code
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
These plots are characteristic of overfitting. Our training accuracy increases linearly over time, until it reaches nearly 100%, while our validation accuracy stalls at 70-72%. Our validation loss reaches its minimum after only five epochs then stalls, while the training loss keeps decreasing linearly until it reaches nearly 0.Because we only have relatively few training samples (2000), overfitting is going to be our number one concern. You already know about a number of techniques that can help mitigate overfitting, such as dropout and weight decay (L2 regularization). We are now going to introduce a new one, specific to computer vision, and used almost universally when processing images with deep learning models: *data augmentation*. Using data augmentationOverfitting is caused by having too few samples to learn from, rendering us unable to train a model able to generalize to new data. Given infinite data, our model would be exposed to every possible aspect of the data distribution at hand: we would never overfit. Data augmentation takes the approach of generating more training data from existing training samples, by "augmenting" the samples via a number of random transformations that yield believable-looking images. The goal is that at training time, our model would never see the exact same picture twice. This helps the model get exposed to more aspects of the data and generalize better.In Keras, this can be done by configuring a number of random transformations to be performed on the images read by our `ImageDataGenerator` instance. Let's get started with an example:
###Code
import keras
keras.__version__
import os, shutil
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
# The path to the directory where the original
# dataset was uncompressed
#original_dataset_dir = 'D:/kaggle_original_data/train'
# The directory where we will
# store our smaller dataset
#base_dir = 'D:\cats_and_dogs_small'
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
test_dir = os.path.join(base_dir, 'test')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
# Directory with our validation cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
# Directory with our validation dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
###Output
_____no_output_____
###Markdown
These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:* `rotation_range` is a value in degrees (0-180), a range within which to randomly rotate pictures.* `width_shift` and `height_shift` are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.* `shear_range` is for randomly applying shearing transformations.* `zoom_range` is for randomly zooming inside pictures.* `horizontal_flip` is for randomly flipping half of the images horizontally -- relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures).* `fill_mode` is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.Let's take a look at our augmented images:
###Code
# This is module with image preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(image.array_to_img(batch[0]))
i += 1
if i % 4 == 0:
break
plt.show()
###Output
_____no_output_____
###Markdown
If we train a new network using this data augmentation configuration, our network will never see twice the same input. However, the inputs that it sees are still heavily intercorrelated, since they come from a small number of original images -- we cannot produce new information, we can only remix existing information. As such, this might not be quite enough to completely get rid of overfitting. To further fight overfitting, we will also add a Dropout layer to our model, right before the densely-connected classifier:
###Code
from keras import optimizers
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
###Output
_____no_output_____
###Markdown
Let's train our network using data augmentation and dropout:
###Code
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
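# Saving this model as well (an addition mirroring the earlier save cell; the filename is
# an assumption) so the augmented-data network can be reused without retraining.
model.save('cats_and_dogs_small_2.h5')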
###Output
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/100
100/100 [==============================] - 266s 3s/step - loss: 0.6935 - acc: 0.5041 - val_loss: 0.6819 - val_acc: 0.5146
Epoch 2/100
100/100 [==============================] - 247s 2s/step - loss: 0.6761 - acc: 0.5722 - val_loss: 0.6634 - val_acc: 0.5818
Epoch 3/100
100/100 [==============================] - 249s 2s/step - loss: 0.6588 - acc: 0.6153 - val_loss: 0.6294 - val_acc: 0.6250
Epoch 4/100
100/100 [==============================] - 247s 2s/step - loss: 0.6471 - acc: 0.6209 - val_loss: 0.6213 - val_acc: 0.6418
Epoch 5/100
100/100 [==============================] - 247s 2s/step - loss: 0.6351 - acc: 0.6259 - val_loss: 0.6580 - val_acc: 0.5984
Epoch 6/100
100/100 [==============================] - 248s 2s/step - loss: 0.6212 - acc: 0.6506 - val_loss: 0.6204 - val_acc: 0.6186
Epoch 7/100
100/100 [==============================] - 249s 2s/step - loss: 0.6073 - acc: 0.6644 - val_loss: 0.6205 - val_acc: 0.6383
Epoch 8/100
100/100 [==============================] - 248s 2s/step - loss: 0.6045 - acc: 0.6806 - val_loss: 0.5893 - val_acc: 0.6740
Epoch 9/100
100/100 [==============================] - 248s 2s/step - loss: 0.5896 - acc: 0.6875 - val_loss: 0.6914 - val_acc: 0.6166
Epoch 10/100
100/100 [==============================] - 247s 2s/step - loss: 0.5850 - acc: 0.6906 - val_loss: 0.5504 - val_acc: 0.7126
Epoch 11/100
100/100 [==============================] - 246s 2s/step - loss: 0.5748 - acc: 0.6959 - val_loss: 0.5566 - val_acc: 0.6972
Epoch 12/100
100/100 [==============================] - 247s 2s/step - loss: 0.5764 - acc: 0.6934 - val_loss: 0.5533 - val_acc: 0.7138
Epoch 13/100
100/100 [==============================] - 247s 2s/step - loss: 0.5660 - acc: 0.6991 - val_loss: 0.5432 - val_acc: 0.7255
Epoch 14/100
100/100 [==============================] - 247s 2s/step - loss: 0.5506 - acc: 0.7187 - val_loss: 0.5236 - val_acc: 0.7284
Epoch 15/100
100/100 [==============================] - 251s 3s/step - loss: 0.5543 - acc: 0.7212 - val_loss: 0.5537 - val_acc: 0.7081
Epoch 16/100
100/100 [==============================] - 244s 2s/step - loss: 0.5501 - acc: 0.7150 - val_loss: 0.5433 - val_acc: 0.7094
Epoch 17/100
100/100 [==============================] - 244s 2s/step - loss: 0.5502 - acc: 0.7197 - val_loss: 0.5219 - val_acc: 0.7329
Epoch 18/100
100/100 [==============================] - 249s 2s/step - loss: 0.5397 - acc: 0.7344 - val_loss: 0.5260 - val_acc: 0.7326
Epoch 19/100
100/100 [==============================] - 246s 2s/step - loss: 0.5362 - acc: 0.7281 - val_loss: 0.6347 - val_acc: 0.6662
Epoch 20/100
100/100 [==============================] - 244s 2s/step - loss: 0.5277 - acc: 0.7344 - val_loss: 0.4941 - val_acc: 0.7642
Epoch 21/100
100/100 [==============================] - 246s 2s/step - loss: 0.5303 - acc: 0.7381 - val_loss: 0.5494 - val_acc: 0.7214
Epoch 22/100
100/100 [==============================] - 245s 2s/step - loss: 0.5301 - acc: 0.7341 - val_loss: 0.5419 - val_acc: 0.7113
Epoch 23/100
100/100 [==============================] - 248s 2s/step - loss: 0.5142 - acc: 0.7428 - val_loss: 0.5238 - val_acc: 0.7316
Epoch 24/100
100/100 [==============================] - 245s 2s/step - loss: 0.5101 - acc: 0.7419 - val_loss: 0.5043 - val_acc: 0.7332
Epoch 25/100
100/100 [==============================] - 245s 2s/step - loss: 0.5281 - acc: 0.7322 - val_loss: 0.4949 - val_acc: 0.7532
Epoch 26/100
100/100 [==============================] - 248s 2s/step - loss: 0.5116 - acc: 0.7469 - val_loss: 0.5263 - val_acc: 0.7221
Epoch 27/100
100/100 [==============================] - 248s 2s/step - loss: 0.5098 - acc: 0.7453 - val_loss: 0.4634 - val_acc: 0.7771
Epoch 28/100
100/100 [==============================] - 247s 2s/step - loss: 0.4971 - acc: 0.7509 - val_loss: 0.4725 - val_acc: 0.7652
Epoch 29/100
100/100 [==============================] - 247s 2s/step - loss: 0.5023 - acc: 0.7500 - val_loss: 0.5573 - val_acc: 0.7255
Epoch 30/100
100/100 [==============================] - 248s 2s/step - loss: 0.4980 - acc: 0.7562 - val_loss: 0.4806 - val_acc: 0.7589
Epoch 31/100
100/100 [==============================] - 246s 2s/step - loss: 0.4949 - acc: 0.7619 - val_loss: 0.4768 - val_acc: 0.7590
Epoch 32/100
100/100 [==============================] - 245s 2s/step - loss: 0.4867 - acc: 0.7666 - val_loss: 0.5155 - val_acc: 0.7423
Epoch 33/100
100/100 [==============================] - 247s 2s/step - loss: 0.4873 - acc: 0.7606 - val_loss: 0.4927 - val_acc: 0.7608
Epoch 34/100
100/100 [==============================] - 244s 2s/step - loss: 0.4798 - acc: 0.7644 - val_loss: 0.5172 - val_acc: 0.7564
Epoch 35/100
100/100 [==============================] - 245s 2s/step - loss: 0.4727 - acc: 0.7666 - val_loss: 0.4527 - val_acc: 0.7868
Epoch 36/100
100/100 [==============================] - 246s 2s/step - loss: 0.4698 - acc: 0.7700 - val_loss: 0.4987 - val_acc: 0.7590
Epoch 37/100
100/100 [==============================] - 245s 2s/step - loss: 0.4737 - acc: 0.7703 - val_loss: 0.4722 - val_acc: 0.7640
Epoch 38/100
100/100 [==============================] - 244s 2s/step - loss: 0.4756 - acc: 0.7669 - val_loss: 0.4599 - val_acc: 0.7919
Epoch 39/100
100/100 [==============================] - 246s 2s/step - loss: 0.4727 - acc: 0.7769 - val_loss: 0.4800 - val_acc: 0.7525
Epoch 40/100
100/100 [==============================] - 247s 2s/step - loss: 0.4645 - acc: 0.7750 - val_loss: 0.4855 - val_acc: 0.7629
Epoch 41/100
100/100 [==============================] - 246s 2s/step - loss: 0.4706 - acc: 0.7763 - val_loss: 0.4516 - val_acc: 0.7874
Epoch 42/100
100/100 [==============================] - 252s 3s/step - loss: 0.4472 - acc: 0.7959 - val_loss: 0.4728 - val_acc: 0.7684
Epoch 43/100
100/100 [==============================] - 250s 2s/step - loss: 0.4460 - acc: 0.7888 - val_loss: 0.4674 - val_acc: 0.7622
Epoch 44/100
100/100 [==============================] - 251s 3s/step - loss: 0.4542 - acc: 0.7869 - val_loss: 0.5470 - val_acc: 0.7500
Epoch 45/100
100/100 [==============================] - 249s 2s/step - loss: 0.4506 - acc: 0.7841 - val_loss: 0.5237 - val_acc: 0.7610
Epoch 46/100
100/100 [==============================] - 251s 3s/step - loss: 0.4493 - acc: 0.7819 - val_loss: 0.5387 - val_acc: 0.7335
Epoch 47/100
100/100 [==============================] - 250s 3s/step - loss: 0.4458 - acc: 0.7931 - val_loss: 0.4678 - val_acc: 0.7648
Epoch 48/100
100/100 [==============================] - 247s 2s/step - loss: 0.4412 - acc: 0.7878 - val_loss: 0.4687 - val_acc: 0.7758
Epoch 49/100
100/100 [==============================] - 250s 2s/step - loss: 0.4328 - acc: 0.8012 - val_loss: 0.4692 - val_acc: 0.7735
Epoch 50/100
100/100 [==============================] - 251s 3s/step - loss: 0.4402 - acc: 0.7888 - val_loss: 0.4637 - val_acc: 0.7848
Epoch 51/100
100/100 [==============================] - 249s 2s/step - loss: 0.4476 - acc: 0.7859 - val_loss: 0.4361 - val_acc: 0.7925
Epoch 52/100
100/100 [==============================] - 249s 2s/step - loss: 0.4414 - acc: 0.7910 - val_loss: 0.4391 - val_acc: 0.8009
Epoch 53/100
100/100 [==============================] - 252s 3s/step - loss: 0.4388 - acc: 0.7963 - val_loss: 0.4531 - val_acc: 0.7887
Epoch 54/100
100/100 [==============================] - 250s 3s/step - loss: 0.4144 - acc: 0.8078 - val_loss: 0.4510 - val_acc: 0.7899
Epoch 55/100
100/100 [==============================] - 252s 3s/step - loss: 0.4425 - acc: 0.7931 - val_loss: 0.5142 - val_acc: 0.7627
Epoch 56/100
100/100 [==============================] - 251s 3s/step - loss: 0.4224 - acc: 0.8047 - val_loss: 0.4366 - val_acc: 0.7977
Epoch 57/100
100/100 [==============================] - 253s 3s/step - loss: 0.4293 - acc: 0.8025 - val_loss: 0.4692 - val_acc: 0.7777
Epoch 58/100
100/100 [==============================] - 250s 2s/step - loss: 0.4221 - acc: 0.7994 - val_loss: 0.5069 - val_acc: 0.7722
Epoch 59/100
100/100 [==============================] - 250s 3s/step - loss: 0.4096 - acc: 0.8169 - val_loss: 0.4243 - val_acc: 0.7932
Epoch 60/100
100/100 [==============================] - 248s 2s/step - loss: 0.4276 - acc: 0.7938 - val_loss: 0.4769 - val_acc: 0.7938
Epoch 61/100
100/100 [==============================] - 248s 2s/step - loss: 0.4176 - acc: 0.8050 - val_loss: 0.4739 - val_acc: 0.7970
Epoch 62/100
100/100 [==============================] - 250s 2s/step - loss: 0.3962 - acc: 0.8163 - val_loss: 0.4470 - val_acc: 0.7925
Epoch 63/100
100/100 [==============================] - 248s 2s/step - loss: 0.4166 - acc: 0.8056 - val_loss: 0.4288 - val_acc: 0.8086
Epoch 64/100
100/100 [==============================] - 247s 2s/step - loss: 0.4057 - acc: 0.8175 - val_loss: 0.4501 - val_acc: 0.7880
Epoch 65/100
100/100 [==============================] - 248s 2s/step - loss: 0.4043 - acc: 0.8138 - val_loss: 0.4248 - val_acc: 0.8077
Epoch 66/100
100/100 [==============================] - 246s 2s/step - loss: 0.4142 - acc: 0.8106 - val_loss: 0.5105 - val_acc: 0.7571
Epoch 67/100
100/100 [==============================] - 251s 3s/step - loss: 0.4039 - acc: 0.8141 - val_loss: 0.4281 - val_acc: 0.7944
Epoch 68/100
100/100 [==============================] - 248s 2s/step - loss: 0.3973 - acc: 0.8200 - val_loss: 0.4388 - val_acc: 0.8015
Epoch 69/100
100/100 [==============================] - 250s 3s/step - loss: 0.3945 - acc: 0.8238 - val_loss: 0.4082 - val_acc: 0.8198
Epoch 70/100
100/100 [==============================] - 252s 3s/step - loss: 0.3985 - acc: 0.8247 - val_loss: 0.4458 - val_acc: 0.7964
Epoch 71/100
100/100 [==============================] - 254s 3s/step - loss: 0.4009 - acc: 0.8131 - val_loss: 0.4552 - val_acc: 0.8020
Epoch 72/100
100/100 [==============================] - 252s 3s/step - loss: 0.3982 - acc: 0.8125 - val_loss: 0.4419 - val_acc: 0.7977
Epoch 73/100
100/100 [==============================] - 250s 2s/step - loss: 0.3857 - acc: 0.8197 - val_loss: 0.4454 - val_acc: 0.7964
Epoch 74/100
100/100 [==============================] - 250s 3s/step - loss: 0.3917 - acc: 0.8244 - val_loss: 0.5945 - val_acc: 0.7652
Epoch 75/100
100/100 [==============================] - 252s 3s/step - loss: 0.3928 - acc: 0.8247 - val_loss: 0.4436 - val_acc: 0.7912
Epoch 76/100
100/100 [==============================] - 251s 3s/step - loss: 0.3749 - acc: 0.8278 - val_loss: 0.5832 - val_acc: 0.7456
Epoch 77/100
100/100 [==============================] - 251s 3s/step - loss: 0.3939 - acc: 0.8272 - val_loss: 0.4629 - val_acc: 0.7912
Epoch 78/100
100/100 [==============================] - 252s 3s/step - loss: 0.3795 - acc: 0.8306 - val_loss: 0.4035 - val_acc: 0.8350
Epoch 79/100
100/100 [==============================] - 252s 3s/step - loss: 0.3894 - acc: 0.8266 - val_loss: 0.4676 - val_acc: 0.7796
Epoch 80/100
100/100 [==============================] - 249s 2s/step - loss: 0.3689 - acc: 0.8353 - val_loss: 0.4263 - val_acc: 0.8119
Epoch 81/100
100/100 [==============================] - 252s 3s/step - loss: 0.3758 - acc: 0.8391 - val_loss: 0.4286 - val_acc: 0.8192
Epoch 82/100
100/100 [==============================] - 252s 3s/step - loss: 0.3761 - acc: 0.8297 - val_loss: 0.4309 - val_acc: 0.8235
Epoch 83/100
100/100 [==============================] - 252s 3s/step - loss: 0.3719 - acc: 0.8347 - val_loss: 0.4521 - val_acc: 0.8115
Epoch 84/100
100/100 [==============================] - 251s 3s/step - loss: 0.3715 - acc: 0.8375 - val_loss: 0.4751 - val_acc: 0.7945
Epoch 85/100
100/100 [==============================] - 252s 3s/step - loss: 0.3859 - acc: 0.8187 - val_loss: 0.4186 - val_acc: 0.8115
Epoch 86/100
100/100 [==============================] - 252s 3s/step - loss: 0.3685 - acc: 0.8334 - val_loss: 0.5263 - val_acc: 0.7732
Epoch 87/100
100/100 [==============================] - 252s 3s/step - loss: 0.3616 - acc: 0.8416 - val_loss: 0.4270 - val_acc: 0.8179
Epoch 88/100
100/100 [==============================] - 253s 3s/step - loss: 0.3647 - acc: 0.8397 - val_loss: 0.3993 - val_acc: 0.8164
Epoch 89/100
100/100 [==============================] - 251s 3s/step - loss: 0.3733 - acc: 0.8325 - val_loss: 0.4176 - val_acc: 0.8106
Epoch 90/100
100/100 [==============================] - 254s 3s/step - loss: 0.3662 - acc: 0.8372 - val_loss: 0.5454 - val_acc: 0.7557
Epoch 91/100
100/100 [==============================] - 252s 3s/step - loss: 0.3611 - acc: 0.8441 - val_loss: 0.4190 - val_acc: 0.8189
Epoch 92/100
100/100 [==============================] - 253s 3s/step - loss: 0.3584 - acc: 0.8397 - val_loss: 0.4292 - val_acc: 0.8046
Epoch 93/100
100/100 [==============================] - 252s 3s/step - loss: 0.3741 - acc: 0.8272 - val_loss: 0.4161 - val_acc: 0.8164
Epoch 94/100
100/100 [==============================] - 252s 3s/step - loss: 0.3494 - acc: 0.8450 - val_loss: 0.4337 - val_acc: 0.8268
Epoch 95/100
100/100 [==============================] - 253s 3s/step - loss: 0.3528 - acc: 0.8453 - val_loss: 0.5829 - val_acc: 0.7448
Epoch 96/100
100/100 [==============================] - 252s 3s/step - loss: 0.3541 - acc: 0.8400 - val_loss: 0.5641 - val_acc: 0.7758
Epoch 97/100
100/100 [==============================] - 251s 3s/step - loss: 0.3449 - acc: 0.8466 - val_loss: 0.4646 - val_acc: 0.8135
Epoch 98/100
100/100 [==============================] - 255s 3s/step - loss: 0.3456 - acc: 0.8528 - val_loss: 0.4315 - val_acc: 0.8080
Epoch 99/100
100/100 [==============================] - 253s 3s/step - loss: 0.3425 - acc: 0.8516 - val_loss: 0.4561 - val_acc: 0.7906
Epoch 100/100
100/100 [==============================] - 254s 3s/step - loss: 0.3437 - acc: 0.8441 - val_loss: 0.4466 - val_acc: 0.8138
###Markdown
Let's save our model -- we will be using it in the section on convnet visualization.
###Code
model.save('cats_and_dogs_small_2.h5')
###Output
_____no_output_____
###Markdown
Let's plot our results again:
###Code
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
examples/sketch_rnn/magenta_sketchrnn.ipynb | ###Markdown
 Download model_config.json and edit 1 -> true and 0 -> false. Next, upload it and move it into place.
###Code
!ls /tmp/sketch_rnn/models/aaron_sheep/layer_norm
from google.colab import files
files.download('/tmp/sketch_rnn/models/aaron_sheep/layer_norm/model_config.json')
uploaded = files.upload()
!mv model_config.json /tmp/sketch_rnn/models/aaron_sheep/layer_norm/model_config.json
[train_set, valid_set, test_set, hps_model, eval_hps_model, sample_hps_model] = load_env(data_dir, model_dir)
# construct the sketch-rnn model here:
reset_graph()
model = Model(hps_model)
eval_model = Model(eval_hps_model, reuse=True)
sample_model = Model(sample_hps_model, reuse=True)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
# loads the weights from checkpoint into our model
load_checkpoint(sess, model_dir)
def encode(input_strokes):
strokes = to_big_strokes(input_strokes).tolist()
strokes.insert(0, [0, 0, 1, 0, 0])
seq_len = [len(input_strokes)]
draw_strokes(to_normal_strokes(np.array(strokes)))
return sess.run(eval_model.batch_z, feed_dict={eval_model.input_data: [strokes], eval_model.sequence_lengths: seq_len})[0]
def decode(z_input=None, draw_mode=True, temperature=0.1, factor=0.2):
z = None
if z_input is not None:
z = [z_input]
sample_strokes, m = sample(sess, sample_model, seq_len=eval_model.hps.max_seq_len, temperature=temperature, z=z)
strokes = to_normal_strokes(sample_strokes)
if draw_mode:
draw_strokes(strokes, factor)
return strokes
# get a sample drawing from the test set, and render it to .svg
stroke = test_set.random_sample()
draw_strokes(stroke)
z = encode(stroke)
_ = decode(z, temperature=0.8) # convert z back to drawing at temperature of 0.8
stroke_list = []
for i in range(10):
stroke_list.append([decode(z, draw_mode=False, temperature=0.1*i+0.1), [0, i]])
stroke_grid = make_grid_svg(stroke_list)
draw_strokes(stroke_grid)
# get a sample drawing from the test set, and render it to .svg
z0 = z
_ = decode(z0)
stroke = test_set.random_sample()
z1 = encode(stroke)
_ = decode(z1)
z_list = [] # interpolate spherically between z0 and z1
N = 10
for t in np.linspace(0, 1, N):
z_list.append(slerp(z0, z1, t))
# for every latent vector in z_list, sample a vector image
reconstructions = []
for i in range(N):
reconstructions.append([decode(z_list[i], draw_mode=False), [0, i]])
stroke_grid = make_grid_svg(reconstructions)
draw_strokes(stroke_grid)
###Output
_____no_output_____ |
MLEveryday6.ipynb | ###Markdown
 **ML** **day6** > Today's goal is Pandas
###Code
import urllib.request # still using Titanic.csv
url="https://raw.githubusercontent.com/marongkang/datasets/main/titanic.csv"
response=urllib.request.urlopen(url)
page=response.read()
f=open('titanic.csv','wb')
f.write(page)
!ls -1
# Fetch the Titanic dataset from the web
#pandas
import pandas as pd
'''
def read_csv(filepath_or_buffer: FilePathOrBuffer, sep=',', delimiter=None,
header='infer', names=None, index_col=None, usecols=None, squeeze=False,
prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None,
true_values=None, false_values=None, skipinitialspace=False, skiprows=None,
skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True,
verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False,
keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False,
chunksize=None, compression='infer', thousands=None, decimal: str='.', lineterminator=None,
quotechar='"', quoting=csv.QUOTE_MINIMAL, doublequote=True, escapechar=None, comment=None,
encoding=None, dialect=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False,
low_memory=_c_parser_defaults['low_memory'], memory_map=False, float_precision=None)
read_csv accepts a large number of parameters
'''
dataframe=pd.read_csv('titanic.csv',header=0) # header=0 means row 0 is the header row
dataframe.head() # show the first 5 rows (the number can be changed)
#dataframe.head(n=10) # show the first 10 rows
# The dataset has been loaded successfully; next, data analysis
###Output
_____no_output_____
###Markdown
1. pclass: class of travel 2. name: full name of the passenger 3. sex: gender 4. age: numerical age 5. sibsp: of siblings/spouse aboard 6. parch: number of parents/child aboard 7. ticket: ticket number 8. fare: cost of the ticket 9. cabin: location of room 10. emarked: port that the passenger embarked at (C - Cherbourg, S - Southampton, Q= Queenstown) 11. survived: survial metric (0 - died, 1 - survived)
###Code
# Descriptive statistics
dataframe.describe()
# include='all' describes all columns; otherwise only numeric columns are described
#dataframe.describe(include='all')
###Output
_____no_output_____
###Markdown
 * count: how many valid (non-missing) values the column has* unique: how many distinct values there are* mean: the mean* std: the standard deviation* min: the minimum value* 25%: the first quartile* 50%: the median (second quartile)* 75%: the third quartile* max: the maximum value
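As a quick cross-check (using the same dataframe), the quartile rows of `describe()` can be reproduced directly with `quantile()`:
###Code
# Illustrative check: the 25%/50%/75% rows reported by describe() are simply
# the corresponding quantiles of each numeric column.
dataframe['Age'].quantile([0.25, 0.5, 0.75])
###Output
_____no_output_____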
###Code
# Access data by column header
print(dataframe['Age'][0:5])
print(dataframe['Ticket'][0:5])
# Histogram
dataframe['Age'].hist()
# Unique values
dataframe['Embarked'].unique()
# Select data
dataframe['Name'].head()
# Filter rows where Age == 35
dataframe[dataframe['Age']==35.0]
# Sorting
'''
def sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort',
na_position='last', ignore_index=False, key: ValueKeyFunc=None)
'''
dataframe.sort_values('Age',ascending=False).head(n=10) # ascending=True for ascending order, False for descending
# Aggregation and grouping
group=dataframe.groupby('Survived')
print(type(group))
# np.shape(list(group)) comes out as (2, 2) because list(group) is a list of 2 (key, sub-DataFrame) tuples, one per 'Survived' value
glist=list(group)
import numpy as np
print(np.shape(glist))
group.mean()
###Output
<class 'pandas.core.groupby.generic.DataFrameGroupBy'>
(2, 2)
|
notebooks/init.ipynb | ###Markdown
Generic Initialization
###Code
%matplotlib inline
import os
from pathlib import Path
import numpy as np
import datetime
import great_expectations as ge
# Pandas
import pandas as pd
import pandas_profiling
pd.set_option('display.max_rows',10)
pd.set_option('display.max_info_columns',20)
# IPython
from IPython.display import display, Markdown
from IPython.display import Image
# Display multiple outputs per input cell.
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# http://stackoverflow.com/questions/21971449/how-do-i-increase-the-cell-width-of-the-jupyter-ipython-notebook-in-my-browser
from IPython.core.display import display, Markdown, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
# Load external packages automatically anytime the code is changed
%load_ext autoreload
%autoreload 2
# Import Matplotlib
import matplotlib.pyplot as plt
# Import Seaborn
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib
font = {'family' : 'arial',
'weight' : 'bold',
'size' : 22}
matplotlib.rc('font', **font)
###Output
_____no_output_____
###Markdown
Project Initialization
###Code
from data.data import ExtractData, TransformData
from visualization.visualize import importance_plotting
from models import predict_model as pm
from zeetle.data import eda
from zeetle.visualization import visualize as zviz
from sklearn.model_selection import train_test_split
from sklearn.model_selection import (cross_val_score,
cross_val_score, cross_validate,
)
from yellowbrick.classifier import ConfusionMatrix
from sklearn import metrics
import matplotlib.pyplot as plt
import matplotlib
RANDOM_STATE = 42
###Output
_____no_output_____ |
ssd_keras/ssd7_training.ipynb | ###Markdown
SSD7 Training TutorialThis tutorial explains how to train an SSD7 on the Udacity road traffic datasets, and just generally how to use this SSD implementation.Disclaimer about SSD7:As you will see below, training SSD7 on the aforementioned datasets yields alright results, but I'd like to emphasize that SSD7 is not a carefully optimized network architecture. The idea was just to build a low-complexity network that is fast (roughly 127 FPS or more than 3 times as fast as SSD300 on a GTX 1070) for testing purposes. Would slightly different anchor box scaling factors or a slightly different number of filters in individual convolution layers make SSD7 significantly better at similar complexity? I don't know, I haven't tried.
###Code
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau, TerminateOnNaN, CSVLogger
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd7 import build_model
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
from data_generator.data_augmentation_chain_variable_input_size import DataAugmentationVariableInputSize
from data_generator.data_augmentation_chain_constant_input_size import DataAugmentationConstantInputSize
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation
%matplotlib inline
###Output
Using TensorFlow backend.
###Markdown
 1. Set the model configuration parametersThe cell below sets a number of parameters that define the model configuration. The parameters set here are being used both by the `build_model()` function that builds the model as well as further down by the constructor for the `SSDInputEncoder` object that is needed to match ground truth and anchor boxes during the training.Here are just some comments on a few of the parameters, read the documentation for more details:* Set the height, width, and number of color channels to whatever you want the model to accept as image input. If your input images have a different size than you define as the model input here, or if your images have non-uniform size, then you must use the data generator's image transformations (resizing and/or cropping) to convert your images to the model input size during training, so that they end up having the required input size before they are fed to the model. The SSD300 training tutorial uses the same image pre-processing and data augmentation as the original Caffe implementation, so take a look at that to see one possibility of how to deal with non-uniform-size images.* The number of classes is the number of positive classes in your dataset, e.g. 20 for Pascal VOC or 80 for MS COCO. Class ID 0 must always be reserved for the background class, i.e. your positive classes must have positive integers as their IDs in your dataset.* The `mode` argument in the `build_model()` function determines whether the model will be built with or without a `DecodeDetections` layer as its last layer. In 'training' mode, the model outputs the raw prediction tensor, while in 'inference' and 'inference_fast' modes, the raw predictions are being decoded into absolute coordinates and filtered via confidence thresholding, non-maximum suppression, and top-k filtering. The difference between the latter two modes is that 'inference' uses the decoding procedure of the original Caffe implementation, while 'inference_fast' uses a faster, but possibly less accurate decoding procedure.* The reason why the list of scaling factors has 5 elements even though there are only 4 predictor layers in SSD7 is that the last scaling factor is used for the second aspect-ratio-1 box of the last predictor layer. Refer to the documentation for details.* `build_model()` and `SSDInputEncoder` have two arguments for the anchor box aspect ratios: `aspect_ratios_global` and `aspect_ratios_per_layer`. You can use either of the two, you don't need to set both. If you use `aspect_ratios_global`, then you pass one list of aspect ratios and these aspect ratios will be used for all predictor layers. Every aspect ratio you want to include must be listed once and only once. If you use `aspect_ratios_per_layer`, then you pass a nested list containing lists of aspect ratios for each individual predictor layer. This is what the SSD300 training tutorial does. It's your design choice whether all predictor layers should use the same aspect ratios or whether you think that for your dataset, certain aspect ratios are only necessary for some predictor layers but not for others. Of course more aspect ratios mean more predicted boxes, which in turn means increased computational complexity.* If `two_boxes_for_ar1 == True`, then each predictor layer will predict two boxes with aspect ratio one, one a bit smaller, the other one a bit larger.* If `clip_boxes == True`, then the anchor boxes will be clipped so that they lie entirely within the image boundaries. 
 It is recommended not to clip the boxes. The anchor boxes form the reference frame for the localization prediction. This reference frame should be the same at every spatial position.* In the matching process during the training, the anchor box offsets are being divided by the variances. Leaving them at 1.0 for each of the four box coordinates means that they have no effect. Setting them to less than 1.0 spreads the imagined anchor box offset distribution for the respective box coordinate.* `normalize_coords` converts all coordinates from absolute coordinates to coordinates that are relative to the image height and width. This setting has no effect on the outcome of the training.
###Code
img_height = 720 # Height of the input images
img_width = 1280 # Width of the input images
img_channels = 3 # Number of color channels of the input images
intensity_mean = 127.5 # Set this to your preference (maybe `None`). The current settings transform the input pixel values to the interval `[-1,1]`.
intensity_range = 127.5 # Set this to your preference (maybe `None`). The current settings transform the input pixel values to the interval `[-1,1]`.
n_classes = 5 # Number of positive classes
scales = [0.08, 0.16, 0.32, 0.64, 0.96] # An explicit list of anchor box scaling factors. If this is passed, it will override `min_scale` and `max_scale`.
aspect_ratios = [0.5, 1.0, 2.0] # The list of aspect ratios for the anchor boxes
two_boxes_for_ar1 = True # Whether or not you want to generate two anchor boxes for aspect ratio 1
steps = None # In case you'd like to set the step sizes for the anchor box grids manually; not recommended
offsets = None # In case you'd like to set the offsets for the anchor box grids manually; not recommended
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
variances = [1.0, 1.0, 1.0, 1.0] # The list of variances by which the encoded target coordinates are scaled
normalize_coords = True # Whether or not the model is supposed to use coordinates relative to the image size
###Output
_____no_output_____
###Markdown
2. Build or load the modelYou will want to execute either of the two code cells in the subsequent two sub-sections, not both. 2.1 Create a new modelIf you want to create a new model, this is the relevant section for you. If you want to load a previously saved model, skip ahead to section 2.2.The code cell below does the following things:1. It calls the function `build_model()` to build the model.2. It optionally loads some weights into the model.3. It then compiles the model for the training. In order to do so, we're defining an optimizer (Adam) and a loss function (SSDLoss) to be passed to the `compile()` method.`SSDLoss` is a custom Keras loss function that implements the multi-task log loss for classification and smooth L1 loss for localization. `neg_pos_ratio` and `alpha` are set as in the paper.
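As a rough illustration of the localization part (a sketch only, not the repository's `SSDLoss` code in keras_loss_function/keras_ssd_loss.py), smooth L1 is quadratic for small errors and linear for large ones:
###Code
# Sketch for illustration only -- the actual loss is implemented by the SSDLoss class.
# Smooth L1 behaves like 0.5*x^2 for |x| < 1 and like |x| - 0.5 otherwise, which
# keeps the box regression robust to outliers.
import tensorflow as tf

def smooth_l1_sketch(y_true, y_pred):
    abs_diff = tf.abs(y_true - y_pred)
    return tf.where(tf.less(abs_diff, 1.0), 0.5 * abs_diff**2, abs_diff - 0.5)
###Output
_____no_output_____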
###Code
# 1: Build the Keras model
K.clear_session() # Clear previous models from memory.
model = build_model(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_global=aspect_ratios,
aspect_ratios_per_layer=None,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=intensity_mean,
divide_by_stddev=intensity_range)
# 2: Optional: Load some weights
#model.load_weights('./ssd7_weights.h5', by_name=True)
# 3: Instantiate an Adam optimizer and the SSD loss function and compile the model
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
###Output
_____no_output_____
###Markdown
2.2 Load a saved modelIf you have previously created and saved a model and would now like to load it, simply execute the next code cell. The only thing you need to do is to set the path to the saved model HDF5 file that you would like to load.The SSD model contains custom objects: Neither the loss function, nor the anchor box or detection decoding layer types are contained in the Keras core library, so we need to provide them to the model loader.This next code cell assumes that you want to load a model that was created in 'training' mode. If you want to load a model that was created in 'inference' or 'inference_fast' mode, you'll have to add the `DecodeDetections` or `DecodeDetectionsFast` layer type to the `custom_objects` dictionary below.
###Code
# TODO: Set the path to the `.h5` file of the model to be loaded.
model_path = 'ssd7.h5'
# We need to create an SSDLoss object in order to pass that to the model loader.
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
K.clear_session() # Clear previous models from memory.
model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes,
'compute_loss': ssd_loss.compute_loss})
###Output
_____no_output_____
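###Markdown
For reference, loading a model that was saved in 'inference' mode would only require extending `custom_objects`; a sketch (the file name here is just a placeholder):
###Code
# Sketch only: a model built in 'inference' mode additionally contains the
# DecodeDetections layer, so its class must also be passed to custom_objects.
# model = load_model('path_to_inference_mode_model.h5',
#                    custom_objects={'AnchorBoxes': AnchorBoxes,
#                                    'DecodeDetections': DecodeDetections,
#                                    'compute_loss': ssd_loss.compute_loss})
###Output
_____no_output_____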
###Markdown
 3. Set up the data generators for the trainingThe code cells below set up data generators for the training and validation datasets to train the model. You will have to set the file paths to your dataset. Depending on the annotations format of your dataset, you might also have to switch from the CSV parser to the XML or JSON parser, or you might have to write a new parser method in the `DataGenerator` class that can handle whatever format your annotations are in. The [README](https://github.com/pierluigiferrari/ssd_keras/blob/master/README.md) of this repository provides a summary of the design of the `DataGenerator`, which should help you in case you need to write a new parser or adapt one of the existing parsers to your needs.Note that the generator provides two options to speed up the training. By default, it loads the individual images for a batch from disk. This has two disadvantages. First, for compressed image formats like JPG, this is a huge computational waste, because every image needs to be decompressed again and again every time it is being loaded. Second, the images on disk are likely not stored in a contiguous block of memory, which may also slow down the loading process. The first option that `DataGenerator` provides to deal with this is to load the entire dataset into memory, which reduces the access time for any image to a negligible amount, but of course this is only an option if you have enough free memory to hold the whole dataset. As a second option, `DataGenerator` provides the possibility to convert the dataset into a single HDF5 file. This HDF5 file stores the images as uncompressed arrays in a contiguous block of memory, which dramatically speeds up the loading time. It's not as good as having the images in memory, but it's a lot better than the default option of loading them from their compressed JPG state every time they are needed. Of course such an HDF5 dataset may require significantly more disk space than the compressed images. You can later load these HDF5 datasets directly in the constructor.Set the batch size to your preference and to what your GPU memory allows, it's not the most important hyperparameter. The Caffe implementation uses a batch size of 32, but smaller batch sizes work fine, too.The `DataGenerator` itself is fairly generic. It doesn't contain any data augmentation or bounding box encoding logic. Instead, you pass a list of image transformations and an encoder for the bounding boxes in the `transformations` and `label_encoder` arguments of the data generator's `generate()` method, and the data generator will then apply those given transformations and the encoding to the data. Everything here is preset already, but if you'd like to learn more about the data generator and its data augmentation capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.The image processing chain defined further down in the object named `data_augmentation_chain` is just one possibility of what a data augmentation pipeline for uniform-size images could look like. Feel free to put together other image processing chains, you can use the `DataAugmentationConstantInputSize` class as a template. Or you could use the original SSD data augmentation pipeline by instantiating an `SSDDataAugmentation` object and passing that to the generator instead. 
This procedure is not exactly efficient, but it evidently produces good results on multiple datasets.An `SSDInputEncoder` object, `ssd_input_encoder`, is passed to both the training and validation generators. As explained above, it matches the ground truth labels to the model's anchor boxes and encodes the box coordinates into the format that the model needs. Note:The example setup below was used to train SSD7 on two road traffic datasets released by [Udacity](https://github.com/udacity/self-driving-car/tree/master/annotations) with around 20,000 images in total and 5 object classes (car, truck, pedestrian, bicyclist, traffic light), although the vast majority of the objects are cars. The original datasets have a constant image size of 1200x1920 RGB. I consolidated the two datasets, removed a few bad samples (although there are probably many more), and resized the images to 300x480 RGB, i.e. to one sixteenth of the original image size. In case you'd like to train a model on the same dataset, you can download the consolidated and resized dataset I used [here](https://drive.google.com/open?id=1tfBFavijh4UTG4cGqIKwhcklLXUDuY0D) (about 900 MB).
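If the optional HDF5 datasets mentioned above have already been created, the generators below could be pointed at them directly; a sketch (the file names match the commented-out `create_hdf5_dataset()` calls further down):
###Code
# Sketch: load pre-built HDF5 datasets instead of decoding JPGs on the fly.
# Only do this if you have actually created these files beforehand.
# train_dataset = DataGenerator(load_images_into_memory=False,
#                               hdf5_dataset_path='dataset_udacity_traffic_train.h5')
# val_dataset   = DataGenerator(load_images_into_memory=False,
#                               hdf5_dataset_path='dataset_udacity_traffic_val.h5')
###Output
_____no_output_____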
###Code
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets.
# TODO: Set the paths to your dataset here.
# Images
images_dir = 'training_images'
# [Ajinkya]: add training and validation set for xml
training_set_filename = 'training_set_filename.txt'
validation_set_filename = 'validation_set_filename.txt'
annotation_dir = 'training_annotations'
# Ground truth
# train_labels_filename = '../../datasets/udacity_driving_datasets/labels_train.csv'
# val_labels_filename = '../../datasets/udacity_driving_datasets/labels_val.csv'
# train_dataset.parse_csv(images_dir=images_dir,
# labels_filename=train_labels_filename,
# input_format=['image_name', 'xmin', 'xmax', 'ymin', 'ymax', 'class_id'], # This is the order of the first six columns in the CSV file that contains the labels for your dataset. If your labels are in XML format, maybe the XML parser will be helpful, check the documentation.
# include_classes='all')
# val_dataset.parse_csv(images_dir=images_dir,
# labels_filename=val_labels_filename,
# input_format=['image_name', 'xmin', 'xmax', 'ymin', 'ymax', 'class_id'],
# include_classes='all')
# [Ajinkya]: Using the XML parser instead
train_dataset.parse_xml(images_dirs=images_dir,
image_set_filenames=training_set_filename,
annotations_dirs=annotation_dir,
classes=['background', 'cone'],
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False,
verbose=True)
val_dataset.parse_xml(images_dirs=images_dir,
image_set_filenames=validation_set_filename,
annotations_dirs=annotation_dir,
classes=['background', 'cone'],
include_classes='all',
exclude_truncated=False,
exclude_difficult=False,
ret=False,
verbose=True)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant in case you activated the `load_images_into_memory`
# option in the constructor, because in that cas the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the subsequent two function calls.
# train_dataset.create_hdf5_dataset(file_path='dataset_udacity_traffic_train.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# val_dataset.create_hdf5_dataset(file_path='dataset_udacity_traffic_val.h5',
# resize=False,
# variable_image_size=True,
# verbose=True)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
# 3: Set the batch size.
batch_size = 16
# 4: Define the image processing chain.
data_augmentation_chain = DataAugmentationConstantInputSize(random_brightness=(-48, 48, 0.5),
random_contrast=(0.5, 1.8, 0.5),
random_saturation=(0.5, 1.8, 0.5),
random_hue=(18, 0.5),
random_flip=0.5,
random_translate=((0.03,0.5), (0.03,0.5), 0.5),
random_scale=(0.5, 2.0, 0.5),
n_trials_max=3,
clip_boxes=True,
overlap_criterion='area',
bounds_box_filter=(0.3, 1.0),
bounds_validator=(0.5, 1.0),
n_boxes_min=1,
background=(0,0,0))
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [model.get_layer('classes4').output_shape[1:3],
model.get_layer('classes5').output_shape[1:3],
model.get_layer('classes6').output_shape[1:3],
model.get_layer('classes7').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_global=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.3,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
train_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[data_augmentation_chain],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
val_generator = val_dataset.generate(batch_size=batch_size,
shuffle=False,
transformations=[],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
###Output
_____no_output_____
###Markdown
4. Set the remaining training parameters and train the modelWe've already chosen an optimizer and a learning rate and set the batch size above, now let's set the remaining training parameters.I'll set a few Keras callbacks below, one for early stopping, one to reduce the learning rate if the training stagnates, one to save the best models during the training, and one to continuously stream the training history to a CSV file after every epoch. Logging to a CSV file makes sense, because if we didn't do that, in case the training terminates with an exception at some point or if the kernel of this Jupyter notebook dies for some reason or anything like that happens, we would lose the entire history for the trained epochs. Feel free to add more callbacks if you want TensorBoard summaries or whatever.
###Code
# Define model callbacks.
# TODO: Set the filepath under which you want to save the weights.
model_checkpoint = ModelCheckpoint(filepath='ssd7_epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
csv_logger = CSVLogger(filename='ssd7_training_log.csv',
separator=',',
append=True)
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0.0,
patience=10,
verbose=1)
reduce_learning_rate = ReduceLROnPlateau(monitor='val_loss',
factor=0.2,
patience=8,
verbose=1,
epsilon=0.001,
cooldown=0,
min_lr=0.00001)
callbacks = [model_checkpoint,
csv_logger,
early_stopping,
reduce_learning_rate]
###Output
_____no_output_____
###Markdown
 I'll set one epoch to consist of 1,000 training steps and I'll arbitrarily set the number of epochs to 20 here. This does not imply that 20,000 training steps is the right number. Depending on the model, the dataset, the learning rate, etc. you might have to train much longer to achieve convergence, or maybe less.Instead of trying to train a model to convergence in one go, you might want to train only for a few epochs at a time.In order to only run a partial training and resume smoothly later on, there are a few things you should note:1. Always load the full model if you can, rather than building a new model and loading previously saved weights into it. Optimizers like SGD or Adam keep running averages of past gradient moments internally. If you always save and load full models when resuming a training, then the state of the optimizer is maintained and the training picks up exactly where it left off. If you build a new model and load weights into it, the optimizer is being initialized from scratch, which, especially in the case of Adam, leads to small but unnecessary setbacks every time you resume the training with previously saved weights.2. You should tell `fit_generator()` which epoch to start from, otherwise it will start with epoch 0 every time you resume the training. Set `initial_epoch` to be the next epoch of your training. Note that this parameter is zero-based, i.e. the first epoch is epoch 0. If you had trained for 10 epochs previously and now you'd want to resume the training from there, you'd set `initial_epoch = 10` (since epoch 10 is the eleventh epoch). Furthermore, set `final_epoch` to the last epoch you want to run. To stick with the previous example, if you had trained for 10 epochs previously and now you'd want to train for another 10 epochs, you'd set `initial_epoch = 10` and `final_epoch = 20`.3. Callbacks like `ModelCheckpoint` or `ReduceLROnPlateau` are stateful, so you might want to save their state somehow if you want to pick up a training exactly where you left off.
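Putting the points above together, resuming after 10 completed epochs might look roughly like the sketch below (the checkpoint file name is illustrative):
###Code
# Sketch of resuming a previous run: load the FULL model (so the optimizer
# state is restored), then continue from the epoch where training stopped.
# model = load_model('path_to_saved_ssd7_checkpoint.h5',
#                    custom_objects={'AnchorBoxes': AnchorBoxes,
#                                    'compute_loss': ssd_loss.compute_loss})
# history = model.fit_generator(generator=train_generator,
#                               steps_per_epoch=1000,
#                               epochs=20,                 # final_epoch
#                               callbacks=callbacks,
#                               validation_data=val_generator,
#                               validation_steps=ceil(val_dataset_size/batch_size),
#                               initial_epoch=10)          # resume after 10 completed epochs
###Output
_____no_output_____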
###Code
# TODO: Set the epochs to train for.
# If you're resuming a previous training, set `initial_epoch` and `final_epoch` accordingly.
initial_epoch = 0
final_epoch = 20
steps_per_epoch = 1000
history = model.fit_generator(generator=train_generator,
steps_per_epoch=steps_per_epoch,
epochs=final_epoch,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
###Output
_____no_output_____
###Markdown
Let's look at how the training and validation loss evolved to check whether our training is going in the right direction:
###Code
plt.figure(figsize=(20,12))
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.legend(loc='upper right', prop={'size': 24});
###Output
_____no_output_____
###Markdown
The validation loss has been decreasing at a similar pace as the training loss, indicating that our model has been learning effectively over the last 30 epochs. We could try to train longer and see if the validation loss can be decreased further. Once the validation loss stops decreasing for a couple of epochs in a row, that's when we will want to stop training. Our final weights will then be the weights of the epoch that had the lowest validation loss. 5. Make predictionsNow let's make some predictions on the validation dataset with the trained model. For convenience we'll use the validation generator which we've already set up above. Feel free to change the batch size.You can set the `shuffle` option to `False` if you would like to check the model's progress on the same image(s) over the course of the training.
###Code
# 1: Set the generator for the predictions.
predict_generator = val_dataset.generate(batch_size=1,
shuffle=True,
transformations=[],
label_encoder=None,
returns={'processed_images',
'processed_labels',
'filenames'},
keep_images_without_gt=False)
# 2: Generate samples
batch_images, batch_labels, batch_filenames = next(predict_generator)
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(batch_labels[i])
# 3: Make a prediction
y_pred = model.predict(batch_images)
###Output
_____no_output_____
###Markdown
Now let's decode the raw predictions in `y_pred`.Had we created the model in 'inference' or 'inference_fast' mode, then the model's final layer would be a `DecodeDetections` layer and `y_pred` would already contain the decoded predictions, but since we created the model in 'training' mode, the model outputs raw predictions that still need to be decoded and filtered. This is what the `decode_detections()` function is for. It does exactly what the `DecodeDetections` layer would do, but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).`decode_detections()` with default argument values follows the procedure of the original SSD implementation: First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes, then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45, and out of what is left after that, the top 200 highest confidence boxes are returned. Those settings are for precision-recall scoring purposes though. In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5, since we're only interested in the very confident predictions.
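To make the decoding step a bit more concrete, here is a minimal sketch of what greedy per-class non-maximum suppression does (purely illustrative; `decode_detections()` below is the actual implementation used):
###Code
# Illustrative sketch of greedy NMS: keep the highest-scoring box, discard all
# remaining boxes whose IoU with it exceeds the threshold, and repeat.
import numpy as np

def greedy_nms_sketch(boxes, scores, iou_threshold=0.45):
    # boxes: (N, 4) array of [xmin, ymin, xmax, ymax]; scores: (N,) array
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_rest - inter)
        order = rest[iou <= iou_threshold]
    return keep
###Output
_____no_output_____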
###Code
# 4: Decode the raw prediction `y_pred`
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded[i])
###Output
_____no_output_____
###Markdown
Finally, let's draw the predicted boxes onto the image. Each predicted box says its confidence next to the category name. The ground truth boxes are also drawn onto the image in green for comparison.
###Code
# 5: Draw the predicted boxes onto the image
plt.figure(figsize=(20,12))
plt.imshow(batch_images[i])
current_axis = plt.gca()
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist() # Set the colors for the bounding boxes
classes = ['background', 'car', 'truck', 'pedestrian', 'bicyclist', 'light'] # Just so we can print class names onto the image instead of IDs
# Draw the ground truth boxes in green (omit the label for more clarity)
for box in batch_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
#current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
# Draw the predicted boxes in blue
for box in y_pred_decoded[i]:
xmin = box[-4]
ymin = box[-3]
xmax = box[-2]
ymax = box[-1]
color = colors[int(box[0])]
label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':color, 'alpha':1.0})
###Output
_____no_output_____ |
V4/v4_exercises_material/solutions/1_Text_Analysis/3_Word_Count_Gutenberg.ipynb | ###Markdown
Init Connection
###Code
%load_ext sql
%sql hive://hadoop@localhost:10000/text
###Output
_____no_output_____
###Markdown
Saving the result to a new table
###Code
%%sql
CREATE TABLE word_gutenberg
AS select lower(word) as word from (
select explode(sentence) word from (
select explode(sentences(trim(line))) sentence from raw_gutenberg where line != ''
) sentence_table
) word_table
###Output
* hive://hadoop@localhost:10000/text
Done.
###Markdown
Word Count
###Code
%%sql
CREATE TABLE word_count_gutenberg
AS
SELECT
word, count(word) as count
FROM
word_gutenberg
GROUP BY
word
ORDER BY
count DESC
%sql select * from word_count_gutenberg where word in ('he', 'she', 'it')
%sql select * from word_count_gutenberg limit 10
###Output
* hive://hadoop@localhost:10000/text
Done.
###Markdown
 Comparing Gutenberg WordCount with OEC Rank for the Top 20 WordsFrom Wikipedia [100 most common words](https://en.wikipedia.org/wiki/Most_common_words_in_English). Can you compare our findings with the ones listed here (from Wikipedia)?|word|place|| ----------- | ----------- ||the|1||be|2||to|3||of|4||and|5||a|6||in|7||that|8||have|9||i|10||it|11||for|12||not|13||on|14||with|15||he|16||as|17||you|18||do|19||at|20|
###Code
%%sql gutenberg_top_20 <<
SELECT *, ROW_NUMBER() OVER () AS gutenberg_place FROM (
SELECT word FROM word_count_gutenberg LIMIT 20
) ranked_words
%%sql oec_top_20 <<
select explode(map
('the',1,'be',2,'to',3,'of',4,'and',5,'a',6,'in',7,'that',8,'have',9,'i',10,'it',11,'for',12,'not',13,'on',14,'with',15,'he',16,'as',17,'you',18,'do',19,'at',20)
) as (word,oec_place)
df_gutenberg = gutenberg_top_20.DataFrame()
df_oec = oec_top_20.DataFrame()
df_gutenberg.merge(
right = df_oec,
how="outer",
)
###Output
_____no_output_____ |
workflow.ipynb | ###Markdown
 ENRON Person of Interest Identifier by Fernando Maletski Introduction The famous ENRON scandal was the largest bankruptcy reorganization in the United States at the time it was publicized, October 2001. Due to the Federal investigation, a significant amount of confidential information was released to the public, including tens of thousands of emails and detailed financial data. The objective of this project is to use this large dataset to create a machine learning model that correctly identifies the Persons of Interest (POI) based on the data made public. Workspace Setup
###Code
import sys
import numpy as np
import pandas as pd
import pickle
import matplotlib
import operator
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
matplotlib.style.use('ggplot') #Set a decent style
matplotlib.rcParams['image.cmap'] = 'bwr' #Diverging colors
with open("final_project_dataset_py3.pkl", "rb") as data_file:
data_dict = pickle.load(data_file)
###Output
_____no_output_____
###Markdown
 EDA and Feature Engineering In this section we will explore the dataset, describe its features, and clean up issues such as missing values and outliers.
###Code
len(sorted(data_dict.keys()))
###Output
_____no_output_____
###Markdown
 There are 146 datapoints, each of which should represent a person whose records were made public. The key of this dictionary is their name in this format: LAST NAME FIRST NAME (MIDDLE INITIAL). As this is a small dataset, it is possible to check each person's name for inconsistencies:
###Code
persons = sorted(data_dict.keys())
for person in persons:
print(person)
###Output
ALLEN PHILLIP K
BADUM JAMES P
BANNANTINE JAMES M
BAXTER JOHN C
BAY FRANKLIN R
BAZELIDES PHILIP J
BECK SALLY W
BELDEN TIMOTHY N
BELFER ROBERT
BERBERIAN DAVID
BERGSIEKER RICHARD P
BHATNAGAR SANJAY
BIBI PHILIPPE A
BLACHMAN JEREMY M
BLAKE JR. NORMAN P
BOWEN JR RAYMOND M
BROWN MICHAEL
BUCHANAN HAROLD G
BUTTS ROBERT H
BUY RICHARD B
CALGER CHRISTOPHER F
CARTER REBECCA C
CAUSEY RICHARD A
CHAN RONNIE
CHRISTODOULOU DIOMEDES
CLINE KENNETH W
COLWELL WESLEY
CORDES WILLIAM R
COX DAVID
CUMBERLAND MICHAEL S
DEFFNER JOSEPH M
DELAINEY DAVID W
DERRICK JR. JAMES V
DETMERING TIMOTHY J
DIETRICH JANET R
DIMICHELE RICHARD G
DODSON KEITH
DONAHUE JR JEFFREY M
DUNCAN JOHN H
DURAN WILLIAM D
ECHOLS JOHN B
ELLIOTT STEVEN
FALLON JAMES B
FASTOW ANDREW S
FITZGERALD JAY L
FOWLER PEGGY
FOY JOE
FREVERT MARK A
FUGH JOHN L
GAHN ROBERT S
GARLAND C KEVIN
GATHMANN WILLIAM D
GIBBS DANA R
GILLIS JOHN
GLISAN JR BEN F
GOLD JOSEPH
GRAMM WENDY L
GRAY RODNEY
HAEDICKE MARK E
HANNON KEVIN P
HAUG DAVID L
HAYES ROBERT E
HAYSLETT RODERICK J
HERMANN ROBERT J
HICKERSON GARY J
HIRKO JOSEPH
HORTON STANLEY C
HUGHES JAMES A
HUMPHREY GENE E
IZZO LAWRENCE L
JACKSON CHARLENE R
JAEDICKE ROBERT
KAMINSKI WINCENTY J
KEAN STEVEN J
KISHKILL JOSEPH G
KITCHEN LOUISE
KOENIG MARK E
KOPPER MICHAEL J
LAVORATO JOHN J
LAY KENNETH L
LEFF DANIEL P
LEMAISTRE CHARLES
LEWIS RICHARD
LINDHOLM TOD A
LOCKHART EUGENE E
LOWRY CHARLES P
MARTIN AMANDA K
MCCARTY DANNY J
MCCLELLAN GEORGE
MCCONNELL MICHAEL S
MCDONALD REBECCA
MCMAHON JEFFREY
MENDELSOHN JOHN
METTS MARK
MEYER JEROME J
MEYER ROCKFORD G
MORAN MICHAEL P
MORDAUNT KRISTINA M
MULLER MARK S
MURRAY JULIA H
NOLES JAMES L
OLSON CINDY K
OVERDYKE JR JERE C
PAI LOU L
PEREIRA PAULO V. FERRAZ
PICKERING MARK R
PIPER GREGORY F
PIRO JIM
POWERS WILLIAM
PRENTICE JAMES
REDMOND BRIAN L
REYNOLDS LAWRENCE
RICE KENNETH D
RIEKER PAULA H
SAVAGE FRANK
SCRIMSHAW MATTHEW
SHANKMAN JEFFREY A
SHAPIRO RICHARD S
SHARP VICTORIA T
SHELBY REX
SHERRICK JEFFREY B
SHERRIFF JOHN R
SKILLING JEFFREY K
STABLER FRANK
SULLIVAN-SHAKLOVITZ COLLEEN
SUNDE MARTIN
TAYLOR MITCHELL S
THE TRAVEL AGENCY IN THE PARK
THORN TERENCE H
TILNEY ELIZABETH A
TOTAL
UMANOFF ADAM S
URQUHART JOHN A
WAKEHAM JOHN
WALLS JR ROBERT H
WALTERS GARETH W
WASAFF GEORGE
WESTFAHL RICHARD K
WHALEY DAVID A
WHALLEY LAWRENCE G
WHITE JR THOMAS E
WINOKUR JR. HERBERT S
WODRASKA JOHN
WROBEL BRUCE
YEAGER F SCOTT
YEAP SOON
###Markdown
 There are 2 problematic datapoints, TOTAL and THE TRAVEL AGENCY IN THE PARK. While TOTAL is self-explanatory and safe to remove, THE TRAVEL AGENCY IN THE PARK is actually a company (http://www.businesstravelnews.com/More-News/Enron-s-Agency-Changes-Name-Reaffirms-Corp-Commitment). Taking a closer look at it:
###Code
data_dict['THE TRAVEL AGENCY IN THE PARK']
###Output
_____no_output_____
###Markdown
With most of its features being missing and due to the fact it is not a person, much less a Person of Interest, this datapoint should be removed, along with TOTAL.
###Code
data_dict.pop('TOTAL')
data_dict.pop('THE TRAVEL AGENCY IN THE PARK')
len(sorted(data_dict.keys()))
###Output
_____no_output_____
###Markdown
 Now the dataset has 144 persons in it. The value for each person is another dictionary that follows this schema (key: value): feature: value. Extracting the list of features:
###Code
feature_list = sorted(data_dict['ALLEN PHILLIP K'])
print(len(feature_list))
feature_list
###Output
21
###Markdown
We have 20 features and the hand coded Person of Interest (poi) label. Testing to see if all the datapoints have the same features:
###Code
count = 0
for person, data in data_dict.items():
for feature, value in data.items():
if feature not in feature_list:
print(person, feature)
else:
count += 1
total_count = len(feature_list) * len(data_dict.keys())
print('{} of {} found'.format(count, total_count))
###Output
3024 of 3024 found
###Markdown
 These are all the features of the dataset, and the structure supports a table schema. So it's possible to convert this dataset to an exploration-friendly format, a pandas DataFrame:
###Code
df = pd.DataFrame(data_dict)
df = df.transpose()
df.head()
###Output
_____no_output_____
###Markdown
Replacing 'NaN' string with np.NaN for compatibility with numeric methods:
###Code
df.replace('NaN', np.NaN, inplace=True)
###Output
_____no_output_____
###Markdown
 To check if there is a person in the dataset with all their values missing (as the POI label is hand coded, it is never missing):
###Code
checknull = df.T.isnull().sum() >= 20
checknull.any()
df[checknull].T
###Output
_____no_output_____
###Markdown
 This datapoint has no values with the exception of the poi label; it brings no information and should be removed.
###Code
df.drop('LOCKHART EUGENE E', inplace=True)
len(df)
###Output
_____no_output_____
###Markdown
The analysis will proceed with the final count of 143 persons. Here's a print from a random person to have an idea of the information from each datapoint:
###Code
df.iloc[12]
###Output
_____no_output_____
###Markdown
An overview:
###Code
total_dps = len(df)
poi_dps = df.poi.sum()
print('Total Data Points: {:>3}'.format(total_dps))
print('Total POI : {:>3}'.format(poi_dps))
###Output
Total Data Points: 143
Total POI : 18
###Markdown
 There are 2 classes of features, finance related and email related:* **financial features:** ['salary', 'deferral_payments', 'total_payments', 'loan_advances', 'bonus', 'restricted_stock_deferred', 'deferred_income', 'total_stock_value', 'expenses', 'exercised_stock_options', 'other', 'long_term_incentive', 'restricted_stock', 'director_fees'] (all units are in US dollars)* **email features:** ['to_messages', 'email_address', 'from_poi_to_this_person', 'from_messages', 'from_this_person_to_poi', 'shared_receipt_with_poi'] (units are generally number of email messages; notable exception is ‘email_address’, which is a text string)
###Code
financial_features = ['salary', 'deferral_payments', 'total_payments', 'loan_advances', 'bonus',
'restricted_stock_deferred', 'deferred_income', 'total_stock_value', 'expenses',
'exercised_stock_options', 'other', 'long_term_incentive', 'restricted_stock',
'director_fees']
email_features = ['to_messages', 'email_address', 'from_poi_to_this_person', 'from_messages',
'from_this_person_to_poi', 'shared_receipt_with_poi']
###Output
_____no_output_____
###Markdown
Email Features
###Code
print(email_features)
###Output
['to_messages', 'email_address', 'from_poi_to_this_person', 'from_messages', 'from_this_person_to_poi', 'shared_receipt_with_poi']
###Markdown
Missing Values
###Code
print_list = []
for feature in email_features:
title = feature
count = df[feature].count()
missing = total_dps - count
poi_count = len(df.query(feature+' != "NaN" and poi==True'))
pct_missing = 100*missing/total_dps
print_list.append((title, count, missing, poi_count, pct_missing))
print('{:>30}: {:<8} {:<8} {:<10} {:<8}'.format('Title', 'Count', 'Missing', 'POI Count', '% Missing'))
for (title, count, missing, poi_count, pct_missing) in sorted(print_list, key=operator.itemgetter(4),
reverse=True):
print('{:>30}: {:<8} {:<8} {:<10} {:<8.2f}'.format(title, count, missing, poi_count, pct_missing))
###Output
Title: Count Missing POI Count % Missing
to_messages: 86 57 14 39.86
from_poi_to_this_person: 86 57 14 39.86
from_messages: 86 57 14 39.86
from_this_person_to_poi: 86 57 14 39.86
shared_receipt_with_poi: 86 57 14 39.86
email_address: 111 32 18 22.38
###Markdown
 There's no email feature with a relatively high amount of missing values, so they are all valid. New Features The first approach we can take is to see if POIs communicate with each other a lot, using the features from_poi_to_this_person and from_this_person_to_poi:
###Code
plt.scatter(df.from_poi_to_this_person, df.from_this_person_to_poi, c=df.poi, alpha=0.5)
###Output
_____no_output_____
###Markdown
 It is a good idea, but there are people who send a lot of emails and those who don't, so engineering 2 new features, from_poi_ratio and to_poi_ratio, may help:
###Code
df['from_poi_ratio'] = df.from_poi_to_this_person/df.to_messages
df['to_poi_ratio'] = df.from_this_person_to_poi/df.from_messages
plt.scatter(df.from_poi_ratio, df.to_poi_ratio, c=df.poi, alpha=0.5)
###Output
_____no_output_____
###Markdown
Good, these features will help to filter a lot of people. Using the same line of thought with the feature "shared_receipt_with_poi" doesn't help too much:
###Code
plt.scatter(df.shared_receipt_with_poi/df.to_messages, df.to_poi_ratio, c=df.poi, alpha=0.5)
selected_email_features = ['from_poi_ratio', 'to_poi_ratio']
###Output
_____no_output_____
###Markdown
 Financial Features There are a lot of financial features:
###Code
print(financial_features)
print(len(financial_features))
###Output
['salary', 'deferral_payments', 'total_payments', 'loan_advances', 'bonus', 'restricted_stock_deferred', 'deferred_income', 'total_stock_value', 'expenses', 'exercised_stock_options', 'other', 'long_term_incentive', 'restricted_stock', 'director_fees']
14
###Markdown
 Missing Values If there are features with too many missing values, they won't help with the classification.
###Code
print_list = []
for feature in financial_features:
title = feature
count = df[feature].count()
missing = total_dps - count
poi_count = len(df.query(feature+' != "NaN" and poi==True'))
pct_missing = 100*missing/total_dps
print_list.append((title, count, missing, poi_count, pct_missing))
print('{:>30}: {:<8} {:<8} {:<10} {:<8}'.format('Title', 'Count', 'Missing', 'POI Count', '% Missing'))
for (title, count, missing, poi_count, pct_missing) in sorted(print_list, key=operator.itemgetter(4),
reverse=True):
print('{:>30}: {:<8} {:<8} {:<10} {:<8.2f}'.format(title, count, missing, poi_count, pct_missing))
###Output
Title: Count Missing POI Count % Missing
loan_advances: 3 140 1 97.90
director_fees: 16 127 0 88.81
restricted_stock_deferred: 17 126 0 88.11
deferral_payments: 38 105 5 73.43
deferred_income: 48 95 11 66.43
long_term_incentive: 65 78 12 54.55
bonus: 81 62 16 43.36
other: 91 52 18 36.36
salary: 94 49 17 34.27
expenses: 94 49 18 34.27
exercised_stock_options: 101 42 12 29.37
restricted_stock: 109 34 17 23.78
total_payments: 123 20 18 13.99
total_stock_value: 125 18 18 12.59
###Markdown
Features with a high amount of missing values and low POI count won't be useful. Removing them:
###Code
features_to_remove = ['loan_advances', 'director_fees', 'restricted_stock_deferred']
for feature in features_to_remove:
financial_features.remove(feature)
print(financial_features)
len(financial_features)
###Output
['salary', 'deferral_payments', 'total_payments', 'bonus', 'deferred_income', 'total_stock_value', 'expenses', 'exercised_stock_options', 'other', 'long_term_incentive', 'restricted_stock']
###Markdown
Exploration
###Code
for feature in financial_features:
plt.hist(df[feature].dropna(),20)
plt.title(feature)
plt.show()
###Output
_____no_output_____
###Markdown
With the exception of salary, all features are skewed. Using it as a basis for scatterplots to have an idea of the POI/non-POI distribution:
###Code
for feature in financial_features[1:]:
plt.scatter(np.sqrt(df.salary), df[feature], c=df.poi, alpha=0.5)
plt.title(feature)
plt.show()
###Output
_____no_output_____
###Markdown
Pre Selected Features
###Code
features = selected_email_features+financial_features
print(features)
len(features)
###Output
['from_poi_ratio', 'to_poi_ratio', 'salary', 'deferral_payments', 'total_payments', 'bonus', 'deferred_income', 'total_stock_value', 'expenses', 'exercised_stock_options', 'other', 'long_term_incentive', 'restricted_stock']
###Markdown
Feature Scaling and Handling of Missing Values A few of the chosen models to test, namely SVMs, will benefit from feature scaling as the features are of varying magnitudes.The MinMaxScaler is a simple yet effective way to bring all the features to comparable values, between 0 and 1.From now on, missing values (NaN) will be replaced by 0.
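For each feature, the scaler (with its default 0-to-1 range) maps values to $x_{scaled} = \frac{x - x_{min}}{x_{max} - x_{min}}$, so the smallest observed value becomes 0 and the largest becomes 1.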
###Code
df.fillna(0, inplace=True)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
df[features] = scaler.fit_transform(df[features])
df.head()
###Output
_____no_output_____
###Markdown
Feature Selection Evaluation Metrics The dataset is heavily imbalanced towards non-POI:
###Code
print('Total: {} | POI: {}'.format(len(df), np.sum(df.poi==True)))
###Output
Total: 143 | POI: 18
###Markdown
Using precision or F1 generates warnings because they often end up dividing by 0. Ignoring warnings from now on:
###Code
import warnings
warnings.filterwarnings('ignore')
print('Accuracy if predicted all non-POI: {:0.6f}'.format((143-18)/143))
###Output
Accuracy if predicted all non-POI: 0.874126
###Markdown
Ideally, the classifier should be more accurate than 0.8741 while having high recall and precision. Due to the imbalanced nature of the dataset (way more non-POI than POI), using just accuracy, or even F1, results in poor detection performance. The objective here is fraud detection! A model that is accurate but doesn't detect a lot of POI is not a good one. There is a metric specifically created to deal with highly imbalanced classes, called the Matthews correlation coefficient: The Matthews correlation coefficient is used in machine learning as a measure of the quality of binary (two-class) classifications. It takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes.[source: Wikipedia | http://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html | https://en.wikipedia.org/wiki/Matthews_correlation_coefficient] The MCC is the chosen metric in this project for parameter tuning and evaluation. Preparing the metrics for iteration using GridSearchCV:
###Code
from sklearn.metrics import matthews_corrcoef
from sklearn.metrics import make_scorer
mcc = make_scorer(matthews_corrcoef)
scorers = {'mcc': mcc, 'accuracy': 'accuracy', 'f1': 'f1',
'recall': 'recall', 'precision': 'precision'}
from sklearn.model_selection import GridSearchCV
def print_summary(clf):
print(clf.best_estimator_)
mcc = clf.cv_results_['mean_test_mcc'][clf.best_index_]
print('MCC: {:0.4f}'.format(mcc))
f1 = clf.cv_results_['mean_test_f1'][clf.best_index_]
print('F1: {:0.4f}'.format(f1))
pre = clf.cv_results_['mean_test_precision'][clf.best_index_]
print('Precision: {:0.4f}'.format(pre))
rec = clf.cv_results_['mean_test_recall'][clf.best_index_]
print('Recall: {:0.4f}'.format(rec))
acc = clf.cv_results_['mean_test_accuracy'][clf.best_index_]
print('Accuracy: {:0.4f}'.format(acc))
return (str(clf.best_estimator_).split('(')[0], mcc, f1, pre, rec, acc)
###Output
_____no_output_____
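###Markdown
For reference, in terms of confusion-matrix counts, the Matthews correlation coefficient defined above is $MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}$, which ranges from -1 to +1, with 0 corresponding to a prediction no better than chance.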
###Markdown
Validation Strategy With the class imbalance present in the dataset, a stratified cross-validation strategy is needed. Scikit-learn provides two:* StratifiedKFold http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html* StratifiedShuffleSplit http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.htmlBoth will preserve the percentage of samples for each class. The key difference is the splitting method. StratifiedKFold splits the dataset into k folds, using k-1 folds for training and the remaining one for testing; the process is repeated k times. StratifiedShuffleSplit shuffles the dataset and splits it n_splits times according to the chosen test_size. While both are valid ways of cross-validating, due to the small size of the dataset, StratifiedShuffleSplit provides less chance of overfitting.
###Code
from sklearn.model_selection import StratifiedShuffleSplit
cv = StratifiedShuffleSplit(n_splits=50, test_size=0.3, random_state=42)
###Output
_____no_output_____
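###Markdown
For comparison, a minimal sketch (not used anywhere else in this project) of how the StratifiedKFold alternative described above would be configured:
###Code
# Hypothetical illustration only: k=5 stratified folds, each fold used as the
# test set exactly once while the remaining four folds are used for training.
from sklearn.model_selection import StratifiedKFold
cv_kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
###Output
_____no_output_____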
###Markdown
The random state is set to 42 for test–retest reliability. Model Pre-Selection As per http://scikit-learn.org/stable/tutorial/machine_learning_map/index.html, the workflow should be:* Linear SVC* KN Classifier* SVC (other kernels)* Ensemble Classifiers + Random Forest + Adaboost
###Code
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
###Output
_____no_output_____
###Markdown
Both ensemble classifiers use Decision Trees as a base, so it makes sense to add it to the pre-selection too. Adaboost, in particular, sometimes benefits greatly from a tuned DecisionTreeClassifier as its base.
###Code
from sklearn.tree import DecisionTreeClassifier
###Output
_____no_output_____
###Markdown
Testing methodology To test the pre-selected models and features, a solid testing method must be chosen. Scikit-learn has GridSearchCV; the main function of this object is actually parameter tuning, but passing an empty dictionary as the parameters turns it into a robust testing method that handles dataset splitting according to a selected cross-validator, with the added bonus of easy-to-use parallel processing, drastically speeding up the process.
###Code
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
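###Markdown
A minimal sketch of that idea (illustration only; the same pattern is used later for the final voting classifier): with an empty parameter grid, GridSearchCV simply cross-validates the estimator's current settings using the scorers and splitter defined above.
###Code
# Hypothetical illustration: an empty grid means a single candidate (the
# estimator's defaults), so GridSearchCV acts as a cross-validated test harness.
sketch_clf = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid={},
                          scoring=scorers, cv=cv, refit='mcc', n_jobs=10)
# Calling sketch_clf.fit(X, y) would populate sketch_clf.cv_results_ with the
# mean test scores for every metric in `scorers`.
###Output
_____no_output_____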
###Markdown
Feature Selection While exploring the features and selecting them by hand is a valid approach, so is using statistics and testing. This code prints the p-values and ANOVA F-scores of each feature (NaN is filled with 0):FORMAT: p_value : feature : F-score
###Code
from sklearn.feature_selection import SelectKBest
selector = SelectKBest()
selector.fit(df[features], df.poi)
features_ranked = []
print('{:>30} :{:^30}: {}'.format('p_value', 'Feature', 'F-score'))
print('')
for (feature, score, pvalue) in sorted(zip(features, selector.scores_, selector.pvalues_),
key=operator.itemgetter(1), reverse=True):
features_ranked.append(feature)
print('{:>30} :{:^30}: {}'.format(pvalue, feature, score))
###Output
p_value : Feature : F-score
1.8182048777865317e-06 : exercised_stock_options : 24.815079733218194
2.4043152760437106e-06 : total_stock_value : 24.182898678566872
1.10129873239521e-05 : bonus : 20.792252047181538
3.4782737683651706e-05 : salary : 18.289684043404513
8.388953356704216e-05 : to_poi_ratio : 16.40971254803579
0.0009220367084670714 : deferred_income : 11.458476579280697
0.001994181245353672 : long_term_incentive : 9.922186013189839
0.002862802957909168 : restricted_stock : 9.212810621977086
0.0035893261725152385 : total_payments : 8.772777730091681
0.01475819996537172 : expenses : 6.094173310638967
0.042581747012345836 : other : 4.1874775069953785
0.07911610566379423 : from_poi_ratio : 3.128091748156737
0.636281647458697 : deferral_payments : 0.2246112747360051
###Markdown
SelectKBest provides a good way to choose the right features for a machine learning model. However, using just univariate statistics for feature selection doesn't take into account feature interaction, so the ideal k is hard to pinpoint without further testing. Ideally, testing every single feature combination would yield the best result, but doing so is expensive in both time and processing power. The next best thing is to rank the features as done above and recursively remove the lowest-ranked one to test its value. For the testing, two classifiers will be used: the simplest SVM, LinearSVC, and the base of the ensemble classifiers, DecisionTreeClassifier. Both will be run with stock parameters for now, with the exception of: * class_weight='balanced' - highly beneficial for imbalanced datasets;* and random_state=42 - for test–retest reliability.
###Code
def test_k(features_list, k):
features = df[features_list[0:k]].values
labels = df.poi.values
parameters = {'class_weight': ['balanced'], 'random_state': [42]}
clf1 = GridSearchCV(LinearSVC(), parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=0)
clf1.fit(features, labels)
mcc1 = clf1.cv_results_['mean_test_mcc'][clf1.best_index_]
f11 = clf1.cv_results_['mean_test_f1'][clf1.best_index_]
pre1 = clf1.cv_results_['mean_test_precision'][clf1.best_index_]
rec1 = clf1.cv_results_['mean_test_recall'][clf1.best_index_]
acc1 = clf1.cv_results_['mean_test_accuracy'][clf1.best_index_]
clf2 = GridSearchCV(DecisionTreeClassifier(), parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=0)
clf2.fit(features, labels)
mcc2 = clf2.cv_results_['mean_test_mcc'][clf2.best_index_]
f12 = clf2.cv_results_['mean_test_f1'][clf2.best_index_]
pre2 = clf2.cv_results_['mean_test_precision'][clf2.best_index_]
rec2 = clf2.cv_results_['mean_test_recall'][clf2.best_index_]
acc2 = clf2.cv_results_['mean_test_accuracy'][clf2.best_index_]
results = [features_list[k-1], k, mcc1, f11, pre1, rec1, acc1, mcc2, f12, pre2, rec2, acc2]
return results
test_results = []
for k in range(1,14):
print('Testing k={}'.format(k))
result = test_k(features_ranked, k)
test_results.append(result)
print('Finished')
result = pd.DataFrame(test_results, columns=['feature added', 'k', 'svc_mcc', 'svc_f1', 'svc_pre', 'svc_rec', 'svc_acc',
'dt_mcc', 'dt_f1', 'dt_pre', 'dt_rec', 'dt_acc'])
result
plt.plot(result['k'], result['svc_mcc'], 'o-', result['k'], result['dt_mcc'], 'o-')
plt.title('MCC')
plt.legend(['SVM', 'Decision Tree'])
plt.show()
plt.plot(result['k'], result['svc_rec'], 'o-', result['k'], result['dt_rec'], 'o-')
plt.title('Recall')
plt.legend(['SVM', 'Decision Tree'])
plt.show()
plt.plot(result['k'], result['svc_pre'], 'o-', result['k'], result['dt_pre'], 'o-')
plt.title('Precision')
plt.legend(['SVM', 'Decision Tree'])
plt.show()
###Output
_____no_output_____
###Markdown
In these plots, it becomes evident that the best value for k is 5, for both algorithms and for the MCC and Recall metrics. However, there is one interesting observation:* The addition of features 2, 4, 6, 7, 8, 9 and (to a lesser extent) 11 appears to decrease the performance across the board (with few exceptions). What if removing those features yields better results?
###Code
hand_picked = features_ranked.copy()
# drop the features ranked 2, 4, 6, 7, 8, 9 and 11 (0-indexed positions used below)
for i in [1,3,5,6,7,8,10]:
hand_picked.remove(features_ranked[i])
hand_picked
test_results = []
for k in range(1,7):
print('Testing k={}'.format(k))
result = test_k(hand_picked, k)
test_results.append(result)
print('Finished')
result_hp = pd.DataFrame(test_results, columns=['feature added', 'k', 'svc_mcc', 'svc_f1', 'svc_pre', 'svc_rec', 'svc_acc',
'dt_mcc', 'dt_f1', 'dt_pre', 'dt_rec', 'dt_acc'])
result_hp
plt.plot(result_hp['k'], result_hp['svc_mcc'], 'o-', result_hp['k'], result_hp['dt_mcc'], 'o-')
plt.title('MCC')
plt.legend(['SVM', 'Decision Tree'])
plt.show()
plt.plot(result_hp['k'], result_hp['svc_rec'], 'o-', result_hp['k'], result_hp['dt_rec'], 'o-')
plt.title('Recall')
plt.legend(['SVM', 'Decision Tree'])
plt.show()
plt.plot(result_hp['k'], result_hp['svc_pre'], 'o-', result_hp['k'], result_hp['dt_pre'], 'o-')
plt.title('Precision')
plt.legend(['SVM', 'Decision Tree'])
plt.show()
###Output
_____no_output_____
###Markdown
This approach was extremely beneficial for the Decision Tree algorithm. The SVC suffered a bit, but not enough to rule out using the hand-picked features for the rest of the project.
###Code
selector = SelectKBest(k=6)
filtered = selector.fit_transform(df[hand_picked], df.poi)
selected_features = []
for (feature, selected) in zip(hand_picked, selector.get_support()):
if selected:
selected_features.append(feature)
selected = pd.DataFrame(filtered, columns = selected_features)
corr = selected.corr()
sns.heatmap(corr, xticklabels=corr.columns,
yticklabels=corr.columns, cmap='RdBu',
vmin = -1.0, vmax = 1.0, annot = True)
features = df[hand_picked].values
labels = df.poi.values
###Output
_____no_output_____
###Markdown
Model Selection Everything is in order to start the tests. Each model will be run in 3 times (2 if the model has already reached its best performance):* First run: General range of parameters of different magnitudes* Second run: Specific parameter range* Third run: Fine tuningAfterwards a summary of findings is presented. Linear SVM Classifier First run
###Code
parameters = {'C': [1,2,3,5,10,15,20,50,100,200,300,400,500,1000,2500,5000,10000],
'class_weight': [None, 'balanced']}
bclf = LinearSVC(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
lsvm = print_summary(clf)
###Output
LinearSVC(C=200, class_weight='balanced', dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=42, tol=0.0001,
verbose=0)
MCC: 0.3045
F1: 0.3827
Precision: 0.3649
Recall: 0.4720
Accuracy: 0.8181
###Markdown
Second run
###Code
parameters = {'C': [100,110,120,130,140,150,160,170,180,190,200,
210,220,230,240,250,260,270,280,290,300],
'class_weight': [None, 'balanced']}
bclf = LinearSVC(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
lsvm = print_summary(clf)
###Output
LinearSVC(C=270, class_weight='balanced', dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=42, tol=0.0001,
verbose=0)
MCC: 0.3185
F1: 0.3863
Precision: 0.3890
Recall: 0.4440
Accuracy: 0.8400
###Markdown
Third run
###Code
parameters = {'C': [260,261,262,263,264,265,266,267,268,269,270,
271,272,273,274,275,276,277,278,279,280],
'class_weight': [None, 'balanced']}
bclf = LinearSVC(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
lsvm = print_summary(clf)
###Output
LinearSVC(C=269, class_weight='balanced', dual=True, fit_intercept=True,
intercept_scaling=1, loss='squared_hinge', max_iter=1000,
multi_class='ovr', penalty='l2', random_state=42, tol=0.0001,
verbose=0)
MCC: 0.3242
F1: 0.3897
Precision: 0.3931
Recall: 0.4600
Accuracy: 0.8349
###Markdown
SummaryThis model is limited; even tuning the parameters over a wide range of values can't increase its performance to an acceptable level. KNeighbors Classifier First run
###Code
parameters = {'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'weights': ['uniform', 'distance'],
'algorithm' : ['ball_tree', 'kd_tree', 'brute'],
'leaf_size': [1,2,5,10,20,30,40,50],
'p': [1,2]
}
bclf = KNeighborsClassifier()
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
knc = print_summary(clf)
###Output
KNeighborsClassifier(algorithm='ball_tree', leaf_size=1, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=3, p=2,
weights='distance')
MCC: 0.2595
F1: 0.2798
Precision: 0.4807
Recall: 0.2120
Accuracy: 0.8823
###Markdown
Second run
###Code
parameters = {'n_neighbors': [1, 2, 3, 4, 5],
'weights': ['distance'],
'algorithm' : ['ball_tree'],
'leaf_size': [1,2,3,4,5,6,7,8,9,10],
'p': [1,2]
}
bclf = KNeighborsClassifier()
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
knc = print_summary(clf)
###Output
KNeighborsClassifier(algorithm='ball_tree', leaf_size=1, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=3, p=2,
weights='distance')
MCC: 0.2595
F1: 0.2798
Precision: 0.4807
Recall: 0.2120
Accuracy: 0.8823
###Markdown
SummaryWhile it achieved a higher accuracy, it came at the cost of much lower recall. The relatively good precision might be an asset. SVM Classifier (other kernels) First run
###Code
parameters = {'kernel': ['poly', 'rbf', 'sigmoid'],
'C': [1,2,3,5,10,15,20,50,100,200,300,400,500,1000,2500,5000,10000],
'gamma': [0.0001, 0.001, 0.01, 0.1, 1, 10, 25, 50],
'class_weight': [None, 'balanced']}
bclf = SVC(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
svm = print_summary(clf)
###Output
SVC(C=15, cache_size=200, class_weight='balanced', coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=10, kernel='poly',
max_iter=-1, probability=False, random_state=42, shrinking=True,
tol=0.001, verbose=False)
MCC: 0.3421
F1: 0.4107
Precision: 0.4054
Recall: 0.4600
Accuracy: 0.8521
###Markdown
Second run
###Code
parameters = {'kernel': ['poly', 'rbf', 'sigmoid'],
'C': [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
'gamma': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,
21,22,23,24,25,26,27,28,29,30],
'class_weight': ['balanced']}
bclf = SVC(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
svm = print_summary(clf)
###Output
SVC(C=6, cache_size=200, class_weight='balanced', coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=3, kernel='sigmoid',
max_iter=-1, probability=False, random_state=42, shrinking=True,
tol=0.001, verbose=False)
MCC: 0.3540
F1: 0.4022
Precision: 0.2679
Recall: 0.8280
Accuracy: 0.7140
###Markdown
Third run
###Code
parameters = {'kernel': ['sigmoid'],
'C': [5.1,5.2,5.3,5.4,5.5,5.6,5.7,5.8,5.9,6,
6.1,6.2,6.3,6.4,6.5,6.6,6.7,6.8,6.9,7],
'gamma': [2.1,2.2,2.3,2.4,2.5,2.6,2.7,2.8,2.9,3,
3.1,3.2,3.3,3.4,3.5,3.6,3.7,3.8,3.9,4],
'class_weight': ['balanced']}
bclf = SVC(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
svm = print_summary(clf)
###Output
SVC(C=5.4, cache_size=200, class_weight='balanced', coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=3.9, kernel='sigmoid',
max_iter=-1, probability=False, random_state=42, shrinking=True,
tol=0.001, verbose=False)
MCC: 0.3776
F1: 0.4163
Precision: 0.2766
Recall: 0.8680
Accuracy: 0.7149
###Markdown
SummaryThis model was able to get a high recall score. However, it came at the price of lower accuracy and abysmal precision. Decision Trees First run
###Code
parameters = {'criterion': ['gini', 'entropy'],
'max_features': ['auto', 'sqrt', 'log2', None],
'min_samples_leaf': [1,2,5,10,15,20,30],
'class_weight': [None, 'balanced']}
bclf = DecisionTreeClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
dt = print_summary(clf)
###Output
DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=20, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=42,
splitter='best')
MCC: 0.4682
F1: 0.5258
Precision: 0.4432
Recall: 0.6960
Accuracy: 0.8428
###Markdown
Second run
###Code
parameters = {'criterion': ['gini', 'entropy'],
'max_features': ['auto', 'sqrt', 'log2', None],
'min_samples_leaf': [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,
31,32,33,34,35,36,37,38,39,40],
'class_weight': [None, 'balanced']}
bclf = DecisionTreeClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
dt = print_summary(clf)
###Output
DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=19, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=42,
splitter='best')
MCC: 0.4798
F1: 0.5365
Precision: 0.4583
Recall: 0.6880
Accuracy: 0.8544
###Markdown
SummaryThis model has the best overall performance. If a single model had to be chosen, this would be it, since it presents the best balance between precision and recall; luckily, no single choice is necessary. More on that later. Ensemble Classifiers: Random Forest First run
###Code
parameters = {'n_estimators': [2,5,10,15,20,50],
'criterion': ['gini', 'entropy'],
'max_features': ['auto', 'sqrt', 'log2', None],
'min_samples_leaf': [1,2,5,10,15,20,30,40,50],
'class_weight': [None, 'balanced', 'balanced_subsample']}
bclf = RandomForestClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
rf = print_summary(clf)
###Output
RandomForestClassifier(bootstrap=True, class_weight='balanced_subsample',
criterion='entropy', max_depth=None, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=10,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=50, n_jobs=1, oob_score=False, random_state=42,
verbose=0, warm_start=False)
MCC: 0.4227
F1: 0.4838
Precision: 0.4178
Recall: 0.6360
Accuracy: 0.8409
###Markdown
Second run
###Code
parameters = {'n_estimators': [20,25,30,35,40,45,50,55,60,65,70,75,80,85,90,95,100],
'criterion': ['entropy'],
'max_features': ['auto'],
'min_samples_leaf': [5,6,7,8,9,10,11,12,13,14,15],
'class_weight': [None, 'balanced', 'balanced_subsample']}
bclf = RandomForestClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
rf = print_summary(clf)
###Output
RandomForestClassifier(bootstrap=True, class_weight='balanced',
criterion='entropy', max_depth=None, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=10,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=85, n_jobs=1, oob_score=False, random_state=42,
verbose=0, warm_start=False)
MCC: 0.4385
F1: 0.4997
Precision: 0.4233
Recall: 0.6480
Accuracy: 0.8484
###Markdown
Third run
###Code
parameters = {'n_estimators': [80,81,82,83,84,85,86,87,88,90],
'criterion': ['entropy'],
'max_features': ['auto'],
'min_samples_leaf': [5,6,7,8,9,10,11,12,13,14,15],
'class_weight': [None, 'balanced', 'balanced_subsample']}
bclf = RandomForestClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
rf = print_summary(clf)
###Output
RandomForestClassifier(bootstrap=True, class_weight='balanced',
criterion='entropy', max_depth=None, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=10,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=85, n_jobs=1, oob_score=False, random_state=42,
verbose=0, warm_start=False)
MCC: 0.4385
F1: 0.4997
Precision: 0.4233
Recall: 0.6480
Accuracy: 0.8484
###Markdown
SummaryThe performance is worse than using just one Decision Tree. Ensemble Classifiers: Adaboost First run
###Code
parameters = {'base_estimator': [DecisionTreeClassifier(criterion='entropy', class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy', class_weight='balanced',
max_depth=1), #Stumps
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=19,
class_weight='balanced')],
'n_estimators': [2,5,10,20,30,40,50,60,70,80,90,100,200,300,400,500],
'learning_rate': [0.5,1,1.5,2],
'algorithm': ['SAMME','SAMME.R']
}
bclf = AdaBoostClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
ada = print_summary(clf)
###Output
AdaBoostClassifier(algorithm='SAMME',
base_estimator=DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=19, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best'),
learning_rate=0.5, n_estimators=5, random_state=42)
MCC: 0.4395
F1: 0.4996
Precision: 0.4335
Recall: 0.6600
Accuracy: 0.8363
###Markdown
Second run
###Code
parameters = {'base_estimator': [DecisionTreeClassifier(criterion='entropy', class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy', class_weight='balanced',
max_depth=1), #Stumps
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=19,
class_weight='balanced')],
'n_estimators': [1,2,3,4,5,6,7,8,9,10],
'learning_rate': [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1,1.1,1.2,1.3,
1.4,1.5,1.6,1.7,1.8,1.9,2.0],
'algorithm': ['SAMME']
}
bclf = AdaBoostClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
ada = print_summary(clf)
###Output
AdaBoostClassifier(algorithm='SAMME',
base_estimator=DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=19, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best'),
learning_rate=0.4, n_estimators=4, random_state=42)
MCC: 0.4698
F1: 0.5240
Precision: 0.4536
Recall: 0.6840
Accuracy: 0.8502
###Markdown
Third Run
###Code
parameters = {'base_estimator': [DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=1,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=2,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=3,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=4,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=5,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=6,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=7,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=8,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=9,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=10,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=11,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=12,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=13,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=14,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=15,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=16,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=17,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=18,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=19,
class_weight='balanced'),
DecisionTreeClassifier(criterion='entropy',
min_samples_leaf=20,
class_weight='balanced')],
'n_estimators': [1,2,3,4,5,6,7,8,9,10],
'learning_rate': [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1,1.1,1.2,1.3,
1.4,1.5,1.6,1.7,1.8,1.9,2.0],
'algorithm': ['SAMME']
}
bclf = AdaBoostClassifier(random_state=42)
clf = GridSearchCV(bclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=1)
clf.fit(features, labels)
ada = print_summary(clf)
###Output
AdaBoostClassifier(algorithm='SAMME',
base_estimator=DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features=None, max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=19, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=None,
splitter='best'),
learning_rate=0.4, n_estimators=4, random_state=42)
MCC: 0.4698
F1: 0.5240
Precision: 0.4536
Recall: 0.6840
Accuracy: 0.8502
###Markdown
SummaryWhile the performance is better than the Random Forest's, both are worse than a single, properly calibrated Decision Tree. Chosen Model: Voting Classifier After exhaustive testing and parameter tuning, here are the models ranked by the Matthews correlation coefficient:
###Code
models = sorted([lsvm, knc, svm, dt, rf, ada], key=operator.itemgetter(1), reverse=True)
print('{:>25}{:^10}{:^10}{:^10}{:^10}{:^10}'.format('Classifier', 'MCC', 'F1', 'Precision',
'Recall', 'Accuracy'))
print('')
for (name, mcc, f1, pre, rec, acc) in models:
print('{:>25}{:^10.4f}{:^10.4f}{:^10.4f}{:^10.4f}{:^10.4f}'.format(name.split('Classifier')[0],
mcc, f1, pre, rec, acc))
###Output
Classifier MCC F1 Precision Recall Accuracy
DecisionTree 0.4798 0.5365 0.4583 0.6880 0.8544
AdaBoost 0.4698 0.5240 0.4536 0.6840 0.8502
RandomForest 0.4385 0.4997 0.4233 0.6480 0.8484
SVC 0.3776 0.4163 0.2766 0.8680 0.7149
LinearSVC 0.3242 0.3897 0.3931 0.4600 0.8349
KNeighbors 0.2595 0.2798 0.4807 0.2120 0.8823
###Markdown
The top 3 models are all based around Decision Trees, and the best performance is obtained by the single Decision Tree Classifier. However, other models have useful strengths. Combined, the following models will result in the best classifier:* KNeighbors: Best precision and accuracy* Decision Tree: Best F1 (balance between precision and recall)* SVC: Best recallUsing a voting classifier enables the models to achieve a performance that none of them could reach on their own. Each classifier has one vote, and the predicted class is determined by the majority.http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html
###Code
from sklearn.ensemble import VotingClassifier
clf1 = DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=19, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=False, random_state=42,
splitter='best')
clf2 = SVC(C=5.4, cache_size=200, class_weight='balanced', coef0=0.0,
decision_function_shape='ovr', degree=3, gamma=3.9, kernel='sigmoid',
max_iter=-1, probability=False, random_state=42, shrinking=True,
tol=0.001, verbose=False)
clf3 = KNeighborsClassifier(algorithm='ball_tree', leaf_size=1, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=3, p=2,
weights='distance')
eclf = VotingClassifier(estimators=[('dt', clf1), ('svc', clf2), ('kn', clf3)], voting='hard')
parameters = {} # Using GridSearchCV just for CV
clf = GridSearchCV(eclf, parameters, scoring=scorers,
n_jobs=10, cv=cv, refit='mcc', verbose=0)
clf.fit(features, labels)
vc = print_summary(clf)
###Output
VotingClassifier(estimators=[('dt', DecisionTreeClassifier(class_weight='balanced', criterion='entropy',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_impurity_split=None,
min_samples_leaf=19, min_samples_split=2,
min_weight_f...wski',
metric_params=None, n_jobs=1, n_neighbors=3, p=2,
weights='distance'))],
flatten_transform=None, n_jobs=1, voting='hard', weights=None)
MCC: 0.5277
F1: 0.5762
Precision: 0.5261
Recall: 0.6720
Accuracy: 0.8870
###Markdown
Building your own container as Algorithm / Model PackageWith Amazon SageMaker, you can package your own algorithms that can then be trained and deployed in the SageMaker environment. This notebook will guide you through an example that shows you how to build a Docker container for SageMaker and use it for training and inference.This is an extension of the [scikit-bring-your-own notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb). We append specific steps that help you create new Algorithm / Model Package SageMaker entities, which can be sold on AWS Marketplace.By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies. 1. [Building your own algorithm container](Building-your-own-algorithm-container) 1. [When should I build my own algorithm container?](When-should-I-build-my-own-algorithm-container?) 1. [Permissions](Permissions) 1. [The example](The-example) 1. [The presentation](The-presentation)1. [Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker](Part-1:-Packaging-and-Uploading-your-Algorithm-for-use-with-Amazon-SageMaker) 1. [An overview of Docker](An-overview-of-Docker) 1. [How Amazon SageMaker runs your Docker container](How-Amazon-SageMaker-runs-your-Docker-container) 1. [Running your container during training](Running-your-container-during-training) 1. [The input](The-input) 1. [The output](The-output) 1. [Running your container during hosting](Running-your-container-during-hosting) 1. [The parts of the sample container](The-parts-of-the-sample-container) 1. [The Dockerfile](The-Dockerfile) 1. [Building and registering the container](Building-and-registering-the-container) 1. [Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance](Testing-your-algorithm-on-your-local-machine-or-on-an-Amazon-SageMaker-notebook-instance)1. [Part 2: Training and Hosting your Algorithm in Amazon SageMaker](Part-2:-Training-and-Hosting-your-Algorithm-in-Amazon-SageMaker) 1. [Set up the environment](Set-up-the-environment) 1. [Create the session](Create-the-session) 1. [Upload the data for training](Upload-the-data-for-training) 1. [Create an estimator and fit the model](Create-an-estimator-and-fit-the-model) 1. [Run a Batch Transform Job](Batch-Transform-Job) 1. [Deploy the model](Deploy-the-model) 1. [Optional cleanup](Cleanup-Endpoint)1. [Part 3: Package your resources as an Amazon SageMaker Algorithm](Part-3---Package-your-resources-as-an-Amazon-SageMaker-Algorithm) 1. [Algorithm Definition](Algorithm-Definition)1. [Part 4: Package your resources as an Amazon SageMaker ModelPackage](Part-4---Package-your-resources-as-an-Amazon-SageMaker-ModelPackage) 1. [Model Package Definition](Model-Package-Definition)1. [Debugging Creation Issues](Debugging-Creation-Issues)1. [List on AWS Marketplace](List-on-AWS-Marketplace) When should I build my own algorithm container?You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework (such as Apache MXNet or TensorFlow) that has direct support in SageMaker, you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework.
This set of frameworks is continually expanding, so we recommend that you check the current list if your algorithm is written in a common machine learning environment.Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex on its own or you need special additions to the framework, building your own container may be the right choice.If there isn't direct SDK support for your environment, don't worry. You'll see in this walk-through that building your own container is quite straightforward. PermissionsRunning this notebook requires permissions in addition to the normal `SageMakerFullAccess` permissions. This is because we'll be creating new repositories in Amazon ECR. The easiest way to add these permissions is simply to add the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role that you used to start your notebook instance. There's no need to restart your notebook instance when you do this; the new permissions will be available immediately. The exampleHere, we'll show how to package a simple Python example which showcases the [decision tree][] algorithm from the widely used [scikit-learn][] machine learning package. The example is purposefully fairly trivial since the point is to show the surrounding structure that you'll want to add to your own code so you can train and host it in Amazon SageMaker.The ideas shown here will work in any language or environment. You'll need to choose the right tools for your environment to serve HTTP requests for inference, but good HTTP environments are available in every language these days.In this example, we use a single image to support training and hosting. This is easy because it means that we only need to manage one image and we can set it up to do everything. Sometimes you'll want separate images for training and hosting because they have different requirements. Just separate the parts discussed below into separate Dockerfiles and build two images. Choosing whether to have a single image or two images is really a matter of which is more convenient for you to develop and manage.If you're only using Amazon SageMaker for training or hosting, but not both, there is no need to build the unused functionality into your container.[scikit-learn]: http://scikit-learn.org/stable/[decision tree]: http://scikit-learn.org/stable/modules/tree.html The presentationThis presentation is divided into two parts: _building_ the container and _using_ the container. Part 1: Packaging and Uploading your Algorithm for use with Amazon SageMaker An overview of DockerIf you're familiar with Docker already, you can skip ahead to the next section.For many data scientists, Docker containers are a new concept, but they are not difficult, as you'll see here. Docker provides a simple way to package arbitrary code into an _image_ that is totally self-contained. Once you have an image, you can use Docker to run a _container_ based on that image. Running a container is just like running a program on the machine except that the container creates a fully self-contained environment for the program to run.
Containers are isolated from each other and from the host environment, so the way you set up your program is the way it runs, no matter where you run it.Docker is more powerful than environment managers like conda or virtualenv because (a) it is completely language independent and (b) it comprises your whole operating environment, including startup commands, environment variable, etc.In some ways, a Docker container is like a virtual machine, but it is much lighter weight. For example, a program running in a container can start in less than a second and many containers can run on the same physical machine or virtual machine instance.Docker uses a simple file called a `Dockerfile` to specify how the image is assembled. We'll see an example of that below. You can build your Docker images based on Docker images built by yourself or others, which can simplify things quite a bit.Docker has become very popular in the programming and devops communities for its flexibility and well-defined specification of the code to be run. It is the underpinning of many services built in the past few years, such as [Amazon ECS].Amazon SageMaker uses Docker to allow users to train and deploy arbitrary algorithms.In Amazon SageMaker, Docker containers are invoked in a certain way for training and a slightly different way for hosting. The following sections outline how to build containers for the SageMaker environment.Some helpful links:* [Docker home page](http://www.docker.com)* [Getting started with Docker](https://docs.docker.com/get-started/)* [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)* [`docker run` reference](https://docs.docker.com/engine/reference/run/)[Amazon ECS]: https://aws.amazon.com/ecs/ How Amazon SageMaker runs your Docker containerBecause you can run the same image in training or hosting, Amazon SageMaker runs your container with the argument `train` or `serve`. How your container processes this argument depends on the container:* In the example here, we don't define an `ENTRYPOINT` in the Dockerfile so Docker will run the command `train` at training time and `serve` at serving time. In this example, we define these as executable Python scripts, but they could be any program that we want to start in that environment.* If you specify a program as an `ENTRYPOINT` in the Dockerfile, that program will be run at startup and its first argument will be `train` or `serve`. The program can then look at that argument and decide what to do.* If you are building separate containers for training and hosting (or building only for one or the other), you can define a program as an `ENTRYPOINT` in the Dockerfile and ignore (or verify) the first argument passed in. Running your container during trainingWhen Amazon SageMaker runs training, your `train` script is run just like a regular Python program. A number of files are laid out for your use, under the `/opt/ml` directory: /opt/ml |-- input | |-- config | | |-- hyperparameters.json | | `-- resourceConfig.json | `-- data | `-- | `-- |-- model | `-- `-- output `-- failure The input* `/opt/ml/input/config` contains information to control how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names to values. These values will always be strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. 
Since scikit-learn doesn't support distributed training, we'll ignore it here.* `/opt/ml/input/data/<channel_name>/` (for File mode) contains the input data for that channel. The channels are created based on the call to CreateTrainingJob but it's generally important that channels match what the algorithm expects. The files for each channel will be copied from S3 to this directory, preserving the tree structure indicated by the S3 key structure. * `/opt/ml/input/data/<channel_name>_<epoch number>` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs that you can run, but you must close each pipe before reading the next epoch. The output* `/opt/ml/model/` is the directory where you write the model that your algorithm generates. Your model can be in any format that you want. It can be a single file or a whole directory tree. SageMaker will package any files in this directory into a compressed tar archive file. This file will be available at the S3 location returned in the `DescribeTrainingJob` result.* `/opt/ml/output` is a directory where the algorithm can write a file `failure` that describes why the job failed. The contents of this file will be returned in the `FailureReason` field of the `DescribeTrainingJob` result. For jobs that succeed, there is no reason to write this file as it will be ignored. Running your container during hostingHosting has a very different model than training because hosting is responding to inference requests that come in via HTTP. In this example, we use our recommended Python serving stack to provide robust and scalable serving of inference requests. This stack is implemented in the sample code here and you can mostly just leave it alone. Amazon SageMaker uses two URLs in the container:* `/ping` will receive `GET` requests from the infrastructure. Your program returns 200 if the container is up and accepting requests.* `/invocations` is the endpoint that receives client inference `POST` requests. The format of the request and the response is up to the algorithm. If the client supplied `ContentType` and `Accept` headers, these will be passed in as well. The container will have the model files in the same place they were written during training: /opt/ml `-- model `-- <model files> The parts of the sample containerIn the `container` directory are all the components you need to package the sample algorithm for Amazon SageMaker: . |-- Dockerfile |-- build_and_push.sh `-- decision_trees |-- nginx.conf |-- predictor.py |-- serve |-- train `-- wsgi.pyLet's discuss each of these in turn:* __`Dockerfile`__ describes how to build your Docker container image. More details below.* __`build_and_push.sh`__ is a script that uses the Dockerfile to build your container image and then pushes it to ECR. We'll invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms.* __`decision_trees`__ is the directory which contains the files that will be installed in the container.* __`local_test`__ is a directory that shows how to test your new container on any computer that can run Docker, including an Amazon SageMaker notebook instance. Using this method, you can quickly iterate using small datasets to eliminate any structural bugs before you use the container with Amazon SageMaker. We'll walk through local testing later in this notebook.In this simple application, we only install five files in the container.
You may only need that many or, if you have many supporting routines, you may wish to install more. These five show the standard structure of our Python containers, although you are free to choose a different toolset and therefore could have a different layout. If you're writing in a different programming language, you'll certainly have a different layout depending on the frameworks and tools you choose.The files that we'll put in the container are:* __`nginx.conf`__ is the configuration file for the nginx front-end. Generally, you should be able to take this file as-is.* __`predictor.py`__ is the program that actually implements the Flask web server and the decision tree predictions for this app. You'll want to customize the actual prediction parts to your application. Since this algorithm is simple, we do all the processing here in this file, but you may choose to have separate files for implementing your custom logic.* __`serve`__ is the program started when the container is started for hosting. It simply launches the gunicorn server which runs multiple instances of the Flask app defined in `predictor.py`. You should be able to take this file as-is.* __`train`__ is the program that is invoked when the container is run for training. You will modify this program to implement your training algorithm.* __`wsgi.py`__ is a small wrapper used to invoke the Flask app. You should be able to take this file as-is.In summary, the two files you will probably want to change for your application are `train` and `predictor.py`. The DockerfileThe Dockerfile describes the image that we want to build. You can think of it as describing the complete operating system installation of the system that you want to run. A Docker container running is quite a bit lighter than a full operating system, however, because it takes advantage of Linux on the host machine for the basic operations. For the Python science stack, we will start from a standard Ubuntu installation and run the normal tools to install the things needed by scikit-learn. Finally, we add the code that implements our specific algorithm to the container and set up the right environment to run under.Along the way, we clean up extra space. This makes the container smaller and faster to start.Let's look at the Dockerfile for the example:
###Code
!cat container/Dockerfile
###Output
# Build an image that can do training and inference in SageMaker
# This is a Python 2 image that uses the nginx, gunicorn, flask stack
# for serving inferences in a stable way.
FROM ubuntu:18.04
MAINTAINER Amazon AI <[email protected]>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
wget \
python \
nginx \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Here we get all python packages.
# There's substantial overlap between scipy and numpy that we eliminate by
# linking them together. Likewise, pip leaves the install caches populated which uses
# a significant amount of space. These optimizations save a fair amount of space in the
# image, which reduces start up time.
RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py && \
pip install numpy scipy scikit-learn pandas flask gevent gunicorn && \
rm -rf /root/.cache
# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
# Set up the program in the image
COPY decision_trees /opt/program
WORKDIR /opt/program
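###Markdown
As noted in the training layout above, the values in `hyperparameters.json` always arrive as strings. A minimal, hypothetical sketch (not part of the sample container) of how a `train` script could read and convert them:
###Code
# Hypothetical sketch only; the path follows the /opt/ml layout described earlier
# and this code is meant to run inside the training container, not in this notebook.
import json

param_path = '/opt/ml/input/config/hyperparameters.json'
with open(param_path, 'r') as f:
    training_params = json.load(f)

# All values are strings, so numeric hyperparameters must be cast explicitly.
# 'max_leaf_nodes' is just an illustrative name, not a required key.
max_leaf_nodes = int(training_params.get('max_leaf_nodes', 5))
###Output
_____no_output_____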
###Markdown
Building and registering the containerThe following shell code shows how to build the container image using `docker build` and push the container image to ECR using `docker push`. This code is also available as the shell script `container/build_and_push.sh`, which you can run as `build_and_push.sh decision_trees_sample` to build the image `decision_trees_sample`. This code looks for an ECR repository in the account you're using and the current default region (if you're using an Amazon SageMaker notebook instance, this will be the region where the notebook instance was created). If the repository doesn't exist, the script will create it.
###Code
%%sh
# The name of our algorithm
algorithm_name="decisiontrees"
cd container
chmod +x decision_trees/train
chmod +x decision_trees/serve
account=$(aws sts get-caller-identity --query Account --output text)
# Get the region defined in the current configuration (default to eu-west-1 if none defined)
region=$(aws configure get region)
region=${region:-eu-west-1}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
aws ecr get-login-password \
    --region ${region} \
    | docker login \
    --username AWS \
    --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com

# Push the tagged image to the ECR repository so SageMaker can pull it by its full name.
docker push ${fullname}
###Output
Sending build context to Docker daemon 51.71kB
Step 1/9 : FROM ubuntu:18.04
---> 72300a873c2c
Step 2/9 : MAINTAINER Amazon AI <[email protected]>
---> Using cache
---> 964992c9d672
Step 3/9 : RUN apt-get -y update && apt-get install -y --no-install-recommends wget python nginx ca-certificates && rm -rf /var/lib/apt/lists/*
---> Using cache
---> c7d874ac8fd5
Step 4/9 : RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py && pip install numpy scipy scikit-learn pandas flask gevent gunicorn && rm -rf /root/.cache
---> Using cache
---> 7ffad4e512e7
Step 5/9 : ENV PYTHONUNBUFFERED=TRUE
---> Using cache
---> b57c951f0cd9
Step 6/9 : ENV PYTHONDONTWRITEBYTECODE=TRUE
---> Using cache
---> f356f77cfaa6
Step 7/9 : ENV PATH="/opt/program:${PATH}"
---> Using cache
---> b85c935db183
Step 8/9 : COPY decision_trees /opt/program
---> Using cache
---> 7f8724b1dcfc
Step 9/9 : WORKDIR /opt/program
---> Using cache
---> 1d319e477f05
Successfully built 1d319e477f05
Successfully tagged decisiontrees:latest
Login Succeeded
###Markdown
Testing your algorithm on your local machine or on an Amazon SageMaker notebook instanceWhile you're first packaging an algorithm for use with Amazon SageMaker, you probably want to test it yourself to make sure it's working right. In the directory `container/local_test`, there is a framework for doing this. It includes three shell scripts for running and using the container and a directory structure that mimics the one outlined above.The scripts are:* `train_local.sh`: Run this with the name of the image and it will run training on the local tree. You'll want to modify the directory `test_dir/input/data/...` to be set up with the correct channels and data for your algorithm. Also, you'll want to modify the file `input/config/hyperparameters.json` to have the hyperparameter settings that you want to test (as strings).* `serve_local.sh`: Run this with the name of the image once you've trained the model and it should serve the model. It will run and wait for requests. Simply use the keyboard interrupt to stop it.* `predict.sh`: Run this with the name of a payload file and (optionally) the HTTP content type you want. The content type will default to `text/csv`. For example, you can run `$ ./predict.sh payload.csv text/csv`.The directories as shipped are set up to test the decision trees sample algorithm presented here. Part 2: Training, Batch Inference and Hosting your Algorithm in Amazon SageMakerOnce you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above. Set up the environmentHere we specify a bucket to use and the role that will be used for working with Amazon SageMaker.
###Code
# S3 prefix
common_prefix = "DEMO-scikit-byo-iris"
training_input_prefix = common_prefix + "/training-input-data"
batch_inference_input_prefix = common_prefix + "/batch-inference-input-data"
import os
import sagemaker
###Output
_____no_output_____
###Markdown
Create the sessionThe session remembers our connection parameters to Amazon SageMaker. We'll use it to perform all of our SageMaker operations.
###Code
import sagemaker as sage
sess = sage.Session()
###Output
_____no_output_____
###Markdown
Upload the data for trainingWhen training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. For the purposes of this example, we're using the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set), which we have included. We can use the tools provided by the Amazon SageMaker Python SDK to upload the data to a default bucket.
###Code
TRAINING_WORKDIR = "data/training"
training_input = sess.upload_data(TRAINING_WORKDIR, key_prefix=training_input_prefix)
print ("Training Data Location " + training_input)
###Output
Training Data Location s3://sagemaker-eu-west-1-252328296877/DEMO-scikit-byo-iris/training-input-data
###Markdown
Create an estimator and fit the modelIn order to use Amazon SageMaker to fit our algorithm, we'll create an `Estimator` that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:* The __container name__. This is constructed as in the shell commands above.* The __role__. As defined above.* The __instance count__ which is the number of machines to use for training.* The __instance type__ which is the type of machine to use for training.* The __output path__ determines where the model artifact will be written.* The __session__ is the SageMaker session object that we defined above.Then we use fit() on the estimator to train against the data that we uploaded above.
###Code
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/decisiontrees:latest'.format(account, region)  # repository name must match the one created in the build step above
role = "arn:aws:iam::252328296877:role/Sagemaker-notebook"
account, region, image
tree = sage.estimator.Estimator(image,
role,
1,
'ml.c4.2xlarge',
output_path="s3://{}/output".format(sess.default_bucket()),
sagemaker_session=sess)
tree.fit(training_input)
###Output
2020-08-15 17:30:16 Starting - Starting the training job...
2020-08-15 17:30:18 Starting - Launching requested ML instances......
2020-08-15 17:31:21 Starting - Preparing the instances for training...
2020-08-15 17:32:11 Downloading - Downloading input data...
2020-08-15 17:32:46 Training - Training image download completed. Training in progress..[34mStarting the training.[0m
[34mvalidation-accuracy: 0.96[0m
[34mTraining complete.[0m
2020-08-15 17:32:57 Uploading - Uploading generated training model
2020-08-15 17:32:57 Completed - Training job completed
Training seconds: 46
Billable seconds: 46
###Markdown
Batch Transform JobNow let's use the model we built to run a batch inference job and verify that it works. Batch Transform Input PreparationThe snippet below removes the "label" column (column index 0) and retains the rest as batch transform's input. NOTE: This is the same training data, which is a no-no from a statistical/ML science perspective. But the aim of this notebook is to demonstrate how things work end-to-end.
###Code
import pandas as pd
## Remove first column that contains the label
shape=pd.read_csv(TRAINING_WORKDIR + "/iris.csv", header=None).drop([0], axis=1)
TRANSFORM_WORKDIR = "data/transform"
shape.to_csv(TRANSFORM_WORKDIR + "/batchtransform_test.csv", index=False, header=False)
transform_input = sess.upload_data(TRANSFORM_WORKDIR, key_prefix=batch_inference_input_prefix) + "/batchtransform_test.csv"
print("Transform input uploaded to " + transform_input)
###Output
Transform input uploaded to s3://sagemaker-eu-west-1-252328296877/DEMO-scikit-byo-iris/batch-inference-input-data/batchtransform_test.csv
###Markdown
Run Batch TransformNow that our batch transform input is set up, we run the transform job next.
###Code
transformer = tree.transformer(instance_count=1, instance_type='ml.m4.xlarge')
transformer.transform(transform_input, content_type='text/csv')
transformer.wait()
print("Batch Transform output saved to " + transformer.output_path)
###Output
.....................[32m2020-08-15T17:39:38.338:[sagemaker logs]: MaxConcurrentTransforms=1, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34mStarting the inference server with 4 workers.[0m
[34m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[35mStarting the inference server with 4 workers.[0m
[35m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[34m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0[0m
[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)[0m
[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent[0m
[34m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14[0m
[34m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15[0m
[34m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17[0m
[34m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /ping HTTP/1.1" 200 1 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /execution-parameters HTTP/1.1" 404 2 "-" "Go-http-client/1.1"[0m
[34mInvoked with 150 records[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "POST /invocations HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[35m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0[0m
[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)[0m
[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent[0m
[35m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14[0m
[35m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15[0m
[35m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17[0m
[35m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /ping HTTP/1.1" 200 1 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /execution-parameters HTTP/1.1" 404 2 "-" "Go-http-client/1.1"[0m
[35mInvoked with 150 records[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "POST /invocations HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"[0m
[32m2020-08-15T17:39:38.338:[sagemaker logs]: MaxConcurrentTransforms=1, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
[34mStarting the inference server with 4 workers.[0m
[34m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[35mStarting the inference server with 4 workers.[0m
[35m2020/08/15 17:39:37 [crit] 10#10: *1 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[34m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0[0m
[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)[0m
[34m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent[0m
[34m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14[0m
[34m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15[0m
[34m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17[0m
[34m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /ping HTTP/1.1" 200 1 "-" "Go-http-client/1.1"[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /execution-parameters HTTP/1.1" 404 2 "-" "Go-http-client/1.1"[0m
[34mInvoked with 150 records[0m
[34m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "POST /invocations HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[35m2020/08/15 17:39:37 [crit] 10#10: *3 connect() to unix:/tmp/gunicorn.sock failed (2: No such file or directory) while connecting to upstream, client: 169.254.255.130, server: , request: "GET /ping HTTP/1.1", upstream: "http://unix:/tmp/gunicorn.sock:/ping", host: "169.254.255.131:8080"[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:37 +0000] "GET /ping HTTP/1.1" 502 182 "-" "Go-http-client/1.1"[0m
[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Starting gunicorn 19.10.0[0m
[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Listening at: unix:/tmp/gunicorn.sock (9)[0m
[35m[2020-08-15 17:39:37 +0000] [9] [INFO] Using worker: gevent[0m
[35m[2020-08-15 17:39:37 +0000] [14] [INFO] Booting worker with pid: 14[0m
[35m[2020-08-15 17:39:37 +0000] [15] [INFO] Booting worker with pid: 15[0m
[35m[2020-08-15 17:39:37 +0000] [17] [INFO] Booting worker with pid: 17[0m
[35m[2020-08-15 17:39:37 +0000] [18] [INFO] Booting worker with pid: 18[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /ping HTTP/1.1" 200 1 "-" "Go-http-client/1.1"[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "GET /execution-parameters HTTP/1.1" 404 2 "-" "Go-http-client/1.1"[0m
[35mInvoked with 150 records[0m
[35m169.254.255.130 - - [15/Aug/2020:17:39:38 +0000] "POST /invocations HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"[0m
Batch Transform output saved to s3://sagemaker-eu-west-1-252328296877/decision-trees-2020-08-15-17-36-12-879
###Markdown
Inspect the Batch Transform Output in S3
###Code
from urllib.parse import urlparse
parsed_url = urlparse(transformer.output_path)
bucket_name = parsed_url.netloc
file_key = '{}/{}.out'.format(parsed_url.path[1:], "batchtransform_test.csv")
s3_client = sess.boto_session.client('s3')
response = s3_client.get_object(Bucket = sess.default_bucket(), Key = file_key)
response_bytes = response['Body'].read().decode('utf-8')
print(response_bytes)
###Output
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
###Markdown
Deploy the modelDeploying the model to Amazon SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count, instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint.
###Code
from sagemaker.serializers import CSVSerializer
model = tree.create_model()
predictor = tree.deploy(1, 'ml.m4.xlarge', serializer=CSVSerializer())
###Output
-----------!
###Markdown
Choose some data and use it for a predictionIn order to do some predictions, we'll extract some of the data we used for training and do predictions against it. This is, of course, bad statistical practice, but a good way to see how the mechanism works.
###Code
shape=pd.read_csv(TRAINING_WORKDIR + "/iris.csv", header=None)
import itertools
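# Select rows 40-49, 90-99 and 140-149, i.e. ten rows from each of the three
# classes; indices[:-1] below drops the very last of these rows.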
a = [50*i for i in range(3)]
b = [40+i for i in range(10)]
indices = [i+j for i,j in itertools.product(a,b)]
test_data=shape.iloc[indices[:-1]]
test_X=test_data.iloc[:,1:]
test_y=test_data.iloc[:,0]
###Output
_____no_output_____
###Markdown
Prediction is as easy as calling predict with the predictor we got back from deploy and the data we want to do predictions with. The serializers take care of doing the data conversions for us.
###Code
print(predictor.predict(test_X.values).decode('utf-8'))
###Output
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
###Markdown
Cleanup EndpointWhen you're done with the endpoint, you'll want to clean it up.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Part 3 - Package your resources as an Amazon SageMaker Algorithm(If you are looking to sell a pretrained model (ModelPackage), please skip to Part 4.)Now that you have verified that the algorithm code works for training, live inference and batch inference in the above sections, you can start packaging it up as an Amazon SageMaker Algorithm. Region LimitationSeller onboarding is limited to the us-east-2 region (CMH) only. The client we are creating below will be hard-coded to talk to our us-east-2 endpoint only.
###Code
import boto3
smmp = boto3.client('sagemaker', region_name='us-east-2', endpoint_url="https://sagemaker.us-east-2.amazonaws.com")
###Output
_____no_output_____
###Markdown
Algorithm DefinitionA SageMaker Algorithm is comprised of 2 parts:1. A training image2. An inference image (optional)The key requirement is that the training and inference images (if provided) remain compatible with each other. Specifically, the model artifacts generated by the code in the training image should be readable and compatible with the code in the inference image. You can reuse the same image to perform both training and inference or you can choose to separate them. This sample notebook has already created a single algorithm image that performs both training and inference. This image has also been pushed to your ECR registry at {{image}}. You need to provide the following details as part of this algorithm specification: Training SpecificationYou specify details pertinent to your training algorithm in this section. Supported Hyper-parametersThis section captures the hyper-parameters your algorithm supports: their names, types, whether they are required, default values, valid ranges, etc. This serves both as documentation for buyers and is used by Amazon SageMaker to perform validations of buyer requests in the synchronous request path.Please Note: While this section is optional, we strongly recommend you provide comprehensive information here to leverage our validations and serve as documentation. Additionally, without this being specified, customers cannot leverage your algorithm for Hyper-parameter tuning.*** NOTE: The code below has hyper-parameters hard-coded in the json present in src/training_specification.py. Until we have better functionality to customize it, please update the json in that file appropriately***
###Code
from src.training_specification import TrainingSpecification
from src.training_channels import TrainingChannels
from src.metric_definitions import MetricDefinitions
from src.tuning_objectives import TuningObjectives
import json
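# Illustrative sketch only (not read from src/training_specification.py): a single
# supported hyper-parameter entry typically follows the SageMaker TrainingSpecification
# schema, roughly like the hypothetical example below. All names and values here are
# assumptions for illustration.
example_hyperparameter = {
    "Name": "max_leaf_nodes",                       # hypothetical hyper-parameter name
    "Description": "Maximum number of leaf nodes",  # hypothetical description
    "Type": "Integer",
    "Range": {"IntegerParameterRangeSpecification": {"MinValue": "1", "MaxValue": "100000"}},
    "IsTunable": True,
    "IsRequired": False,
    "DefaultValue": "100",
}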
training_specification = TrainingSpecification().get_training_specification_dict(
ecr_image=image,
supports_gpu=True,
supported_channels=[
TrainingChannels("training", description="Input channel that provides training data", supported_content_types=["text/csv"])],
supported_metrics=[MetricDefinitions("validation:accuracy", "validation-accuracy: (\\S+)")],
supported_tuning_job_objective_metrics=[TuningObjectives("Maximize", "validation:accuracy")]
)
print(json.dumps(training_specification, indent=2, sort_keys=True))
###Output
_____no_output_____
###Markdown
Inference SpecificationYou specify details pertinent to your inference code in this section.
###Code
from src.inference_specification import InferenceSpecification
import json
inference_specification = InferenceSpecification().get_inference_specification_dict(
ecr_image=image,
supports_gpu=True,
supported_content_types=["text/csv"],
supported_mime_types=["text/csv"])
print(json.dumps(inference_specification, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Validation SpecificationIn order to provide confidence to the sellers (and buyers) that the products work in Amazon SageMaker before listing them on AWS Marketplace, SageMaker needs to perform basic validations. The product can be listed in AWS Marketplace only if this validation process succeeds. This validation process uses the validation profile and sample data provided by you to run the following validations:1. Create a training job in your account to verify your training image works with SageMaker.2. Once the training job completes successfully, create a Model in your account using the algorithm's inference image and the model artifacts produced as part of the training job we ran. 3. Create a transform job in your account using the above Model to verify your inference image works with SageMaker
###Code
from src.algorithm_validation_specification import AlgorithmValidationSpecification
import json
validation_specification = AlgorithmValidationSpecification().get_algo_validation_specification_dict(
validation_role = role,
training_channel_name = "training",
training_input = training_input,
batch_transform_input = transform_input,
content_type = "text/csv",
instance_type = "ml.c4.xlarge",
output_s3_location = 's3://{}/{}'.format(sess.default_bucket(), common_prefix))
print(json.dumps(validation_specification, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Putting it all togetherNow we put all the pieces together in the next cell and create an Amazon SageMaker Algorithm
###Code
import json
import time
algorithm_name = "scikit-decision-trees-" + str(round(time.time()))
create_algorithm_input_dict = {
"AlgorithmName" : algorithm_name,
"AlgorithmDescription" : "Decision trees using Scikit",
"CertifyForMarketplace" : True
}
create_algorithm_input_dict.update(training_specification)
create_algorithm_input_dict.update(inference_specification)
create_algorithm_input_dict.update(validation_specification)
print(json.dumps(create_algorithm_input_dict, indent=4, sort_keys=True))
print ("Now creating an algorithm in SageMaker")
smmp.create_algorithm(**create_algorithm_input_dict)
###Output
_____no_output_____
###Markdown
Describe the algorithmThe next cell describes the Algorithm and waits until it reaches a terminal state (Completed or Failed)
###Code
import time
import json
while True:
response = smmp.describe_algorithm(AlgorithmName=algorithm_name)
status = response["AlgorithmStatus"]
print (status)
if (status == "Completed" or status == "Failed"):
print (response["AlgorithmStatusDetails"])
break
time.sleep(5)
###Output
_____no_output_____
###Markdown
Part 4 - Package your resources as an Amazon SageMaker ModelPackageIn this section, we will see how you can package your artifacts (ECR image and the trained artifact from your previous training job) into a ModelPackage. Once you complete this, you can list your product as a pretrained model in the AWS Marketplace. Model Package DefinitionA Model Package is a reusable abstraction of model artifacts that packages all ingredients necessary for inference. It consists of an inference specification that defines the inference image to use along with an optional model weights location. Region LimitationSeller onboarding is limited to the us-east-2 region (CMH) only. The client we are creating below will be hard-coded to talk to our us-east-2 endpoint only. (Note: You may have previously done this step in Part 3. It is repeated here to keep Part 4 self-contained.)
###Code
smmp = boto3.client('sagemaker', region_name='us-east-2', endpoint_url="https://sagemaker.us-east-2.amazonaws.com")
###Output
_____no_output_____
###Markdown
Inference SpecificationYou specify details pertinent to your inference code in this section.
###Code
from src.inference_specification import InferenceSpecification
import json
modelpackage_inference_specification = InferenceSpecification().get_inference_specification_dict(
ecr_image=image,
supports_gpu=True,
supported_content_types=["text/csv"],
supported_mime_types=["text/csv"])
# Specify the model data resulting from the previously completed training job
modelpackage_inference_specification["InferenceSpecification"]["Containers"][0]["ModelDataUrl"]=tree.model_data
print(json.dumps(modelpackage_inference_specification, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Validation SpecificationIn order to provide confidence to the sellers (and buyers) that the products work in Amazon SageMaker before listing them on AWS Marketplace, SageMaker needs to perform basic validations. The product can be listed in the AWS Marketplace only if this validation process succeeds. This validation process uses the validation profile and sample data provided by you to run the following validations:* Create a transform job in your account using the above Model to verify your inference image works with SageMaker.
###Code
from src.modelpackage_validation_specification import ModelPackageValidationSpecification
import json
modelpackage_validation_specification = ModelPackageValidationSpecification().get_validation_specification_dict(
validation_role = role,
batch_transform_input = transform_input,
content_type = "text/csv",
instance_type = "ml.c4.xlarge",
output_s3_location = 's3://{}/{}'.format(sess.default_bucket(), common_prefix))
print(json.dumps(modelpackage_validation_specification, indent=4, sort_keys=True))
###Output
_____no_output_____
###Markdown
Putting it all togetherNow we put all the pieces together in the next cell and create an Amazon SageMaker Model Package.
###Code
import json
import time
model_package_name = "scikit-iris-detector-" + str(round(time.time()))
create_model_package_input_dict = {
"ModelPackageName" : model_package_name,
"ModelPackageDescription" : "Model to detect 3 different types of irises (Setosa, Versicolour, and Virginica)",
"CertifyForMarketplace" : True
}
create_model_package_input_dict.update(modelpackage_inference_specification)
create_model_package_input_dict.update(modelpackage_validation_specification)
print(json.dumps(create_model_package_input_dict, indent=4, sort_keys=True))
smmp.create_model_package(**create_model_package_input_dict)
###Output
_____no_output_____
###Markdown
Describe the ModelPackage The next cell describes the ModelPackage and waits until it reaches a terminal state (Completed or Failed)
###Code
import time
import json
while True:
response = smmp.describe_model_package(ModelPackageName=model_package_name)
status = response["ModelPackageStatus"]
print (status)
if (status == "Completed" or status == "Failed"):
print (response["ModelPackageStatusDetails"])
break
time.sleep(5)
###Output
_____no_output_____
###Markdown
Data preparation for AI training by NaturalisThis notebook executes all steps required to train a species recognition model based on various data sources, predominantly Artsobservasjoner.
###Code
from process_AO import process_AO
import os
process_AO(os.path.join("./Input", "Artsobservasjoner.csv"), "./Output")
from process_GBIF import process_GBIF
import os
process_GBIF(os.path.join("./Input", "GBIF.zip"), "./Output")
from process_ML import process_ML
import os
process_ML(
[
os.path.join("./Input", "ML_snegl.csv"),
os.path.join("./Input", "ML_fugl.csv"),
os.path.join("./Input", "ML_lepi.csv"),
os.path.join("./Input", "ML_meitemark.csv"),
os.path.join("./Input", "ML_fremmed.csv"),
],
"./Output")
from process_SUPP import process_supp
import os
process_supp(
[
os.path.join("./Input", "lichens_bold.csv"),
],
"./Output",
"Lichens"
)
from process_SUPP import process_supp
import os
process_supp(
[
os.path.join("./Input", "Fiskebilder.csv"),
],
"./Output",
"Fish",
checkfolder="/path/to/folder/with/img/files"
)
from process_SUPP import process_supp
import os
process_supp(
[
os.path.join("./Input", "Slugs.csv"),
],
"./Output",
"Slugs"
)
from combine import combine
import os
combine(
[
os.path.join("./Output", "AO_taxa.csv"),
os.path.join("./Output", "GBIF_taxa.csv"),
os.path.join("./Output", "ML_taxa.csv"),
os.path.join("./Output", "Lichens_taxa.csv"),
os.path.join("./Output", "Fish_taxa.csv"),
os.path.join("./Output", "Slugs_taxa.csv"),
],
[
os.path.join("./Output", "AO_images.csv"),
os.path.join("./Output", "GBIF_images.csv"),
os.path.join("./Output", "ML_images.csv"),
os.path.join("./Output", "Lichens_images.csv"),
os.path.join("./Output", "Fish_images.csv"),
os.path.join("./Output", "Slugs_images.csv"),
],
outputfolder="./Output",
    previousImageList="./Input/previous_images.csv"
)
from extensions import filter_extensions
filter_extensions(os.path.join("./Output", "images.csv"), ["png", "jpeg", "jpg"])
import pandas as pd
from tqdm import tqdm
tqdm.pandas()
# There are some taxa that have multiple "valid" entries. Replace those with the correct ones
df = pd.read_csv(os.path.join("./Output", "images.csv"))
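# Mapping applied below: NBIC:217764 -> NBIC:100959 and NBIC:162803 -> NBIC:100392.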
df["accepted_taxon_id_at_source"] = df["accepted_taxon_id_at_source"].progress_apply(lambda x: "NBIC:100959" if x == "NBIC:217764" else ("NBIC:100392" if x == "NBIC:162803" else x))
df["taxon_id_at_source"] = df["accepted_taxon_id_at_source"]
print(f"{len(df)} images")
df.to_csv(os.path.join("./Output", "images.csv"), index=False)
df = pd.read_csv(os.path.join("./Output", "taxa.csv"))
df = df[(df["accepted_taxon_id_at_source"] != "NBIC:217764") & (df["accepted_taxon_id_at_source"] != "NBIC:162803")]
print(f"{len(df)} taxa")
df.to_csv(os.path.join("./Output", "taxa.csv"), index=False)
###Output
/tmp/ipykernel_9296/145595103.py:8: DtypeWarning: Columns (8,10) have mixed types. Specify dtype option on import or set low_memory=False.
df = pd.read_csv(os.path.join("./Output", "images.csv"))
100%|██████████| 1702829/1702829 [00:01<00:00, 1562923.91it/s]
###Markdown
Import the data
###Code
mails = pd.read_csv("datasets/spam_ham_dataset.csv")
###Output
_____no_output_____
###Markdown
Take a quick look at the dataset
###Code
mails.head()
###Output
_____no_output_____
###Markdown
I will not need the first two columns: the model will not operate on the string category, and the first column does not give me any additional information. I check how many of the mails are spam and how many are ham.
###Code
mails['label'].value_counts()
###Output
_____no_output_____
###Markdown
I split the data into features and labels:
###Code
x, y = mails["text"].values, mails["label_num"].values
###Output
_____no_output_____
###Markdown
I check the length of the first element to see the effects of data cleaning later on.
###Code
len(x[0])
###Output
_____no_output_____
###Markdown
Data cleaning I will clean the data a little first: I will make sure that lower- and uppercase words meaning the same thing are treated the same way, and remove the special characters and numbers. I should also get rid of the "Subject:" at the beginning of each message, treating it not as a stopword (stopwords are taken care of separately) but as the start of each message: I do not want to remove it from the inside of a mail if it happens to occur there.
###Code
StopWords = stopwords.words("english")
def clean(text):
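    # Steps: drop the leading "subject: ", lowercase, remove English stopwords,
    # strip non-alphabetic characters, and collapse repeated spaces.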
text = text[len('subject: '):]
text = text.lower()
text = ' '.join([word for word in text.split() if word not in StopWords])
text = re.sub(r'([^a-zA-Z ]+?)',' ', text)
text = re.sub(' +', ' ', text)
return text
x = [clean(text) for text in x]
###Output
_____no_output_____
###Markdown
I check the length of the first mail now:
###Code
len(x[0])
###Output
_____no_output_____
###Markdown
That's quite a difference. It will also speed up training, as I removed the stopwords. Get the words Here I will find how many different words there are in the dataset and check their frequency.
###Code
counts = Counter()
for sentence in x:
    counts.update(word.strip() for word in sentence.split())
sorted_counts = counts.most_common()
num_words = len(sorted_counts)
num_words
###Output
_____no_output_____
###Markdown
That's a lot of unique words! I will now check the frequencies of their occurrences in the mails.
###Code
fig = px.histogram(x=counts.values(), range_x=[1,150])
fig.update_layout(xaxis_title="Number of occurrences", yaxis_title="Number of words", title="Count of words distribution")
fig.show()
###Output
_____no_output_____
###Markdown
Most of the words are not used even 10 times. The encoding will probably use only a part of them. Split into train and test data I divide the set into two parts: a training set and a testing set. I do not shuffle the data, as it is already unordered.
###Code
x_train, x_test = x[: int(len(x) * .8)], x[int(len(x) * .8):]
y_train, y_test = y[: int(len(y) * .8)], y[int(len(y) * .8):]
###Output
_____no_output_____
###Markdown
Encoding - bag of words I will use the bag-of-words representation provided by scikit-learn.
###Code
vectorizer = CountVectorizer()
X_train = vectorizer.fit_transform(x_train)
X_test = vectorizer.transform(x_test)
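# Optional sketch (not used above): since most words occur only a few times, the
# vocabulary could be trimmed, e.g. CountVectorizer(min_df=5) to ignore words that
# appear in fewer than 5 mails.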
###Output
_____no_output_____
###Markdown
Model training I will use a Support Vector Machine to classify the mails.
###Code
from sklearn import svm
model = svm.SVC().fit(X_train, y_train)
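# Note: SVC() is used with its defaults here, i.e. an RBF kernel and C=1.0.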
###Output
_____no_output_____
###Markdown
I check how the model performs on the previously prepared test data.
###Code
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Ligandnet workflow
###Code
#**************************************
# Govinda KC #
# UTEP, Computational Science #
# Last modified: 1/25/20 #
# *************************************
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import warnings
import os, sys, json, glob
sys.path.append('utilities')
from train2 import Train
from fetch_ligand2 import Pharos_Data
from utility import FeatureGenerator # for features generation of txt file
from utility2 import FeatureGenerator2 # for features generation of sdf file
import pandas as pd
import numpy as np
from tqdm import tqdm
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn import metrics
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.utils.class_weight import compute_class_weight
import joblib
from sklearn.neural_network import MLPClassifier
# from sklearn.metrics import make_scorer, roc_auc_score, recall_score, accuracy_score, precision_score
class Run_Workflow:
def __init__(self, actives, decoys):
self.actives = actives
self.decoys = decoys
self.results = dict()
def get_fingerprints(self,smiles):
try:
fg = FeatureGenerator(smiles)
features = fg.toTPATF()
return features
except Exception as e: print(e)
def get_models(self):
# Get features at first!
if not self.fp_generation():
print('Error: features extraction failed!')
return
try:
t = Train(self.actives_x, self.decoys_x)
t.train_models()
except Exception as e: print(e)
def fp_generation(self):
# Fingerprint generation
        print('Please wait! Fingerprints are being generated...')
if self.decoys[-4:] == '.sdf' and self.actives[-4:] == '.sdf':
# Get fingerprints for actives
self.actives_x = self.sdf_fp_active()
# Get fingerprints for decoys
self.decoys_x = self.sdf_fp_decoy()
return True
elif self.decoys[-4:] == '.sdf':
df = pd.read_csv(self.actives)
# df = pd.read_csv(open(self.actives,'rU'))#, encoding='utf-8', engine='c')
# Get fingerprints for actives
df['tpatf'] = df.SMILES.apply(self.get_fingerprints)
self.actives_x = np.array([f for f in df.tpatf.values], dtype = np.float32)
# Get fingerprints for decoys
self.decoys_x = self.sdf_fp_decoy()
return True
else:
df = pd.read_csv(self.actives)
df2 = pd.read_csv(self.decoys)
# df = pd.read_csv(open(self.actives,'rU'))#, encoding='utf-8', engine='c')
# df2 = pd.read_csv(open(self.decoys, 'rU'))#, encoding='utf-8', engine='c')
# Get fingerprints for actives
df['tpatf'] = df.SMILES.apply(self.get_fingerprints)
# Get fingerprints for decoys
df2['tpatf'] = df2.SMILES.apply(self.get_fingerprints)
# numpy arrays
self.actives_x = np.array([f for f in df.tpatf.values], dtype = np.float32)
self.decoys_x = np.array([f for f in df2.tpatf.values], dtype = np.float32)
return True
return False
def sdf_fp_decoy(self):
try:
fg2 = FeatureGenerator2(self.decoys)
feat_decoy = fg2.sepTPATF()
return feat_decoy
except Exception as e: print(e)
def sdf_fp_active(self):
try:
fg2 = FeatureGenerator2(self.actives)
feat_active = fg2.sepTPATF()
return feat_active
except Exception as e: print(e)
# If users have their own actives and decoys
def actives_decoys():
active_file = input("Uniprot id of the file? Example: P07948 \n")
active_file = active_file.strip()
print('Looking for active and decoy files....')
# active in .txt
actives = main_path+'actives/'+active_file+'.txt'
if not os.path.isfile(actives):
# active in .sdf
actives = main_path+'actives/'+active_file+'.sdf'
# decoy in .txt..
decoys = main_path+'decoys/'+"decoys_" + active_file +".txt"
if not os.path.isfile(decoys):
# decoy in .sdf..
decoys = main_path+'decoys/'+ "decoys_" +active_file+".sdf"
if os.path.isfile(actives) and os.path.isfile(decoys):
print('Actives and Decoys are found!')
return actives, decoys
# Searches decoys in our database for give active file (Uniprot id)
def actives_bt_not_decoys():
active_file = input("Uniprot id of the file? Example: P07948 \n")
active_file = active_file.strip()
actives = main_path+'actives/'+active_file+'.txt'
if not os.path.isfile(actives):
actives = main_path+'actives/'+active_file+'.sdf'
# Path for decoys database
decoys_database = '../decoys_database'
# if not os.path.isfile(os.path.join(decoys_database, active_file+".txt")):
print('Searching decoys .....')
    if not os.path.isfile(os.path.join(decoys_database, "decoys_" + active_file + ".sdf")):
print("Decoys are not found, exiting! Look for decoys in DUDE website and come back!")
sys.exit(1)
# decoys = os.path.join(decoys_database, active_file+".txt")
decoys = os.path.join(decoys_database, "decoys_" +active_file+".sdf")
if os.path.isfile(actives) and os.path.isfile(decoys):
print('Actives and decoys are extracted!')
return actives, decoys
def no_actives_and_decoys():
active_file = input("Uniprot id of the file? Example: P07948 \n")
active_file = active_file.strip()
active_dir = main_path+'/'+ "actives"
pdata = Pharos_Data(active_file, active_dir )
print('Actives for a given protein are getting downloaded from Pharos website!')
pdata.fetch_ligand()
actives = main_path+'actives/'+active_file+'.txt'
print('Searching decoys .....')
decoys_database = '../decoys_database/'
if not os.path.isfile(os.path.join(decoys_database, "decoys_" +active_file+".sdf")):
print("Decoys are not found, exiting! Look for decoys in DUDE website and come back!")
sys.exit(1)
    decoys = os.path.join(decoys_database, "decoys_" + active_file + ".sdf")
if os.path.isfile(actives) and os.path.isfile(decoys):
print('Actives and decoys are extracted!')
return actives, decoys
# Start here
def start_workflow():
print('Actives and decoys should either be in sdf file or text file (with header "SMILES" for txt files!)')
print('ACTIVES AND DECOYS FILE NAMES SHOULD BE LIKE THAT: P07948.txt(or .sdf) and decoys_P07948.txt (or .sdf) ')
print('PLEASE, MAKE SURE YOU HAVE FOLDERS "actives" and "decoys"')
print('DO YOU HAVE "actives" and "decoys" FOLDERS? Type y for Yes and n for No!')
check = input()
if check != 'y':
print('Exiting...')
sys.exit(1)
print("Do you have actives? Please type y for Yes and n for No !")
answer1 = input()
print("Do you have decoys? Please type y for Yes and n for No !")
answer2 = input()
if answer1 == 'y' and answer2 == 'y':
actives, decoys = actives_decoys()
rw = Run_Workflow(actives, decoys)
rw.get_models()
elif answer1 == 'y' and answer2 == 'n':
actives, decoys = actives_bt_not_decoys()
rw = Run_Workflow(actives, decoys)
rw.get_models()
elif answer1 == 'n' and answer2 == 'n':
actives, decoys = no_actives_and_decoys()
rw = Run_Workflow(actives, decoys)
rw.get_models()
else:
print('Please provide the right information!. Exiting!')
sys.exit(1)
if __name__ == '__main__':
# Path for working directory
print("Please, provide the path for working directory. Example: /Users/gvin/ligandnet_workflow/test_ligandnet/ \n")
main_path = input()
main_path = main_path.strip()
os.chdir(main_path)
dirs = ["actives", "decoys"]
for _dir in dirs:
if not os.path.isdir(_dir): os.makedirs(_dir)
if main_path[-1]!='/':
main_path = main_path+'/'
# Start Function
start_workflow()
###Output
Please, provide the path for working directory. Example: /Users/gvin/ligandnet_workflow/test_ligandnet/
/Users/gvin/ligandnet_workflow/test_ligandnet/
Actives and decoys should either be in sdf file or text file (with header "SMILES" for txt files!)
ACTIVES AND DECOYS FILE NAMES SHOULD BE LIKE THAT: P07948.txt(or .sdf) and decoys_P07948.txt (or .sdf)
PLEASE, MAKE SURE YOU HAVE FOLDERS "actives" and "decoys"
DO YOU HAVE "actives" and "decoys" FOLDERS? Type y for Yes and n for No!
y
Do you have actives? Please type y for Yes and n for No !
y
Do you have decoys? Please type y for Yes and n for No !
y
Uniprot id of the file? Example: P07948
P07948
Looking for active and decoy files....
Actives and Decoys are found!
Pleae wait! Fingerprints are getting generated......
Please choose the name (Example type 1 for Random Forest) of the model from the following options!
1. Random Forest Classifier
2. Extreme Gradient Boosting
3. Support Vector Classifier
4. Artificial Neural Network
5. All
6. Exit with out running any model
2
Training xgboost..
Results: {'xgb': {'roc_auc': 0.9576923076923077, 'accuracy': 0.9393939393939394, 'f1_score': 0.9393939393939394, 'cohen_kappa': 0.8730769230769231, 'mcc': 0.8730769230769231, 'data_info': {'train_count': 129, 'test_count': 33, 'actives_count': 62, 'decoys_count': 100}}}
Writing results
Done
###Markdown
This notebook will take a dataset of images, run them through TSNE to group them up (if enabled), then create a StyleGAN2 model with or without ADA.Below are the settings to choose when running this workflow. Before running, make sure all images you want to use are in a folder inside of the images folder. For example, have a folder inside images called mona-lisa filled with pictures of different versions of the Mona Lisa. Please make sure the subfolder name contains no whitespace.If TSNE is enabled, the program will halt after processing the images and ask you to choose which cluster to use. The clusters will be in the clusters folder.Before running, make sure your kernel is set to Python 3 (TensorFlow 1.15 Python 3.7 GPU Optimized)
###Code
dataset_name = 'mona-lisa'
use_ada = True
use_tsne = False
use_spacewalk = True
gpus = 2
# Crop Settings
# Choose center or no-crop
# TODO: Add random
crop_type = 'no-crop'
resolution = 512
# TSNE Settings
# Choose number of clusters to make or None for auto clustering
num_clusters = None
# ADA Settings
knum = 10
# Spacewalk Settings
fps = 24
seconds = 10
#Leave seeds = None for random seeds or
# enter a list in the form of [int, int, int..] to define the seeds
seeds = None
# set walk_type to 'line', 'sphere', 'noiseloop', or 'circularloop'
walk_type = 'sphere'
!pip install -r requirements.txt
import os
import train
from PIL import Image, ImageFile, ImageOps
import shutil
import math
ImageFile.LOAD_TRUNCATED_IMAGES = True
def resize(pil_img, res):
return pil_img.resize((res, res))
def crop_center(pil_img, res):
crop = res
img_width, img_height = pil_img.size
if img_width < crop:
crop = img_width
if img_height < crop:
crop = img_height
a = (img_width - crop) // 2
b = (img_height - crop) // 2
c = (img_width + crop) // 2
d = (img_height + crop) // 2
cropped_image = pil_img.crop((a,b,c,d))
return resize(cropped_image, res)
def no_crop(pil_img, res):
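    # Pad the shorter side with a white border so the image becomes square
    # (nothing is cropped away), then resize it to res x res.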
color = [0, 0, 0]
img_width, img_height = pil_img.size
if img_width < img_height:
top = 0
bottom = 0
left = math.ceil((img_height - img_width) / 2.0)
right = math.floor((img_height - img_width) / 2.0)
else:
        top = math.ceil((img_width - img_height) / 2.0)
        bottom = math.floor((img_width - img_height) / 2.0)
left = 0
right = 0
border_image = ImageOps.expand(pil_img, border=(left, top, right, bottom), fill='white')
return resize(border_image, res)
image_dir = './images/'
tmp_dir = './tmp/'
image_dir = os.path.join(image_dir, dataset_name)
tmp_dir = os.path.join(tmp_dir, dataset_name)
if not os.path.exists(tmp_dir):
os.makedirs(tmp_dir)
else:
try:
shutil.rmtree(tmp_dir)
except OSError as e:
print("Error: %s : %s" % (dir_path, e.strerror))
os.makedirs(tmp_dir)
for filename in os.listdir(image_dir):
file_extension = os.path.splitext(filename)[-1]
if file_extension != '.jpg' and file_extension != '.png':
print(file_extension)
continue
image_path = os.path.join(image_dir, filename)
image = Image.open(image_path)
mode = image.mode
if str(mode) != 'RGB':
continue
if crop_type == "center":
image = crop_center(image, resolution)
if crop_type == "no-crop":
image = no_crop(image, resolution)
tmp_path = os.path.join(tmp_dir, filename)
image.save(tmp_path)
if use_tsne:
!python tsne.py --path={tmp_dir}
else:
print('TSNE is not in use')
###Output
_____no_output_____
###Markdown
If TSNE is enabled when it is finished running check the Clusters folder and choose the cluster you want to use below
###Code
if use_tsne:
clusters = []
while True:
x = input("Enter a cluster you want to use or Enter to continue: ")
if x == '':
break
clusters.append(int(x))
dataset_dir = os.path.join("./datasets", dataset_name)
if use_ada and use_tsne:
image_dir = os.path.join("./tmp", str(dataset_name + "_clusters"))
!python dataset_tool.py create_from_images {dataset_dir} {image_dir}
!python train.py --outdir=./training-runs --gpus={gpus} --res={resolution} --data={dataset_dir} --kimg={knum}
elif use_ada:
image_dir = os.path.join("./tmp", dataset_name)
!python dataset_tool.py create_from_images {dataset_dir} {image_dir}
!python train.py --outdir=./training-runs --gpus={gpus} --res={resolution} --data={dataset_dir} --kimg={knum}
else:
print("ADA is not in use")
###Output
_____no_output_____
###Markdown
Machine Learning Hands-on (Workflow Edition) 1. Overview of this hands-onUsing the [UCI Adult dataset](https://archive.ics.uci.edu/ml/datasets/Adult), we perform binary classification of whether a person's income is $50K or more, based on data such as age and occupation. The hands-on proceeds as follows. 1. Data acquisition 1. Data analysis 1. Data preprocessing 1. Creating the model 1. Evaluating the model 2. Preparation 2.1. Check the runtimeIf you are using Google Colab, select "Runtime" → "Change runtime type" from the menu and set "Hardware accelerator" to "GPU". 2.2. Load the libraries
###Code
!pip install pandas tensorflow numpy matplotlib seaborn scikit-learn
import pandas as pd
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
###Output
_____no_output_____
###Markdown
3. Data acquisition Read the data file using the pandas API. * This file has no header row, so we set the column names ourselves. * The unknown-value marker "?" is converted to N/A. * These missing values will be handled later.
###Code
headers = ('age', 'workclass', 'fnlwgt', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'income')
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', sep=', ', names=headers, na_values='?')
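# Note: a multi-character separator such as ', ' makes pandas fall back to the
# slower Python parsing engine; it is used here to strip the space after each comma.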
###Output
_____no_output_____
###Markdown
Let's display the loaded data.
###Code
df.head(10)
###Output
_____no_output_____
###Markdown
4. Data analysis 4.1. Number of records per labelAbout 76% of the records are below $50K, so keep in mind that **a model that predicts "below $50K" for every record still achieves an accuracy of around 76%**.
###Code
df.groupby('income').size()
df.groupby('income').size() / len(df)
###Output
_____no_output_____
###Markdown
4.2. Analysis of quantitative variables Distribution of the features per labelfnlwgt shows the same distribution for both the over-$50K and the at-or-below-$50K groups, so we judge it to be unrelated to the label and exclude it from the features.
###Code
plt.figure(figsize=(20, 10))
features = ['age', 'fnlwgt', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
for i in range(len(features)):
plt.subplot(2, 3, i+1)
plt.title(features[i])
sns.kdeplot(df[df['income']=='<=50K'][features[i]], label='<=50K')
sns.kdeplot(df[df['income']=='>50K'][features[i]], label='>50K')
df[df['income']=='<=50K'].describe()
df[df['income']=='>50K'].describe()
###Output
_____no_output_____
###Markdown
4.3. Analysis of categorical variables Distribution of the features per label
###Code
pd.crosstab(index=df['income'], columns=df['workclass'], normalize='columns')
pd.crosstab(index=df['income'], columns=df['marital-status'], normalize='columns')
pd.crosstab(index=df['income'], columns=df['occupation'], normalize='columns')
pd.crosstab(index=df['income'], columns=df['relationship'], normalize='columns')
pd.crosstab(index=df['income'], columns=df['race'], normalize='columns')
pd.crosstab(index=df['income'], columns=df['sex'], normalize='columns')
pd.crosstab(index=df['income'], columns=df['native-country'], normalize='columns')
###Output
_____no_output_____
###Markdown
4.4. Analysis of relationships between features "education" vs "education-num"The plot below shows that they are equivalent, so "education" will be excluded.
###Code
sns.boxplot(y='education', x='education-num', data=df)
###Output
_____no_output_____
###Markdown
4.5. Checking for missing valuesThere are missing values, so these records will be removed later.
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
5. Data preprocessing 5.1. State before preprocessing
###Code
df.head(10)
###Output
_____no_output_____
###Markdown
5.2. Removing records with missing values
###Code
df = df.dropna()
df.isnull().sum()
###Output
_____no_output_____
###Markdown
5.3. Creating the labelsExtract the "income" column from the data and convert "<=50K" and ">50K" to the numeric values 0 and 1, respectively.
###Code
ys = pd.get_dummies(df['income'], drop_first=True)
ys.head(10)
###Output
_____no_output_____
###Markdown
5.4. Removing unnecessary features * "income" is the label, so it is removed * "fnlwgt" has no relationship with the label, so it is removed * "education" is the same feature as "education-num", so it is removed
###Code
drop_columns = ['income', 'fnlwgt', 'education']
df = df.drop(drop_columns, axis=1)
df.head(10)
###Output
_____no_output_____
###Markdown
5.5. Dummy encoding of the categorical variables
###Code
xs = pd.get_dummies(df)
xs.head(10)
###Output
_____no_output_____
###Markdown
6. Creating the model 6.1. Data splittingSplit the data into three parts: training, validation, and test data. First, 20% of the whole dataset is set aside as test data, and then 20% of the remaining data is set aside as validation data.
###Code
all_xs = xs.values
all_ys = ys.values
tmp_xs, test_xs, tmp_ys, test_ys = train_test_split(all_xs, all_ys, test_size=0.2)
train_xs, valid_xs, train_ys, valid_ys = train_test_split(tmp_xs, tmp_ys, test_size=0.2)
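# This yields roughly 64% training, 16% validation and 20% test data.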
print(train_xs.shape, valid_xs.shape, test_xs.shape, train_ys.shape, valid_ys.shape, test_ys.shape)
###Output
_____no_output_____
###Markdown
6.2. NormalizationNormalize the features so that they fall within the range $0.0\le{}x\le{}1.0$.
###Code
scaler = MinMaxScaler()
scaler.fit(all_xs)
train_xs = scaler.transform(train_xs)
valid_xs = scaler.transform(valid_xs)
test_xs = scaler.transform(test_xs)
###Output
_____no_output_____
###Markdown
6.3. Building the model First, let's build a single-layer perceptron model. * The number of parameters equals the number of features * The output dimension is 1, since this is binary classification
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(1, input_dim=train_xs.shape[1], activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
###Output
_____no_output_____
###Markdown
7. Evaluating the model 7.1. Running the trainingAs training runs, the loss and accuracy on the training and validation data are printed for each epoch (training once on all of the training data is called **one epoch**). * `loss` : loss on the training data * `acc` : accuracy on the training data * `val_loss`: loss on the validation data * `val_acc`: accuracy on the validation data
###Code
hist = model.fit(train_xs, train_ys, batch_size=128, epochs=100, validation_data=(valid_xs, valid_ys))
###Output
_____no_output_____
###Markdown
7.2. Evaluating the modelLet's plot the loss and accuracy for the training and validation data.
###Code
%matplotlib inline
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(range(1, 101), hist.history["loss"])
plt.plot(range(1, 101), hist.history["val_loss"])
plt.title("loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.subplot(1, 2, 2)
plt.plot(range(1, 101), hist.history["acc"])
plt.plot(range(1, 101), hist.history["val_acc"])
plt.title("accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
###Output
_____no_output_____
###Markdown
Let's compute the performance on the test data.
###Code
pred = model.predict_classes(test_xs, batch_size=128)
accuracy = accuracy_score(test_ys, pred)
precision = precision_score(test_ys, pred)
recall = recall_score(test_ys, pred)
f1 = f1_score(test_ys, pred)
print("accuracy = {:.2f}, precision = {:.2f}, recall = {:.2f}, F1-score = {:.2f}".format(accuracy, precision, recall, f1))
###Output
_____no_output_____
###Markdown
7.3. Evaluating another modelThis time, let's try a model with a three-layer perceptron.
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(128, input_dim=train_xs.shape[1], activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
hist = model.fit(train_xs, train_ys, batch_size=128, epochs=100, validation_data=(valid_xs, valid_ys))
%matplotlib inline
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(range(1, 101), hist.history["loss"])
plt.plot(range(1, 101), hist.history["val_loss"])
plt.title("loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.subplot(1, 2, 2)
plt.plot(range(1, 101), hist.history["acc"])
plt.plot(range(1, 101), hist.history["val_acc"])
plt.title("accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
pred = model.predict_classes(test_xs, batch_size=128)
accuracy = accuracy_score(test_ys, pred)
precision = precision_score(test_ys, pred)
recall = recall_score(test_ys, pred)
f1 = f1_score(test_ys, pred)
print("accuracy = {:.2f}, precision = {:.2f}, recall = {:.2f}, F1-score = {:.2f}".format(accuracy, precision, recall, f1))
###Output
_____no_output_____
###Markdown
Macro-Pipeline Workflow Set Run-Specific InputFill in the username/password for the SURF dCache. LAZ files updated since the last workflow run will be re-run through the full pipeline.
###Code
import datetime
import getpass
webdav_login = input('WebDAV username: ')
webdav_password = getpass.getpass('WebDAV password: ')
last_run = datetime.datetime.strptime(input('Date last run (YYYY-MM-DD): '), '%Y-%m-%d')
###Output
_____no_output_____
###Markdown
Check Connection to Remote Storage
###Code
remote_path_root = pathlib.Path('/pnfs/grid.sara.nl/data/projects.nl/eecolidar/01_Escience/')
wd_opts = {
'webdav_hostname': 'https://webdav.grid.surfsara.nl:2880',
'webdav_login': webdav_login,
'webdav_password': webdav_password
}
assert get_wdclient(wd_opts).check(remote_path_root.as_posix())
###Output
_____no_output_____
###Markdown
Setup ClusterSetup Dask cluster used for all the macro-pipeline calculations.
###Code
local_tmp = pathlib.Path('/tmp')
cluster = LocalCluster(processes=True,
n_workers=2,
threads_per_worker=1,
local_directory=local_tmp/'dask-worker-space')
# nprocs_per_node = 2
# cluster = SSHCluster(hosts=['172.17.0.2',
# '172.17.0.2',
# '172.17.0.3'],
# connect_options={'known_hosts': None,
# 'username': 'ubuntu',
# 'client_keys': '/home/ubuntu/.ssh/id_rsa'},
# worker_options={'nthreads': 1,
# 'nprocs': nprocs_per_node,
# 'local_directory': local_tmp/'dask-worker-space'},
# scheduler_options={'dashboard_address': '8787'})
cluster
###Output
_____no_output_____
###Markdown
RetilingThe raw point-cloud files are downloaded and retiled to a regular grid.
###Code
# dCache path to raw LAZ files
remote_path_ahn = remote_path_root / 'test_pipeline/test_full/raw'
# dCache path where to copy retiled PLY files
remote_path_retiled = remote_path_ahn.parent / 'retiled'
# details of the retiling schema
grid = {
'min_x': -113107.81,
'max_x': 398892.19,
'min_y': 214783.87,
'max_y': 726783.87,
'n_tiles_side': 512
}
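# With this schema each tile spans (max_x - min_x) / n_tiles_side = 512000 / 512 = 1000 units (meters) per side.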
# determine which LAZ files have been updated since the last run
laz_files = [f for f in list_remote(get_wdclient(wd_opts), remote_path_ahn.as_posix())
if f.lower().endswith('.laz') and last_modified(wd_opts, remote_path_ahn/f) > last_run]
print('Retrieve and retile: {} LAZ files'.format(len(laz_files)))
# setup input dictionary to configure the retiling pipeline
retiling_input = {
'setup_local_fs': {'tmp_folder': local_tmp.as_posix()},
'pullremote': remote_path_ahn.as_posix(),
'set_grid': grid,
'split_and_redistribute': {},
'validate': {},
'pushremote': remote_path_retiled.as_posix(),
'cleanlocalfs': {}
}
# write input dictionary to JSON file
with open('retiling.json', 'w') as f:
json.dump(retiling_input, f)
macro = MacroPipeline()
# add pipeline list to macro-pipeline object and set the corresponding labels
macro.tasks = [Retiler(file).config(retiling_input).setup_webdav_client(wd_opts) for file in laz_files]
macro.set_labels([os.path.splitext(file)[0] for file in laz_files])
macro.setup_cluster(cluster=cluster)
# run!
macro.run()
# save outcome results and check that no error occurred before continuing
macro.print_outcome(to_file='retiling.out')
assert not macro.get_failed_pipelines()
###Output
_____no_output_____
###Markdown
Feature ExtractionFeatures computed for the retiled point-cloud data are assigned to a regular 'target' grid.
###Code
# target mesh size & list of features
tile_mesh_size = 10.
features = ['perc_95_normalized_height', 'pulse_penetration_ratio', 'entropy_normalized_height', 'point_density']
# dCache path where to copy the feature-enriched target data
remote_path_targets = remote_path_ahn.parent / 'targets'
# determine which tiles have been updated since last run, and extract tile index numbers
tiles = [t.strip('/') for t in list_remote(get_wdclient(wd_opts), remote_path_retiled.as_posix())
if fnmatch.fnmatch(t, 'tile_*_*/') and last_modified(wd_opts, remote_path_retiled/t) > last_run]
tile_indices = [[int(el) for el in tile.split('_')[1:]] for tile in tiles]
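# Retiled directories are named 'tile_<i>_<j>', where the two numbers are the
# tile's indices in the retiling grid defined above.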
print('Retrieve and process: {} tiles'.format(len(tile_indices)))
# setup input dictionary to configure the feature extraction pipeline
feature_extraction_input = {
'setup_local_fs': {'tmp_folder': local_tmp.as_posix()},
'pullremote': remote_path_retiled.as_posix(),
'load': {'attributes': ['raw_classification']},
'normalize': 1,
'apply_filter': {
'filter_type': 'select_equal',
'attribute': 'raw_classification',
'value': [1, 6]#ground surface (2), water (9), buildings (6), artificial objects (26), vegetation (?), and unclassified (1)
},
'generate_targets': {
'tile_mesh_size' : tile_mesh_size,
'validate' : True,
**grid
},
'extract_features': {
'feature_names': features,
'volume_type': 'cell',
'volume_size': tile_mesh_size
},
'export_targets': {
'attributes': features,
'multi_band_files': False
},
'pushremote': remote_path_targets.as_posix(),
# 'cleanlocalfs': {}
}
# write input dictionary to JSON file
with open('feature_extraction.json', 'w') as f:
json.dump(feature_extraction_input, f)
macro = MacroPipeline()
# add pipeline list to macro-pipeline object and set the corresponding labels
macro.tasks = [DataProcessing(t, tile_index=idx).config(feature_extraction_input).setup_webdav_client(wd_opts)
for t, idx in zip(tiles, tile_indices)]
macro.set_labels(tiles)
macro.setup_cluster(cluster=cluster)
# run!
macro.run()
# save outcome results and check that no error occurred before continuing
macro.print_outcome(to_file='feature_extraction.out')
assert not macro.get_failed_pipelines()
###Output
_____no_output_____
###Markdown
GeoTIFF ExportExport the rasterized features from the target grid to GeoTIFF files.
###Code
# dCache path where to copy the GeoTIFF files
remote_path_geotiffs = remote_path_ahn.parent / 'geotiffs'
# setup input dictionary to configure the GeoTIFF export pipeline
geotiff_export_input = {
'setup_local_fs': {'tmp_folder': local_tmp.as_posix()},
'pullremote': remote_path_targets.as_posix(),
'parse_point_cloud': {},
'data_split': {'xSub': 1, 'ySub': 1},
'create_subregion_geotiffs': {'output_handle': 'geotiff'},
'pushremote': remote_path_geotiffs.as_posix(),
'cleanlocalfs': {}
}
# write input dictionary to JSON file
with open('geotiff_export.json', 'w') as f:
json.dump(geotiff_export_input, f)
macro = MacroPipeline()
# add pipeline list to macro-pipeline object and set the corresponding labels
macro.tasks = [GeotiffWriter(input_dir=feature, bands=feature).config(geotiff_export_input).setup_webdav_client(wd_opts)
for feature in features]
macro.set_labels(features)
macro.setup_cluster(cluster=cluster)
# run!
macro.run()
# save outcome results and check that no error occurred before continuing
macro.print_outcome(to_file='geotiff_export.out')
assert not macro.get_failed_pipelines()
###Output
_____no_output_____
###Markdown
Terminate cluster
###Code
cluster.close()
###Output
_____no_output_____
###Markdown
Load data into dataframes
###Code
# Load data
vcf_df, feature_mapping = gwasio.load_vcf(vcf_path, info_keys=[], format_keys=["GT"])
#vcf_df = cudf.io.parquet.read_parquet("/data/1000-genomes/hail-dataset/1kg_full_jdaw_v2.pqt")
ann_df = gwasio.load_annotations(annotation_path)
print(vcf_df)
print("==")
print(ann_df)
###Output
_____no_output_____
###Markdown
Generate phenotype dataframe by merging vcf and annotation DF
###Code
phenotypes_df, features = dp.create_phenotype_df(vcf_df, ann_df, ['CaffeineConsumption','isFemale','SuperPopulation'], "call_GT",
vcf_sample_col="sample", ann_sample_col="Sample")
###Output
_____no_output_____
###Markdown
Run PCA on phenotype matrix
###Code
# Run PCA on phenotype dataframe
phenotypes_df = algos.PCA_concat(phenotypes_df, 2)
print(phenotypes_df)
colors = {'AFR':'red', 'AMR':'green', 'EAS':'blue', 'EUR':'yellow', 'SAS':'purple'}
from matplotlib.lines import Line2D
plt.scatter(phenotypes_df.PC0.to_array(), phenotypes_df.PC1.to_array(),
c=phenotypes_df.SuperPopulation.to_pandas().map(colors).values, s=9)
legend_elements = [Line2D([0], [0], marker='o', color='w', label=key,
markerfacecolor=value) for key, value in colors.items()]
plt.legend(handles=legend_elements)
###Output
_____no_output_____
###Markdown
Run GWAS with linear regression for each independent variant
###Code
# Fit linear regression model for each variant feature
print("Fitting linear regression model")
df = runner.run_gwas(phenotypes_df, 'CaffeineConsumption', features, algos.cuml_LinearReg, add_cols=['PC0', 'PC1'])
print(df)
plt.hist(-np.log(df["p_value"].to_array()), bins = np.linspace(0,1,100));
df.drop(columns="chrom", inplace=True)
g_feature_mapping = cudf.DataFrame(feature_mapping[["feature_id", "pos", "chrom"]])
df = df.merge(g_feature_mapping, how="inner", left_on=["feature"], right_on=["feature_id"])
df.chrom = df.chrom.astype("int64")
#plt.plot(df["feature"].to_array(), -np.log10(df["p_value"].to_array()), ".");
show_manhattan_plot(df, 'chrom', 'p_value', 'feature')
a = df["p_value"].to_array()
a.sort()
expect_p = np.linspace(0, 1, len(a))
#plt.plot(-np.log10(expect_p), -np.log10(a), '.')
#plt.plot([0,5],[0,5])
df["e_value"] = np.linspace(0, 1, len(a))
df["p_s_value"] = a
show_qq_plot(df, 'e_value', 'p_s_value', x_max=3, y_max=3)
from bokeh.plotting import figure
from bokeh.io import output_notebook, push_notebook, show
output_notebook()
plot = figure()
plot.circle(-np.log10(expect_p+1e-10), -np.log10(a))
handle = show(plot, notebook_handle=True)
# Update the plot title in the earlier cell
plot.title.text = "qqplot"
push_notebook(handle=handle)
!wget https://www.broadinstitute.org/files/shared/diabetes/scandinavs/DGI_chr3_pvals.txt
pvals = []
with open('DGI_chr3_pvals.txt') as f:
for r in f:
r = r.strip()
if r == 'PVAL':
continue
pvals.append(float(r))
pvals = np.array(pvals)
pvals.sort()
expect_p = np.linspace(0, 1, len(pvals))
plt.plot(-np.log10(expect_p), -np.log10(pvals), '.')
plt.plot([0,5],[0,5])
from bokeh.plotting import figure
from bokeh.io import output_notebook, push_notebook, show
# from bokeh.models import Range1d
output_notebook()
plot = figure(plot_width=300, plot_height=300,
y_range=(0,5),
x_range=(0,5))
plot.circle(-np.log10(expect_p+1e-10), -np.log10(pvals))
plot.line([0,5],[0,5])
handle = show(plot, notebook_handle=True)
# Update the plot title in the earlier cell
plot.title.text = "qqplot"
push_notebook(handle=handle)
pvals
###Output
_____no_output_____
###Markdown
Load data into dataframes
###Code
# Load data
vcf_df, feature_mapping = gwasio.load_vcf(vcf_path, info_keys=[], format_keys=["GT"])
#vcf_df = cudf.io.parquet.read_parquet("data/1kg_full_jdaw_v2.pqt")
#feature_mapping = vcf_df[["chrom", "pos", "feature_id"]].to_pandas()
ann_df = gwasio.load_annotations(annotation_path)
print(vcf_df)
print("==")
print(ann_df)
###Output
_____no_output_____
###Markdown
Generate phenotype dataframe by merging vcf and annotation DF
###Code
phenotypes_df, features = dp.create_phenotype_df(vcf_df, ann_df, ['CaffeineConsumption','isFemale','SuperPopulation'], "call_GT",
vcf_sample_col="sample", ann_sample_col="Sample")
###Output
_____no_output_____
###Markdown
Run PCA on phenotype matrix
###Code
# Run PCA on phenotype dataframe
phenotypes_df = algos.PCA_concat(phenotypes_df, 2)
print(phenotypes_df)
colors = {'AFR':'red', 'AMR':'green', 'EAS':'blue', 'EUR':'yellow', 'SAS':'purple'}
from matplotlib.lines import Line2D
plt.scatter(phenotypes_df.PC0.to_array(), phenotypes_df.PC1.to_array(),
c=phenotypes_df.SuperPopulation.to_pandas().map(colors).values, s=9)
legend_elements = [Line2D([0], [0], marker='o', color='w', label=key,
markerfacecolor=value) for key, value in colors.items()]
plt.legend(handles=legend_elements)
###Output
_____no_output_____
###Markdown
Run GWAS with linear regression for each independent variant
###Code
# Fit linear regression model for each variant feature
print("Fitting linear regression model")
df = runner.run_gwas(phenotypes_df, 'CaffeineConsumption', features, algos.cuml_LinearReg, add_cols=['PC0', 'PC1'])
print(df)
plt.hist(-np.log(df["p_value"].to_array()), bins = np.linspace(0,1,100));
df.drop(columns="chrom", inplace=True)
g_feature_mapping = cudf.DataFrame(feature_mapping[["feature_id", "pos", "chrom"]])
df = df.merge(g_feature_mapping, how="inner", left_on=["feature"], right_on=["feature_id"])
df.chrom = df.chrom.astype("int64")
show_manhattan_plot(df, 'chrom', 'pos', 'p_value', title='GWAS Manhattan Plot')
a = df["p_value"].to_array()
a.sort()
expect_p = np.linspace(0, 1, len(a))
df["e_value"] = np.linspace(0, 1, len(a))
df["p_s_value"] = a
show_qq_plot(df, 'e_value', 'p_s_value', x_max=3, y_max=3)
###Output
_____no_output_____
###Markdown
Imports
###Code
import os, pickle, joblib, torch
import numpy as np
import pandas as pd
# Star imports assumed (the original ".py" suffixes are invalid and the helpers are used unqualified below)
from ml_pdf import *
from gen_pdfs import *
from dns_plotter import *
import utilities
###Output
_____no_output_____
###Markdown
Plot DNS data
###Code
fdir = '/projects/exact/Shashank/plt_DRM_0.7_1095_ML_Output'
plot_dns(fdir)
###Output
_____no_output_____
###Markdown
Generate subvolume (aka dice) data You first need to generate the sub-volumes ("dices") by running:```$ python dicer.py -f $PELE_OUTPUT_FILE```where `$PELE_OUTPUT_FILE` is the output files from the Pele DNS. The default arguments will generate the necessary files for the analysis below. But one could also get a continuous series of dices (a single one would require too much memory) for a large part of the domain by doing:```$ python dicer.py -f /projects/exact/Shashank/plt_DRM_0.7_1095_ML_Output -z 0.003125 0.009375 0.015625 0.021875 0.028125 0.034375 0.040625 0.046875 0.053125 0.059375 0.065625 0.071875 0.078125 0.084375 0.090625 0.096875 0.103125 0.109375 0.115625 0.121875 0.128125 0.134375 0.140625 0.146875 0.153125 -ht 0.00625 --extent -0.125 0.125 --output data_full``` You can concatenate dices together by doing the following (e.g., for that last command)
###Code
dices = ["dice_{0:04d}".format(i) for i in range(25)]
concatenate_dices(dices=dices, datadir=os.path.abspath("data_full"))
###Output
_____no_output_____
###Markdown
Generate the PDFs from the DNS subvolume data
###Code
dice = "dice_0004"
datadir = os.path.abspath('data')
pdf, bins, means = gen_pdf_from_dice(os.path.join(datadir, f"{dice}.npz"))
###Output
_____no_output_____
###Markdown
Alternatively, load the pdf, bins, and means (if they have already been generated)
###Code
pdf = pd.read_pickle(os.path.join(datadir, f"{dice}_pdfs.gz"))
bins = pd.read_pickle(os.path.join(datadir, "bins.gz"))
means = pd.read_pickle(os.path.join(datadir, f"{dice}_src_pv_means.gz"))
###Output
_____no_output_____
###Markdown
If you have all the dice, you can concatenate them into one large dataframe
###Code
dices = ["dice_0001","dice_0002","dice_0003","dice_0004","dice_0005"]
pdf = pd.concat([pd.read_pickle(os.path.join(datadir, f"{dice}_pdfs.gz")) for dice in dices], ignore_index=True)
means = pd.concat([pd.read_pickle(os.path.join(datadir, f"{dice}_src_pv_means.gz")) for dice in dices], ignore_index=True)
pdf.to_pickle(os.path.join(datadir, "dices_pdfs.gz"))
means.to_pickle(os.path.join(datadir, "dices_src_pv_means.gz"))
###Output
_____no_output_____
###Markdown
This is how to get the bin edges
###Code
cbin_edges = utilities.midpoint_to_edges(np.unique(bins.Cbins))
zbin_edges = utilities.midpoint_to_edges(np.unique(bins.Zbins))
###Output
_____no_output_____
###Markdown
Plot slices in the dices, the input space and some sample pdfs
###Code
[plot_dice_slices(os.path.join(datadir, f"{dice}.npz")) for dice in dices]
for dice in dices:
pdf = pd.read_pickle(os.path.join(datadir, f"{dice}_pdfs.gz"))
plot_input_space(pdf, fname=f"inputs_{dice}.pdf")
# Find PDFs with points closest to these:
points = pd.DataFrame({'Z':[0, 0.4, 0.6255, 0.6714,0.8, 0.9252],
'Zvar': [0, 0.0066, 0.0134, 0.0128, 0.01, 0.0043],
'C':[0, 0.0269, 0.0318, 0.0822, 0.05, 0.1209],
'Cvar':[0, 0.0006, 0.0016, 0.0034, 0.0029, 0.0046]})
idx = [closest_point(points.loc[i,:], pdf.loc[:,points.columns]).name for i in points.index]
plot_pdfs(pdf.loc[idx], means.loc[idx], bins)
# Or (fewer points)
points = pd.DataFrame({'Z':[0, 0.4, 0.6714, 0.9252],
'Zvar': [0, 0.0066, 0.0128, 0.0043],
'C':[0, 0.0269, 0.0822, 0.1209],
'Cvar':[0, 0.0006, 0.0034, 0.0046]})
idx = [closest_point(points.loc[i,:], pdf.loc[:,points.columns]).name for i in points.index]
plot_pdfs(pdf.loc[idx], means.loc[idx], bins)
###Output
_____no_output_____
###Markdown
Find distances between PDFs in different dice
###Code
distances = pdf_distances("dice_0004")
plot_pdf_distances("dice_0004")
###Output
_____no_output_____
###Markdown
Generate the training data
###Code
Xtrain, Xdev, Xtest, Ytrain, Ydev, Ytest, scaler = gen_training(pdf, dice)
###Output
_____no_output_____
###Markdown
Alternatively, load the training data (if it has already been generated)
###Code
Xtrain = pd.read_pickle(os.path.join(datadir, f"{dice}_xtrain.gz"))
Xdev = pd.read_pickle(os.path.join(datadir, f"{dice}_xdev.gz"))
Ytrain = pd.read_pickle(os.path.join(datadir, f"{dice}_ytrain.gz"))
Ydev = pd.read_pickle(os.path.join(datadir, f"{dice}_ydev.gz"))
###Output
_____no_output_____
###Markdown
Sometimes, one might need to switch scalers (e.g. you train on one dice and want to predict on another)
###Code
scaler_dice_0002 = joblib.load(os.path.join(datadir, "dice_0002_scaler.pkl"))
scaler_dice_0003 = joblib.load(os.path.join(datadir, "dice_0003_scaler.pkl"))
Xtrain = utilities.switch_scaler(Xtrain, scaler_dice_0003, scaler_dice_0002)
Xdev = utilities.switch_scaler(Xdev, scaler_dice_0003, scaler_dice_0002);
###Output
_____no_output_____
###Markdown
PDF predictions with machine learning Random Forest
###Code
mtrain_rf, mdev_rf, RF = rf_training(Xtrain, Xdev, Ytrain, Ydev)
plot_result( Ytrain, mtrain_rf, Ydev, mdev_rf, pdf.loc[Xdev.index,Xdev.columns], bins, fname = "RF.pdf")
conv_rf = convolution_means(mdev_rf, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_rf, fname = "convolution_RF.pdf")
###Output
_____no_output_____
###Markdown
Linear regression
###Code
mtrain_lr, mdev_lr, LR = lr_training(Xtrain, Xdev, Ytrain, Ydev)
###Output
_____no_output_____
###Markdown
Polynomial regression
###Code
mtrain_pr, mdev_pr, PR = pr_training(Xtrain, Xdev, Ytrain, Ydev, order=6)
###Output
_____no_output_____
###Markdown
Feed-forward Neural Network
###Code
mtrain_dnn, mdev_dnn, DNN = dnn_training(Xtrain, Xdev, Ytrain, Ydev, use_gpu=True)
###Output
_____no_output_____
###Markdown
Alternatively, load a pretrained network
###Code
device = torch.device("cpu")
dtype = torch.double
vh = VariableHandler(device=device, dtype=dtype)
batch_size = 64
input_size = Xtrain.shape[1]
layer_sizes = [256, 512, Ytrain.shape[1]]
DNN = Net(input_size, layer_sizes, vh).to(device=device, dtype=dtype)
DNN.load('DNN.pkl')
mtrain_dnn = DNN.predict(Xtrain)
mdev_dnn = DNN.predict(Xdev)
plot_result( Ytrain, mtrain_dnn, Ydev, mdev_dnn, pdf.loc[Xdev.index,Xdev.columns], bins, fname = "DNN.pdf")
conv_dnn = convolution_means(mdev_dnn, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_dnn, fname = "convolution_DNN.pdf")
###Output
_____no_output_____
###Markdown
Estimate of feature importance through the shuffled input loss
###Code
imp_dnn = shuffled_input_loss(DNN, Xdev, Ydev)
imp_dnn.div(imp_dnn.original, axis=0)
###Output
_____no_output_____
###Markdown
PDF predictions with generative models Conditional Variational Autoencoder
###Code
mtrain_cvae, mdev_cvae, cvae = cvae_training(Xtrain, Xdev, Ytrain, Ydev, use_gpu=True)
###Output
_____no_output_____
###Markdown
Alternatively, load a pre-trained model:
###Code
device = torch.device("cpu")
vh = VariableHandler(device=device, dtype=torch.double)
nlabels = Xtrain.shape[1]
input_size = Ytrain.shape[1]
batch_size = 64
encoder_layer_sizes = [input_size + nlabels, 512, 256]
latent_size = 10
decoder_layer_sizes = [256, 512, input_size]
cvae = CVAE(
encoder_layer_sizes=encoder_layer_sizes,
latent_size=latent_size,
decoder_layer_sizes=decoder_layer_sizes,
nlabels=nlabels,
vh=vh,
).to(device=device)
cvae.load("CVAE.pkl")
mtrain_cvae = cvae.predict(Xtrain)
mdev_cvae = cvae.predict(Xdev)
plot_result( Ytrain, mtrain_cvae, Ydev, mdev_cvae, pdf.loc[Xdev.index,Xdev.columns], bins, fname='CVAE.pdf')
conv_cvae = convolution_means(mdev_cvae, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_cvae, fname = "convolution_CVAE.pdf")
###Output
_____no_output_____
###Markdown
You can also use the model to predict on all the dices
###Code
scaler_dice_0002 = joblib.load(os.path.join(datadir, "dice_0002_scaler.pkl"))
predict_all_dice(cvae, scaler_dice_0002)
###Output
_____no_output_____
###Markdown
Conditional Generative Adversarial Network
###Code
mtrain_cgan, mdev_cgan, cgan = cgan_training(Xtrain, Xdev, Ytrain, Ydev, use_gpu=True)
plot_result( Ytrain, mtrain_cgan, Ydev, mdev_cgan, pdf.loc[Xdev.index,Xdev.columns], bins, fname='CGAN.pdf')
conv_cgan = convolution_means(mdev_cgan, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_cgan, fname = "convolution_CGAN.pdf")
###Output
_____no_output_____
###Markdown
PDF predictions with analytical models delta-delta model
###Code
dd = DD(zbin_edges, cbin_edges)
mtrain_dd = dd.predict(pdf.loc[Xtrain.index,['C','Z']])
mdev_dd = dd.predict(pdf.loc[Xdev.index,['C','Z']])
summarize_training(Ytrain, mtrain_dd, Ydev, mdev_dd, fname="DD.log")
plot_result( Ytrain, mtrain_dd, Ydev, mdev_dd, pdf.loc[Xdev.index,Xdev.columns], bins, fname = "DD.pdf")
conv_dd = convolution_means(mdev_dd, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_dd, fname = "convolution_DD.pdf")
###Output
_____no_output_____
###Markdown
beta-delta model
###Code
bd = BD(zbin_edges, cbin_edges)
mtrain_bd = bd.predict(pdf.loc[Xtrain.index,['C','Z','Zvar']])
mdev_bd = bd.predict(pdf.loc[Xdev.index,['C','Z','Zvar']])
summarize_training(Ytrain, mtrain_bd, Ydev, mdev_bd, fname="BD.log")
plot_result( Ytrain, mtrain_bd, Ydev, mdev_bd, pdf.loc[Xdev.index,Xdev.columns], bins, fname = "BD.pdf")
conv_bd = convolution_means(mdev_bd, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_bd, fname = "convolution_BD.pdf")
###Output
_____no_output_____
###Markdown
beta-beta model
###Code
bb = BB(zbin_edges, cbin_edges)
mtrain_bb = bb.predict(pdf.loc[Xtrain.index,['C','Cvar','Z','Zvar']])
mdev_bb = bb.predict(pdf.loc[Xdev.index,['C','Cvar','Z','Zvar']])
summarize_training(Ytrain, mtrain_bb, Ydev, mdev_bb, fname="BB.log")
plot_result( Ytrain, mtrain_bb, Ydev, mdev_bb, pdf.loc[Xdev.index,Xdev.columns], bins, fname = "BB.pdf")
conv_bb = convolution_means(mdev_bb, means.loc[Ydev.index])
plot_scatter(pdf.SRC_PV.loc[Ydev.index], conv_bb, fname = "convolution_BB.pdf")
###Output
_____no_output_____
###Markdown
Good, medium, bad beta models:
###Code
# Find index
m_bb = bb.predict(pdf.loc[:,['C','Cvar','Z','Zvar']])
jsd_bb = calculate_jsd(pdf.loc[:,Ytrain.columns], m_bb)
idx = [jsd_bb.argmin(), np.fabs(jsd_bb - np.log(2)/2).argmin(), jsd_bb.argmax()]
# Plot PDFs
for i, index in enumerate(idx):
m_bb = {'BB': bb.predict(pdf.loc[[index],['C','Cvar','Z','Zvar']])}
plot_pdfs(pdf.loc[[index]], means.loc[[index]], bins, fname=f"pdfs_{index}.pdf", models=m_bb)
###Output
_____no_output_____
###Markdown
Training and predicting on a subset of the data
###Code
idx = pdf.xc < 0
Xtrain_sub = Xtrain.loc[idx.loc[Xtrain.index]]
Xdev_sub = Xdev.loc[idx.loc[Xdev.index]]
Ytrain_sub = Ytrain.loc[idx.loc[Ytrain.index]]
Ydev_sub = Ydev.loc[idx.loc[Ydev.index]]
mtrain_dnn, mdev_dnn, DNN = dnn_training(Xtrain_sub, Xdev_sub, Ytrain_sub, Ydev_sub, use_gpu=True)
dnn_h = predict_all_dice(DNN, scaler_dice_0004, half=True)
plot_dice_predictions({'DNN':dnn_h})
###Output
_____no_output_____
###Markdown
Prediction timings
###Code
# Load all the models and then:
pt = prediction_times({'RF':RF, 'DNN':DNN, 'CVAE': cvae}, Xdev, Ydev)
pt.loc[:,['model','time','error']].to_latex()
# For the analytical models, you can do
pt = prediction_times({'BB': bb}, pdf.loc[Xdev.index,['C','Cvar','Z','Zvar']], Ydev)
###Output
_____no_output_____
###Markdown
Summary graphs JSD plots
###Code
jsd = pd.DataFrame({'RF': calculate_jsd(Ydev, mdev_rf),
'DNN': calculate_jsd(Ydev, mdev_dnn),
'CVAE': calculate_jsd(Ydev, mdev_cvae),
'BB': calculate_jsd(Ydev, mdev_bb)})
plot_jsd(jsd)
###Output
_____no_output_____
###Markdown
Convolution plots
###Code
convolutions = pd.DataFrame({'RF': convolution_means(mdev_rf, means.loc[Ydev.index]),
'DNN': convolution_means(mdev_dnn, means.loc[Ydev.index]),
'CVAE': convolution_means(mdev_cvae, means.loc[Ydev.index]),
'BB': convolution_means(mdev_bb, means.loc[Ydev.index])})
plot_convolution(pdf.loc[Ydev.index], convolutions, bins)
###Output
_____no_output_____
###Markdown
Good, bad, medium PDFs
###Code
# based on BB predictions (use with dice_0004)
jsd_bb = calculate_jsd(Ydev, mdev_bb)
idx = [jsd_bb.argmin(), np.fabs(jsd_bb - np.log(2)/2).argmin(), jsd_bb.argmax()]
for i, index in zip(idx, Ydev.index[idx]):
model_pdfs = {'RF': mdev_rf[np.newaxis, i,:],
'DNN': mdev_dnn[np.newaxis, i,:],
'CVAE': mdev_cvae[np.newaxis, i,:],
'BB': mdev_bb[np.newaxis, i,:]}
plot_pdfs(pdf.loc[[index]], means.loc[[index]], bins, fname=f"pdfs_{index}.pdf", models=model_pdfs)
# based on PDF of DNN predictions and higher filtered reaction rates (use with dices_skip)
omega_lim = 15
jsd_dnn = calculate_jsd(Ydev, mdev_dnn)
points = [jsd_dnn[pdf.SRC_PV.loc[Ydev.index].values > omega_lim].min(),
np.median(jsd_dnn[pdf.SRC_PV.loc[Ydev.index].values > omega_lim]),
jsd_dnn[pdf.SRC_PV.loc[Ydev.index].values > omega_lim][np.fabs(jsd_dnn[pdf.SRC_PV.loc[Ydev.index].values > omega_lim] - 0.1).argmin()],
jsd_dnn[pdf.SRC_PV.loc[Ydev.index].values > omega_lim].max()]
idx = [np.fabs(jsd_dnn - point).argmin() for point in points]
src_pv_err_dnn = np.fabs(pdf.SRC_PV.loc[Ydev.index] - convolution_means(mdev_dnn, means.loc[Ydev.index])).values
for i, index in zip(idx, Ydev.index[idx]):
model_pdfs = {'RF': mdev_rf[np.newaxis, i,:],
'DNN': mdev_dnn[np.newaxis, i,:],
'CVAE': mdev_cvae[np.newaxis, i,:],
'BB': mdev_bb[np.newaxis, i,:]}
plot_pdfs(pdf.loc[[index]], means.loc[[index]], bins, fname=f"pdfs_{index}.pdf", models=model_pdfs)
###Output
_____no_output_____
###Markdown
Predictions across dices (load models first)
###Code
bbp = predict_all_dice(bb, None)
rf_4 = predict_all_dice(RF, scaler_dice_0004)
dnn_4 = predict_all_dice(DNN, scaler_dice_0004)
cvae_4 = predict_all_dice(cvae, scaler_dice_0004)
predictions_4 = {'RF': rf_4, 'DNN':dnn_4, 'CVAE': cvae_4, 'BB': bbp}
with open(os.path.join(datadir, 'predictions_4.pkl'), 'wb') as f:
pickle.dump(predictions_4, f, pickle.HIGHEST_PROTOCOL)
# or load
with open(os.path.join(datadir, 'predictions_4.pkl'), 'rb') as f:
predictions_4 = pickle.load(f)
# plot
plot_dice_predictions(predictions_4)
###Output
_____no_output_____
###Markdown
Layerwise relevance propagation (LRP)
###Code
scaler_dices_skip = joblib.load(os.path.join(datadir, "dices_skip_scaler.pkl"))
lrps = lrp_all_dice(DNN, scaler_dices_skip)
###Output
_____no_output_____
###Markdown
DrugEx APIAn example DrugEx workflow showcasing some basic DrugEx API features. The API provides interface definitions to handle data operations and training of models needed for obtaining a molecule designer. The interface should ensure that the current code base is extensible and loosely coupled to make interoperability with different data sources seamless and to also aid in monitoring of the training processes involved.Let's import and explain some of the important API features:
###Code
# main package
import drugex
# important classes for data access
from drugex.api.environ.data import ChEMBLCSV
from drugex.api.corpus import CorpusCSV, BasicCorpus, CorpusChEMBL
# important classes for QSAR modelling
# and (de)serialization of QSAR models
from drugex.api.environ.models import RF
from drugex.api.environ.serialization import FileEnvSerializer, FileEnvDeserializer
# classes that handle training of the exploration
# and exploitation networks and also handle monitoring
# of the process
from drugex.api.model.callbacks import BasicMonitor
from drugex.api.pretrain.generators import BasicGenerator
# ingredients needed for DrugEx agent training
from drugex.api.agent.agents import DrugExAgent
from drugex.api.agent.callbacks import BasicAgentMonitor
from drugex.api.agent.policy import PG
# designer API (wraps the agent after it was trained)
from drugex.api.designer.designers import BasicDesigner, CSVConsumer
###Output
_____no_output_____
###Markdown
Next let's define some global settings:
###Code
import torch
for device in range(torch.cuda.device_count()):
print(device, torch.cuda.get_device_capability(device))
import os
if torch.cuda.is_available():
# choose a GPU device based on the info above
# (the higher the capability, the better)
torch.cuda.set_device(2)
DATA_DIR="data" # folder with input data files
OUT_DIR="output/workflow" # folder to store the output of this workflow
os.makedirs(OUT_DIR, exist_ok=True) # create the output folder
# define a set of gene IDs that are interesting for
# the target that we want to design molecules for
GENE_IDS = ["ADORA2A"]
###Output
_____no_output_____
###Markdown
Data AcquisitionIt's time to acquire the data we will need for training of our models. There are three models that we need to build, so we need three separate data sets: 1. Data for the exploitation model based on a random sample of 1 million molecules from the ZINC set. 2. Data for the exploration model based on ChEMBL data we downloaded for the desired target. 3. Data for the QSAR modelling of the environment model -> this model will bias the final generator towards more active molecules through a policy gradient. Exploitation NetworkThe exploitation network will be based on a large data set of known chemical structures. The ZINC database is a great source of data for the network:
###Code
# Randomly selected sample of 1 million molecules
# from the ZINC database.
# We only use this file for illustration purposes.
# In practice, the pretrained exploitation network should
# be provided so there will be no need for this data,
# but we are starting from square one here.
ZINC_CSV=os.path.join(DATA_DIR, "ZINC.txt")
# Load SMILES data into a corpus from a CSV file (we assume
# that we have the structures saved in a csv file in DATA_DIR).
# Corpus is a class which provides both the vocabulary and
# training data for a generator.
# This corpus will be used to train the exploitation network later.
corpus_pre = CorpusCSV(
update_file=ZINC_CSV
# The input CSV file with chemical structures as SMILES.
# This is the only required parameter of this class.
, vocabulary=drugex.VOC_DEFAULT
# A vocabulary object that defines the tokens
# and other options used to construct and parse SMILES.
# VOC_DEFAULT is a reasonable "catch all" default.
, smiles_column="CANONICAL_SMILES"
# Instructs the corpus object what column to look for when
# extracting SMILES to update the data.
, sep='\t'
# The column separator used in the CSV file
)
# Next we update the corpus (if we did not do it already).
# The updateData() method loads and tokenizes the SMILES it finds in the CSV.
# The tokenized data and updated vocabulary are returned to us.
corpus_out_zinc = os.path.join(OUT_DIR, "zinc_corpus.txt")
vocab_out_zinc = os.path.join(OUT_DIR, "zinc_voc.txt")
if not os.path.exists(corpus_out_zinc):
df, voc = corpus_pre.updateData(update_voc=True)
# We don't really use the return values here, but they are
# still there if we need them for logging purposes or
# something else. The update_voc flag tells the
# update method to also update the vocabulary
# based on the tokens found in the SMILES strings.
# We can save our corpus data if we want to reuse it later.
    # The CorpusCSV class has methods
# that we can use to save the vocabulary and tokenized data set.
corpus_pre.saveCorpus(corpus_out_zinc)
corpus_pre.saveVoc(vocab_out_zinc)
else:
# If we initialized and saved
# the corpus before, we just overwrite the
# current one with the saved one
corpus_pre = CorpusCSV.fromFiles(corpus_out_zinc, vocab_out_zinc)
###Output
Reading SMILES: 100%|██████████| 1018452/1018452 [00:02<00:00, 395864.75it/s]
Collecting tokens: 100%|██████████| 1018451/1018451 [20:45<00:00, 817.97it/s]
###Markdown
Exploration NetworkWe will also need a corpus for the exploration network. We will load it from ChEMBL using a different implementation of the Corpus interface than we saw above. When we update a CorpusChEMBL instance, it downloads the data for us automatically:
###Code
# CorpusChEMBL uses a list of gene identifiers
# and downloads activity data for all tested compounds
# related to the particular genes.
corpus_out_chembl = os.path.join(OUT_DIR, "chembl_corpus.txt")
vocab_out_chembl = os.path.join(OUT_DIR, "chembl_voc.txt")
env_data_path = os.path.join(OUT_DIR, "{0}.txt".format(GENE_IDS[0]))
if not os.path.exists(corpus_out_chembl):
corpus_ex = CorpusChEMBL(GENE_IDS, clean_raw=False)
# lets update this corpus and save the results
# (same procedure as above)
df, voc = corpus_ex.updateData(update_voc=True)
corpus_ex.saveCorpus(corpus_out_chembl)
corpus_ex.saveVoc(vocab_out_chembl)
# in addition we will also save the raw downloaded data
# (this is what we will also use as a basis for the environment QSAR model)
corpus_ex.raw_data.to_csv(env_data_path, sep="\t", index=False)
else:
# If we already generated the corpus file,
# we can load it using the CorpusCSV class
corpus_ex = CorpusCSV.fromFiles(corpus_out_chembl, vocab_out_chembl)
###Output
Found following target chembl IDs related to ADORA2A ['CHEMBL251']
###Markdown
Since in both cases we requested to update the vocabulary according to tokens found in the underlying SMILES for both the ZINC and ChEMBL corpus, we now need to unify them. Vocabularies can be combined using the plus operator:
###Code
voc_all = corpus_pre.voc + corpus_ex.voc
corpus_pre.voc = voc_all
corpus_ex.voc = voc_all
corpus_pre.saveVoc(os.path.join(OUT_DIR, "voc.txt"))
###Output
_____no_output_____
###Markdown
If we did not do this, the exploitation and exploration networks might not be compatible and we would run into issues during modelling. Environment QSAR modelWe also need activity data to train the environment QSAR model which will provide the activity values for policy gradient. Luckily, we already have the file to do this:
###Code
environ_data = ChEMBLCSV(
env_data_path # we got this file from ChEMBL thanks to CorpusChEMBL
, 6.5 # this is the activity threshold for the pChEMBL value
, id_col='MOLECULE_CHEMBL_ID' # column by which we group multiple results per molecule
)
###Output
_____no_output_____
###Markdown
The ChEMBLCSV class not only loads the activity data, but also provides access to it for the QSAR learning algorithms (see below). Model Training Exploitation NetworkTraining the exploitation generator takes a long time (we have over a million molecules in our ZINC sample) so we would like to monitor this process. We can use the Monitor interface for that. The "BasicMonitor" just saves log files and model checkpoints in the given directory:
###Code
pr_monitor = BasicMonitor(
out_dir=OUT_DIR
, identifier="pr"
)
###Output
_____no_output_____
###Markdown
TODO: it would be nice to also have a method in the monitor that would stop the training process. However, we could easily implement our own monitor that could do a bit more than just what the basic monitor does. Here is an example:
###Code
from matplotlib import pyplot as plt
%matplotlib inline
class MyMonitor(BasicMonitor):
"""
This monitor adds some functionality on top of the basic monitor.
"""
def close(self):
"""
This method is called after training has completed.
"""
super().close()
# We just get the performance figure.
return self.getPerfFigure()
pr_monitor = MyMonitor(
out_dir=OUT_DIR
, identifier="pr"
)
###Output
_____no_output_____
###Markdown
The monitor actually does more than just monitoring of the process. It also keeps track of the best model built yet and can be used to initialize a generator based on that. We use that feature below. If there already is a network state saved somewhere in our output directory, we do not do any training and just load the model from disk:
###Code
if not pr_monitor.getState(): # this will be False if the monitor cannot find an existing state
print("Pretraining exploitation network...")
pretrained = BasicGenerator(
monitor=pr_monitor
, corpus=corpus_pre
, train_params={
# these parameters are fed directly to the
# fit method of the underlying pytorch model
"epochs" : 30 # lets just make this one quick
, "monitor_freq" : 10
}
)
pretrained.pretrain()
# This method also has parameters
# regarding partioning of the training data.
# We just use the defaults in this case.
else:
pretrained = BasicGenerator(
monitor=pr_monitor
, initial_state=pr_monitor # the monitor provides initial state
, corpus=corpus_pre
)
# we will not do any training this time,
# but we could just continue by
# specifying the training parameters and
# calling pretrain again
# TODO: maybe it would be nice if the monitor
# keeps track of the settings as well
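# (Added sketch) BasicMonitor also collects the training statistics in a CSV file
# (net_pr.csv in OUT_DIR); assuming it is comma-separated, it can be inspected with
# pandas -- the exact columns depend on what the monitor records:
import pandas as pd
training_log = pd.read_csv(os.path.join(OUT_DIR, "net_pr.csv"))
print(training_log.head())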
###Output
Pretraining exploitation network...
###Markdown
See the figure above? That is from our customized pretrainer monitor. There will also be a CSV file (`net_pr.csv`) in the output folder with the collected training data. So we could configure the monitor to do much more (there are more methods besides `close` in the basic monitor that we can override). We could also implement our own monitor entirely by implementing all the methods in the `PretrainingMonitor` abstract class (also defined in the same module as the `BasicMonitor`). Exploration NetworkNext comes the exploration network. The approach is the same, but we use the previously trained network as the initial state. First, we define the monitor, though. We will use the one we defined above, but give it a different identifier:
###Code
ex_monitor = MyMonitor(
out_dir=OUT_DIR
, identifier="ex"
)
###Output
_____no_output_____
###Markdown
The exploration network fine-tunes the pretrained one so we have to use the pr_monitor to initialize the initial state of the exploration network:
###Code
if not ex_monitor.getState():
print("Pretraining exploration network...")
exploration = BasicGenerator(
monitor=ex_monitor
, initial_state=pr_monitor # initialize from the states of the best pretrained network
, corpus=corpus_ex # use target-specific corpus for exploration
, train_params={
"epochs" : 60 # We have less data so we might need to do more epochs.
}
)
exploration.pretrain(validation_size=512)
# In this case we want to use a validation set.
# This set will be used to estimate the
# loss instead of the training set.
else:
exploration = BasicGenerator(
monitor=ex_monitor
, initial_state=ex_monitor
, corpus=corpus_ex
)
###Output
Epoch: 0%| | 0/60 [00:00<?, ?it/s]
###Markdown
Environment ModelThis model will provide the environment for the policy gradient. We already got the data to train this model and saved it to the `environ_data`. This is a data provider for the QSAR model and can be used with other algorithms implemented in the library. However, we will just limit ourselves to random forest in this case:
###Code
# let's see if we can load the model already from disk
# using the standard deserializer...
identifier = 'environ_rf'
des = FileEnvDeserializer(OUT_DIR, identifier)
try:
# The deserializer automatically looks for
# a model in the given directory with the given identifier
environ_model = des.getModel()
print("Model found at:", des.path)
except FileNotFoundError:
# if the model is nowhere to be found, we train and save it
print("Training environment model...")
environ_model = RF(train_provider=environ_data)
environ_model.fit()
# we save the model so that we don't have to train again next time
# we also choose to save the performance data (this will
# also save a ROC curve figure in our output directory
# to check performance)
ser = FileEnvSerializer(OUT_DIR, identifier, include_perf=True)
ser.saveModel(environ_model)
###Output
Training environment model...
###Markdown
DrugEx AgentWe now have all ingredients to trainthe DrugEx agent. First, weneed to define the policy gradientstrategy:
###Code
policy = PG( # So far this is the only policy there is in the API
batch_size=512
, mc=10 # number of repeated samples
, epsilon=0.01
, beta=0.1
)
###Output
_____no_output_____
###Markdown
DrugEx agents have their own monitors. The basic one saves monitoring results to files as well and generally uses the same pattern as we have seen with generators to keep up to date with the best state of the model and so on:
###Code
identifier = 'e_%.2f_%.1f_%dx%d' % (policy.epsilon, policy.beta, policy.batch_size, policy.mc)
agent_monitor = BasicAgentMonitor(OUT_DIR, identifier)
###Output
_____no_output_____
###Markdown
Finally, the DrugEx agent itself:
###Code
if not agent_monitor.getState():
print("Training DrugEx agent...")
agent = DrugExAgent(
agent_monitor # our monitor
, environ_model # environment for the policy gradient
, pretrained # the pretrained model
        , policy # our policy gradient implementation
, exploration # the fine-tuned model
, {
"n_epochs" : 30 # quick again
}
)
agent.train()
else:
# The DrugEx agent monitor also provides
# a generator state -> it is the
# best model from training. We can
# therefore create a generator
# based on this initial state just like we did before:
agent = BasicGenerator(
initial_state=agent_monitor
, corpus=BasicCorpus(
# If we are not training the generator,
# we can just provide a basic corpus
# that only provides vocabulary
# and no corpus data -> we
# only have to specify the right
# vocabulary, which is the one of
# the exploration or exploitation network.
# We choose the exploration network here:
vocabulary=corpus_pre.voc
)
)
###Output
Epoch: 0%| | 0/30 [00:00<?, ?it/s]
###Markdown
We can now analyze the `net_e_0.01_0.1_512x10.log` file in the output directory for an overview of the agent training process. TODO: rewrite the agent monitor so that the results can be visualized and saved in a CSV file. Initializing DrugEx DesignerFrom a fully trained DrugEx agent generator, we can create a designer class which will handle sampling of SMILES:
###Code
consumer = CSVConsumer(
# a CSV file containing not just SMILES,
# but also scores as determined by the environment model.
os.path.join(OUT_DIR, 'designer_mols.csv')
)
designer = BasicDesigner(
agent=agent # our agent
, consumer=consumer # use this consumer to return results
, n_samples=1000 # number of SMILES to sample in total
, batch_size=512 # number of SMILES to sample in one batch
)
designer() # design the molecules
consumer.save() # save them
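# (Added sketch) Inspect the sampled molecules afterwards; this assumes the consumer
# wrote a comma-separated file and makes no assumption about its column names.
import pandas as pd
designed = pd.read_csv(os.path.join(OUT_DIR, 'designer_mols.csv'))
print(designed.head())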
###Output
_____no_output_____
###Markdown
Install external libraries
###Code
!pip install requests # library for making HTTP requests
!pip install lxml # library for working with XML
!pip install bs4 # BeautifulSoup, for parsing HTML/XML
###Output
_____no_output_____
###Markdown
Clone the git repository with tools (to follow the adopted contributing protocol, it may be useful to fork this repository on GitHub first)
###Code
!git clone https://github.com/galaxyproject/tools-iuc
###Output
_____no_output_____
###Markdown
Import classes and functions from installed libraries
###Code
import requests
import json
from lxml import etree
from os import walk
import os
import glob
import re
from bs4 import BeautifulSoup
import csv
from urllib.request import urlopen
###Output
_____no_output_____
###Markdown
Create utility functions Function to download bio.tools data
###Code
def fetch(p="", c=None):
    # Recursively walk the paginated bio.tools API and collect all tool entries.
    c = [] if c is None else c
    try:
        url = "https://bio.tools/api/t" + p + "&format=json"
        payload = requests.get(url).json()
        print("Page: {}".format(p))
        return fetch(payload['next'], c + payload['list'])
    except Exception:
        # recursion ends here once there is no next page (or a request fails)
        return c
data = fetch(p="?page=1")
###Output
_____no_output_____
###Markdown
Save data to a file (to reuse it in later runs, but be careful: Google Colab provides no guarantees on data persistence)
###Code
with open('data.json', 'w') as outfile:
json.dump(data, outfile)
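# In a later run the saved file can simply be loaded back, e.g.:
with open('data.json') as infile:
    data = json.load(infile)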
###Output
_____no_output_____
###Markdown
Function that enriches data with doi lists
###Code
def enrich_publication_data(biotool_description):
biotool_description['dois'] = []
for publication in biotool_description['publication']:
if publication['doi']:
biotool_description['dois'].append({
'doi': publication['doi'],
'type': publication['type'],
'source': 'doi'
})
else:
if publication['pmid']:
doi = get_doi(publication['pmid'])
if doi:
biotool_description['dois'].append({
'doi': doi,
'type': publication['type'],
'source': 'pmid'
})
elif publication['pmcid']:
doi = get_doi(publication['pmcid'])
if doi:
biotool_description['dois'].append({
'doi': doi,
'type': publication['type'],
'source': 'pmid'
})
###Output
_____no_output_____
###Markdown
Function to convert PMID and PMCID to DOI
###Code
def get_doi(pid):
# Based on implementation of DOI fetcher by Kenzo-Hugo Hillion
url = "https://www.ncbi.nlm.nih.gov/pmc/utils/idconv/v1.0/?ids=" + pid
xml = etree.fromstring(requests.get(url).text)
if xml.find('record') is not None:
try:
doi = xml.find('record').attrib['doi']
print("DOI was found for {}".format(pid))
return doi
except:
print("DOI was not found for {}".format(pid))
return None
###Output
_____no_output_____
###Markdown
Enrich tools description with DOIs
###Code
i = 0
for tool in data:
print("Tool #{}".format(i))
enrich_publication_data(tool)
i += 1
###Output
_____no_output_____
###Markdown
Save results to file
###Code
with open('data_enriched.json', 'w') as outfile:
json.dump(data, outfile)
###Output
_____no_output_____
###Markdown
Get the list of XML files
###Code
path ="{}/tools-iuc/tools/".format(os.getcwd())
filepathes = []
for (dirpath, dirnames, filenames) in walk(path):
for d in dirnames:
p = dirpath + d
        filelist = glob.glob(p + "/*.xml")
filepathes += filelist
###Output
_____no_output_____
###Markdown
Extract DOIs from the Galaxy tool descriptions
###Code
tools_dois = {}
for filepath in filepathes:
#print("{}: Tool #{} parsed".format(filepath, i))
with open(filepath) as f:
xml = BeautifulSoup(f, 'xml')
dois = xml.find_all('citation', {"type" : "doi"})
if len(dois) > 0:
tools_dois[filepath] = list(map(lambda x: x.get_text(), dois))
###Output
_____no_output_____
###Markdown
Function to extract EDAM topics' and operations' IDs from bio.tools description
###Code
def enrich_from_biotools(biotool, galaxy_tool_path, results):
# extract edam topic and edam operation
topics = biotool.get('topic', [])
if len(topics) > 0:
results['biotools_topics'] += list(map(lambda x: x['uri'].split('/')[-1], topics))
results['biotools_topics'] = list(set(results['biotools_topics']))
functions = biotool.get('function', [])
if "biotools_operations" in results and results['biotools_operations'] != None:
results['biotools_operations'] = []
if len(functions) > 0:
for function in functions:
operations = function.get('operation', [])
if len(operations) > 0:
results['biotools_operations'].append(list(set(list(map(lambda x: x['uri'].split('/')[-1], operations)))))
results['biotools_id'] = biotool.get('biotoolsID', None)
return results
###Output
_____no_output_____
###Markdown
Function to extract EDAM topics' and operations' IDs from Debian Med repositories
###Code
def enrich_from_debmed(debtool, galaxy_tool_path, results):
topics = debtool.get('topics', [])
if topics and len(topics) > 0:
for topic in topics:
t = edam_data.get(topic, None)
results['deb_topics'].append({
'url': t,
'value': topic
})
functions = debtool.get('edam_scopes', [])
if functions and len(functions) > 0:
for function in functions:
operations = function.get('function', [])
if isinstance(operations, str):
op = edam_data.get(operations, None)
results['deb_operations'].append([{
'url': op,
'value': function
}])
else:
if len(operations) > 0:
ops = []
for operation in operations:
op = edam_data.get(operation, None)
ops.append({
'url': op,
'value': operation
})
if len(ops) > 0:
results['deb_operations'].append(ops)
results['deb_biotools_id'] = debtool.get('bio.tools', None)
return results
# The script `edam.sh` is written by Andreas Tille (https://github.com/tillea)
# and copied from https://github.com/bio-tools/biotoolsConnect
# It generates a file `edam.json`
!bash edam.sh -j
###Output
_____no_output_____
###Markdown
Load the JSON output of `edam.sh`
###Code
with open('edam.json') as json_file:
debian_data = json.load(json_file)
###Output
_____no_output_____
###Markdown
Download EDAM
###Code
version = '1.21'
url = 'http://edamontology.org/EDAM_{}.tsv'.format(version)
file = urlopen(url)
with open('edam.tsv','wb') as output:
output.write(file.read())
with open('edam.tsv','r') as tsv:
tsv = csv.reader(tsv, delimiter='\t')
edam_data = {}
for row in tsv:
edam_data[row[0]] = {
'label': row[1],
'synonyms': row[2].split('|'),
'definition': row[54],
'comments': row[3].split('|'),
}
edam_data['_version'] = version
###Output
_____no_output_____
###Markdown
Create tools annotations (match Galaxy tool's DOI against bio.tools' DOI and Debian Med tools' DOI to get topics and operations)
###Code
i = 0
j = 0
tool_annotations = {}
for path, galaxy_dois in tools_dois.items():
tool_annotations[path] = []
for galaxy_doi in galaxy_dois:
for biotool in data:
for biotool_doi in biotool['dois']:
if galaxy_doi == biotool_doi['doi']:
i += 1
tool_edam = enrich_from_biotools(biotool, path, {
'type': 'bio.tools',
'biotools_topics': [],
'biotools_operations': [],
'biotools_id': None,
'biotools_doi': biotool_doi
})
tool_annotations[path].append(tool_edam)
for deb_tool in debian_data:
if galaxy_doi == deb_tool['doi']:
j += 1
tool_edam = enrich_from_debmed(deb_tool, path, {
'type': 'debmed',
'deb_topics': [],
'deb_operations': [],
'deb_biotools_id': None,
})
tool_annotations[path].append(tool_edam)
print("Total bio.tools matches:", i)
print("Total Debian Med matches:", j)
with open('./client/src/tool_annotations.json', 'w') as outfile:
json.dump(tool_annotations, outfile)
###Output
_____no_output_____
###Markdown
RNA-Seq Workflow by @furkanmtorun [[email protected]](mailto:[email protected]) | GitHub: [@furkanmtorun](https://github.com/furkanmtorun) | [Google Scholar](https://scholar.google.com/citations?user=d5ZyOZ4AAAAJ) | [Personal Website](https://furkanmtorun.github.io/) Libraries, packages and required functions
###Code
# +--------------------------------------------------+
# Import required libraries & packages
# +--------------------------------------------------+
import pandas as pd
import glob2
import subprocess
# +--------------------------------------------------+
# Define folders and bin for tools
# +--------------------------------------------------+
fastq_folder, genome_folder, index_folder, bam_sam_folder, logs_folder, results_folder, \
FastQC_bin, STAR_bin, cufflinks_bin, bowtie_bin, TopHat_bin, R_bin = ["./files/fastq/",
"./files/genome/", "./files/index/", "./files/bam_sam/", "./files/logs/", "./files/results/", "./softs/FastQC/",
"./softs/STAR-2.7.3a/bin/Linux_x86_64/", "./softs/cufflinks-2.2.1.Linux_x86_64/",
"./softs/bowtie2-2.3.5.1-linux-x86_64/", "./softs/tophat-2.1.1.Linux_x86_64/",
"./softs/R/R-3.6.1/bin/Rscript"]
# +--------------------------------------------------+
# Define files
# +--------------------------------------------------+
fasta_files = " ".join(glob2.glob(genome_folder + "*.fa*"))
gtf_files = " ".join(glob2.glob(genome_folder + "*.gtf*"))
fastq_files = " ".join(glob2.glob(fastq_folder + "*.fastq*"))
# +--------------------------------------------------+
# The function for messages
# +--------------------------------------------------+
def msg_output(text):
    # Print a message framed by a dashed banner (banner width capped at 70 characters).
    dash = "-" * min(len(text), 70)
    msg_txt = "\n# +" + dash + "+\n> {}\n# +" + dash + "+\n"
    print(msg_txt.format(text))
# +--------------------------------------------------+
# Execute and track the shell commands
# +--------------------------------------------------+
def run_command(command):
try:
return subprocess.check_output(command, shell=True)
except (Exception, TypeError):
msg_output("! Error!: Your command was:\n\t" + command)
# +--------------------------------------------------+
# Execute and track the shell commands
# +--------------------------------------------------+
def confirmation_runCommand(command):
msg_output("Your command is:\n\t" + command)
qa = input("> Are you OK with that command? Type 'YES' or 'NO': ")
if qa.upper() == "YES":
output = run_command(command).decode("utf-8")
msg_output(output)
elif qa.upper() == "NO":
print("! You can change the command and then, re-run the cell")
else:
print("! Just type YES or NO: Please, re-run the cell")
###Output
_____no_output_____
###Markdown
Quality Control using FastQC Website: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/
###Code
fastqc_command = "{}fastqc {} -f fastq -o {}".format(FastQC_bin, fastq_files, results_folder + "QC_reports")
confirmation_runCommand(fastqc_command)
###Output
_____no_output_____
###Markdown
Adapter Trimming using cutadapt Website: https://cutadapt.readthedocs.io/en/
###Code
preprocessing_ans = input("> Are the adapter sequences of your FASTQ files trimmed? 'YES' or 'NO' : ")
if preprocessing_ans.upper() == "YES":
msg_output("! The process was terminaled because the adapter sequences already trimmed.")
elif preprocessing_ans.upper() == "NO":
adapter_seq = input("> Paste your adapter sequence: ")
number_of_threads = input("> Number Of Threads: ")
if True == number_of_threads.isdigit() == adapter_seq.isalpha():
for fastq_file in fastq_files.split(" "):
fastq_file_name = fastq_file.split("\\")[1]
cutadapt_command = "cutadapt -a {} -j {} {} -o {}trimmed_{}"\
.format(adapter_seq, number_of_threads, fastq_file, fastq_folder, fastq_file_name)
confirmation_runCommand(cutadapt_command)
else:
msg_output("! Check the number of threads or the adapter sequence you have typed!")
else:
msg_output("! Type only 'YES' or 'NO'.")
###Output
_____no_output_____
###Markdown
Curation of Genome Index using BowTie2 Website: http://bowtie-bio.sourceforge.net/bowtie2/index.shtml
###Code
bowtie_base_name = input("Type a basename for the files (e.g.: speciesName): ")
number_of_threads = input("> Number Of Threads: ")
extra_option = input("> Type your extra options: \n Check manual from http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml: \n")
if True == number_of_threads.isdigit():
bowtie_build_command = "{}bowtie2-build --threads {} {} {} {}"\
.format(bowtie_bin, number_of_threads, fasta_files, index_folder + bowtie_base_name, extra_option)
confirmation_runCommand(bowtie_build_command)
# To check your index, use following command: bowtie2-inspect -s <base_name>
else:
msg_output("! Check the number of threads you have typed!")
###Output
_____no_output_____
###Markdown
[Alternative] : Curation of Genome Index using STAR Website: https://github.com/alexdobin/STAR
###Code
run_mode = "genomeGenerate"
number_of_threads = input("> Number Of Threads: ")
overhang_number = input("> Overhang (ideally: ReadLength - 1): ")
extra_option = input("> Paste your extra options: \nUse formal manual: https://raw.githubusercontent.com/alexdobin/STAR/921a50b1b4730a2c8b6bffc03b85081e9de3f777/doc/STARmanual.pdf \nExample: --limitSjdbInsertNsj 4000 --limitGenomeGenerateRAM 269860224 --genomeSAindexNbases 12\n")
if True == number_of_threads.isdigit() == overhang_number.isdigit():
if len(glob2.glob(genome_folder+"*.fasta")) > 0:
run_command("gzip {}".format(genome_folder + "*fasta")).decode("utf-8")
indexing_command = "{}STAR --runThreadN {} --runMode {} --genomeDir {} --genomeFastaFiles {} --sjdbGTFfile {} --sjdbOverhang {} {}" \
.format(STAR_bin, number_of_threads, run_mode, index_folder, fasta_files, gtf_files, overhang_number, extra_option)
confirmation_runCommand(indexing_command)
else:
msg_output("! Check the number of threads and overhang number you have typed!")
###Output
_____no_output_____
###Markdown
Mapping/Alignment using TopHat Website: http://ccb.jhu.edu/software/tophat/index.shtml
###Code
library_type = input("> Library type 'fr-unstranded', ' fr-firststrand' or 'fr-secondstrand' : ")
number_of_threads = input("> Number Of Threads: ")
extra_option = input("> Type your extra options: \n Check manual from http://ccb.jhu.edu/software/tophat/manual.shtml#toph: \n")
msg_output("Please note that it is highly recommended that a FASTA file with the sequence(s) the genome being indexed be present \n in the same directory with the Bowtie index files and having the name <genome_index_base>.fa. \nIf not present, TopHat will automatically rebuild this FASTA file from the Bowtie index files.")
if True == number_of_threads.isdigit():
# TO-DO: Find an elegant way to handle that problem with regex!
reading_before_names = []
# To take only file names of FASTQ files
for fastq_file in fastq_files.split(" "):
reading_before_names.append(fastq_file.split("_")[0].split("\\")[1])
for read_file in list(set(reading_before_names)):
read_files_together = ",".join(glob2.glob(fastq_folder + read_file + "*"))
tophat_command = "{}tophat2 -p {} -o {} --library-type {} -G {} {} {} {}"\
.format(TopHat_bin, number_of_threads, bam_sam_folder + read_file, library_type,
                    gtf_files, index_folder + bowtie_base_name, read_files_together, extra_option)
confirmation_runCommand(tophat_command)
else:
msg_output("! Check the number of threads you have typed!")
###Output
_____no_output_____
###Markdown
[Alternative]: Mapping/Alignment using STAR Website: https://github.com/alexdobin/STAR
###Code
number_of_threads = input("> Number Of Threads: ")
extra_option = input("> Type your extra options: \nUse formal manual: https://github.com/alexdobin/STAR\nExample: --outSAMunmapped Within --outSAMattributes Standard\n")
if True == number_of_threads.isdigit():
reading_before_names = []
# To take only file names of FASTQ files
for fastq_file in fastq_files.split(" "):
reading_before_names.append(fastq_file.split("_")[0].split("\\")[1])
for read_file in list(set(reading_before_names)):
read_files_together = ",".join(glob2.glob(fastq_folder + read_file + "*"))
star_command = "{}STAR --runThreadN {} --genomeDir {} --readFilesIn {} --outFileNamePrefix {} --readFilesCommand zcat --outSAMtype BAM SortedByCoordinate {}" \
.format(STAR_bin, number_of_threads, index_folder, read_files_together, bam_sam_folder + read_file, extra_option)
confirmation_runCommand(star_command)
else:
msg_output("! Check the number of threads you have typed!")
###Output
_____no_output_____
###Markdown
Index BAM files (.BAI) using samtools Website: http://www.htslib.org/
###Code
bam_files = glob2.glob(bam_sam_folder + "*.bam")
bai_files = [bam_file + ".bai" for bam_file in bam_files]
for i in range(len(bam_files)):
bai_command = "samtools index {} {}".format(bam_files[i], bai_files[i])
confirmation_runCommand(bai_command)
msg_output("! Your .BAI and .BAM files are stored in the 'bam_sam' folder.\n You can visualize them using IGV.")
###Output
_____no_output_____
###Markdown
Counting reads using HTSeq Website: https://htseq.readthedocs.io/en/latest/count.html
###Code
mode = input("> Choose a mode from 'union', 'intersection-strict' or 'intersection-nonempty' :")
stranded = input("> Data is stranded? 'yes', 'reverse' or 'no' : ")
order = input("> How the input data has been sorted? 'name' or 'pos' : ")
id_attribute = input("> Choose an id attribute? e.g: 'gene_id'")
feature_type = input("> Feature type (3rd column in GFF file) to be used? e.g: 'exon' : ")
extra_option = input("> Paste your extra options: e.g: --additional-attr=gene_name : ")
for bam_file in bam_files:
output_fn = results_folder + "counts/" + bam_file.split("\\")[-1] + "_HTSeq.txt"
htseq_command = "htseq-count -f bam -m {} -s {} -r {} -i {} -t {} {} {} {} > {}"\
.format(mode, stranded, order, id_attribute, feature_type, extra_option, bam_file, gtf_files, output_fn)
confirmation_runCommand(htseq_command)
###Output
_____no_output_____
###Markdown
[Alternative]: Counting reads using featureCounts Website: http://subread.sourceforge.net/
###Code
number_of_threads = input("> Number Of Threads: ")
stranded = input("> Data is stranded? 'yes', 'reverse' or 'no' : ")
id_attribute = input("> Choose an id attribute? e.g: 'gene_id'")
feature_type = input("> Feature type (3rd column in GFF file) to be used? e.g: 'exon' : ")
extra_option = input("> Type your extra options: (Check manual from http://bioinf.wehi.edu.au/featureCounts/)\n ")
# featureCount strand information
strand_conversion = {"no" : 0, "yes" : 1, "reverse" : 2}
try:
    stranded = strand_conversion[stranded.lower()]
except Exception:
msg_output("! Data is stranded? Select one of those: 'yes', 'reverse' or 'no' .")
if True == number_of_threads.isdigit():
for bam_file in bam_files:
output_fn = results_folder + "counts/" + bam_file.split("\\")[-1] + "_featureCounts.txt"
featureCounts_command = "featureCounts -T {} -t {} -g {} -s {} {} -a {} -o {} {}"\
.format(number_of_threads, feature_type, id_attribute, stranded,
extra_option, gtf_files, output_fn, bam_file)
confirmation_runCommand(featureCounts_command)
else:
msg_output("! Check the number of threads you have typed!")
###Output
_____no_output_____
###Markdown
Creating meta data file for the files
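For reference, the expected `meta_data.csv` is a small comma-separated table with an `id,condition` header and one row per count file; the sample names below are only hypothetical placeholders:

```
id,condition
sample1_sorted,Control
sample2_sorted,XYZ_gene_mutant
```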
###Code
ms_presence = input("Do you have a meta data file with comma-separated values? 'YES' or 'NO' : ")
counts_files = [temp_file.split("\\")[1].split("_")[0] for temp_file in glob2.glob(results_folder + "counts/*.txt")]
print(counts_files)
if ms_presence.upper() == "YES":
msg_output("! Your meta data file must be in the 'results' folder as 'meta_data.csv'\n\twith a full of comma seperated values including 'id,condition' header.")
elif ms_presence.upper() == "NO":
meta_data_table = {"id" : "condition"}
for count_file in counts_files:
condition = input("> What is the condition for {} such as 'Control' or 'XYZ_gene_mutant' ?".format(count_file))
meta_data_table[count_file] = condition
with open(results_folder + "meta_data.csv", "w") as meta_data_file:
for meta_data_key in meta_data_table:
meta_data_file.write(meta_data_key + "," + meta_data_table[meta_data_key] + "\n")
msg_output("! Your meta data file has been created in the 'results' folder.")
else:
msg_output("! Please, type either 'YES' or 'NO'.")
###Output
_____no_output_____
###Markdown
Differential Expression Analysis using DESeq2 Website: https://bioconductor.org/packages/DESeq2
###Code
# Merge all count files in a single count file
count_files = glob2.glob(results_folder + "/counts/" + "*.txt")
count_files_dfs = [pd.read_csv(count_file, index_col=0, sep="\t") for count_file in count_files]
merged_count_files = count_files_dfs[0].join(count_files_dfs[1:])
merged_count_files.to_csv(results_folder + "/counts/" + "merged_Counts.txt", sep="\t")
#The R script containing DESeq2 library namely DESeq2.R is used as following:
# DESeq2.R count_data.csv meta_data.csv control_label sample_label
label_qa = input("> Did you prepare your meta data file on your own or using previous cell 'CELL' or 'OWN' ?")
if label_qa.upper() == "CELL":
labels = " ".join(list(meta_data_table.values())[1:])
elif label_qa.upper() == "OWN":
labels = input("> Just type your labels which are identical to ones in meta_data file in 'condition' column. \nBring a single space between two terms.\n Example: 'Control XYZ_gene_mutant'")
if not labels:
msg_output("Please type your design two labels with a single space such as 'Control XYZ_gene_mutant' . Re-run the cell!")
else:
msg_output("Just type either 'CELL' or 'OWN' ! ")
count_file = results_folder + "/counts/" + "merged_Counts.txt"
meta_data_file = results_folder + "meta_data.csv"
R_command = "{} DESeq2.R {} {} {}".format(R_bin, count_file, meta_data_file, labels)
confirmation_runCommand(R_command)
###Output
_____no_output_____ |
courses/coursera/deeplearning_ai/01_nn_and_dl_week_02/python-numpy vectors.ipynb | ###Markdown
Tip: Avoid data structures whose shape is (5,) or (n,) (rank 1 arrays) -> use explicit column vectors or row vectors instead
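For contrast, a minimal sketch of the rank 1 pitfall the tip warns about (transposing does nothing and `np.dot` collapses to a scalar):

```python
import numpy as np

b = np.random.rand(5)        # rank 1 array: shape (5,)
print(b.shape)               # (5,)
print(b.T.shape)             # (5,) -- transposing a rank 1 array has no effect
print(np.dot(b, b.T))        # a scalar, not the (5, 5) outer product
b = b.reshape(5, 1)          # make the shape explicit to get a proper column vector
assert b.shape == (5, 1)
```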
###Code
a = np.random.rand(5, 1)
print(a) # column vector
print(a.shape)
print(a.T) # row vector
print(np.dot(a, a.T))
###Output
[[ 7.64012894e-05 2.59284184e-03 5.86300553e-03 7.39467054e-04
3.29404467e-04]
[ 2.59284184e-03 8.79936564e-02 1.98973684e-01 2.50954026e-02
1.11790480e-02]
[ 5.86300553e-03 1.98973684e-01 4.49924787e-01 5.67464170e-02
2.52783720e-02]
[ 7.39467054e-04 2.50954026e-02 5.67464170e-02 7.15709811e-03
3.18821518e-03]
[ 3.29404467e-04 1.11790480e-02 2.52783720e-02 3.18821518e-03
1.42022868e-03]]
###Markdown
Tips: Use assert to check the data structures
###Code
assert(a.shape == (5, 1))
###Output
_____no_output_____ |
docs/source/_build/html/examples/Benchmark.ipynb | ###Markdown
Benchmark=========----------------------We assessed the performance of the two main functions of stmetrics: `get_metrics` and `sits2metrics`. For that, we used an Intel Core i7-8700 CPU @ 3.2 GHz and 16 GB of RAM. With this test, we wanted to assess how the package performs when computing the available metrics under different scenarios.We compared the time and memory performance of those functions using different approaches. For the `get_metrics` function, we assessed the performance using random time series, created with NumPy, with different lengths. For the `sits2metrics` function, we used images with different dimensions in columns and rows, maintaining the same time series length. Install stmetrics-----------------------pip install git+https://github.com/andersonreisoares/stmetrics.git@spatial --upgrade `get_metrics`--------------------To evaluate the performance of the `get_metrics` function, we implemented a simple test using a random time series built with the `NumPy` package, using the following code.
###Code
import time
from stmetrics import metrics
import numpy
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The `get_metrics` function was designed to compute the metrics of a single time series. The stmetrics package is currently composed of 4 modules:* Metrics - With functions to compute all the available metrics* Basics - With the implementation of the basic metrics* Polar - With the implementation of the polar metrics proposed by Körting (2013).* Fractal - With the implementation of fractal metrics that are currently under assessment.Along with the metrics, the `get_metrics` function also returns the polar plot of the input time series.
###Code
metrics.get_metrics(numpy.random.rand(1,20)[0], show = True)
tempos = []
for i in range(5,1000):
start = time.time()
metrics.get_metrics(numpy.random.rand(1,i)[0])
end = time.time()
tempos.append(end - start)
figure = plt.figure(figsize=(13,5))
plt.plot(tempos)
plt.ylabel('Time (s)')
plt.xlabel('Time Series Length')
plt.grid()
plt.show()
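# Illustrative check (not part of the original benchmark): fit a 2nd-degree polynomial
# to the measured times to quantify the roughly quadratic growth discussed below.
lengths = numpy.arange(5, 1000)
coeffs = numpy.polyfit(lengths, tempos, deg=2)
print("Fitted quadratic coefficients:", coeffs)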
###Output
_____no_output_____
###Markdown
As shown above, the `get_metrics` function presents a quadratic response with respect to the length of the time series. It is able to compute the metrics for a time series with 1,000 data points in less than **two seconds**. This behaviour is explained by some polar metrics that require more computational time, for example the `symmetry_ts` function. For the following versions, we will try to improve the performance of the package.

`sits2metrics`
--------------

To evaluate the `sits2metrics` function we used a sample image with the following dimensions: 249x394 pixels and 12 dates. With this test, we aim to assess how the size of the image impacts the total time to compute the metrics. This function uses the multiprocessing library to speed up the process. According to the previous test, a time series with 12 dates, as in our sample, requires 0.015 s to compute the metrics for one pixel; therefore, using a single core, this should require 1,318 s or approximately 21 minutes. With the parallel implementation, according to our tests, the package performs the same task in 6 minutes.
###Code
import rasterio
sits = rasterio.open('https://github.com/tkorting/remote-sensing-images/blob/master/evi_corte.tif?raw=true').read()
tempos_sits = []
dim = []
for i in range(10,210,10):
dim.append(str(i)+'x'+str(i))
start = time.time()
metrics.sits2metrics(sits[:,:i,:i])
end = time.time()
tempos_sits.append(end - start)
fig = plt.figure(figsize=(15,5))
plt.bar(dim, tempos_sits)
plt.ylabel('Time (s)')
plt.xlabel('SITS dimensions (HxW)')
plt.xticks(rotation=45)
plt.grid()
plt.show()
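# Rough per-pixel cost estimate from the largest tested subset (an illustrative sketch:
# the last entry of tempos_sits corresponds to the 200x200 crop from the loop above).
per_pixel = tempos_sits[-1] / (200 * 200)
print("Approximate time per pixel: {:.4f} s".format(per_pixel))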
###Output
_____no_output_____ |
linked_lists/linked_list/linked_list_challenge.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a linked list with insert, append, find, delete, length, and print methods* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Do we keep track of the tail or just the head? * Just the head* Can we insert None values? * No Test Cases Insert to Front* Insert a None* Insert in an empty list* Insert in a list with one element or more elements Append* Append a None* Append in an empty list* Insert in a list with one element or more elements Find* Find a None* Find in an empty list* Find in a list with one element or more matching elements* Find in a list with no matches Delete* Delete a None* Delete in an empty list* Delete in a list with one element or more matching elements* Delete in a list with no matches Length* Length of zero or more elements Print* Print an empty list* Print a list with one or more elements AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/linked_list/linked_list_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Node(object):
# TODO: use dunder magic methods for iteration!
def __init__(self, data, next_node=None):
self.data = data
self.next_node = next_node
def __str__(self):
        return str(self.data)  # ensure __str__ always returns a string, even for non-string data
class LinkedList(object):
def __init__(self, head=None):
self.head = head
def __len__(self):
        length = 0
        current_node = self.head
        while current_node is not None:
            current_node = current_node.next_node
            length += 1
        return length
def insert_to_front(self, data):
if data is None:
return
node_to_insert = Node(data, next_node=self.head)
self.head = node_to_insert
def append(self, data):
if data is None:
return
node_to_append = Node(data)
        if self.head is None:
self.head = node_to_append
return node_to_append
current_node = self.head
while current_node.next_node is not None:
current_node = current_node.next_node
current_node.next_node = node_to_append
return node_to_append
def find(self, data):
if data is None:
return
current_node = self.head
while current_node is not None:
if current_node.data == data:
return current_node
current_node = current_node.next_node
    def delete(self, data):
        if data is None:
            return
        if self.head is None:
            return
        # handle a match at the head node, which the traversal below would skip
        if self.head.data == data:
            self.head = self.head.next_node
            return
        prev_node = self.head
        curr_node = prev_node.next_node
        while curr_node is not None:
            if curr_node.data == data:
                prev_node.next_node = curr_node.next_node
                return
            else:
                prev_node = curr_node
                curr_node = curr_node.next_node
def print_list(self):
current_node = self.head
while current_node is not None:
print(current_node)
current_node = current_node.next_node
def get_all_data(self):
data = []
current_node = self.head
while current_node is not None:
data.append(current_node.data)
current_node = current_node.next_node
return data
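
# Quick illustrative usage of the implementation above (not part of the original challenge code)
demo = LinkedList()
demo.append('a')
demo.append('b')
demo.insert_to_front('start')
demo.delete('b')
print(demo.get_all_data())  # expected: ['start', 'a']
print(len(demo))            # expected: 2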
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_linked_list.py
from nose.tools import assert_equal
class TestLinkedList(object):
def test_insert_to_front(self):
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
def test_append(self):
print('Test: append on an empty list')
linked_list = LinkedList(None)
linked_list.append(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append a None')
linked_list.append(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append general case')
linked_list.append('a')
linked_list.append('bc')
assert_equal(linked_list.get_all_data(), [10, 'a', 'bc'])
print('Success: test_append\n')
def test_find(self):
print('Test: find on an empty list')
linked_list = LinkedList(None)
node = linked_list.find('a')
assert_equal(node, None)
print('Test: find a None')
head = Node(10)
linked_list = LinkedList(head)
node = linked_list.find(None)
assert_equal(node, None)
print('Test: find general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
node = linked_list.find('a')
assert_equal(str(node), 'a')
print('Test: find general case with no matches')
node = linked_list.find('aaa')
assert_equal(node, None)
print('Success: test_find\n')
def test_delete(self):
print('Test: delete on an empty list')
linked_list = LinkedList(None)
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), [])
print('Test: delete a None')
head = Node(10)
linked_list = LinkedList(head)
linked_list.delete(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: delete general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Test: delete general case with no matches')
linked_list.delete('aa')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Success: test_delete\n')
def test_len(self):
print('Test: len on an empty list')
linked_list = LinkedList(None)
assert_equal(len(linked_list), 0)
print('Test: len general case')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(len(linked_list), 3)
print('Success: test_len\n')
def main():
test = TestLinkedList()
test.test_insert_to_front()
test.test_append()
test.test_find()
test.test_delete()
test.test_len()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a linked list with insert, append, find, delete, length, and print methods* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Is this a singly or doubly linked list? * Singly* Is this a circular list? * No* Do we keep track of the tail or just the head? * Just the head Test Cases Insert to Front* Insert a None* Insert in an empty list* Insert in a list with one element or more elements Append* Append a None* Append in an empty list* Insert in a list with one element or more elements Find* Find a None* Find in an empty list* Find in a list with one element or more matching elements* Find in a list with no matches Delete* Delete a None* Delete in an empty list* Delete in a list with one element or more matching elements* Delete in a list with no matches Length* Length of zero or more elements Print* Print an empty list* Print a list with one or more elements AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/linked_list/linked_list_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Node(object):
def __init__(self, data, next_node=None):
pass
# TODO: Implement me
def __str__(self):
pass
# TODO: Implement me
class LinkedList(object):
def __init__(self, head=None):
pass
# TODO: Implement me
def __len__(self):
pass
# TODO: Implement me
def insert_to_front(self, data):
pass
# TODO: Implement me
def append(self, data, next_node=None):
pass
# TODO: Implement me
def find(self, data):
pass
# TODO: Implement me
def delete(self, data):
pass
# TODO: Implement me
def print_list(self):
pass
# TODO: Implement me
def get_all_data(self):
pass
# TODO: Implement me
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_linked_list.py
from nose.tools import assert_equal
class TestLinkedList(object):
def test_insert_to_front(self):
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
def test_append(self):
print('Test: append on an empty list')
linked_list = LinkedList(None)
linked_list.append(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append a None')
linked_list.append(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append general case')
linked_list.append('a')
linked_list.append('bc')
assert_equal(linked_list.get_all_data(), [10, 'a', 'bc'])
print('Success: test_append\n')
def test_find(self):
print('Test: find on an empty list')
linked_list = LinkedList(None)
node = linked_list.find('a')
assert_equal(node, None)
print('Test: find a None')
head = Node(10)
linked_list = LinkedList(head)
node = linked_list.find(None)
assert_equal(node, None)
print('Test: find general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
node = linked_list.find('a')
assert_equal(str(node), 'a')
print('Test: find general case with no matches')
node = linked_list.find('aaa')
assert_equal(node, None)
print('Success: test_find\n')
def test_delete(self):
print('Test: delete on an empty list')
linked_list = LinkedList(None)
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), [])
print('Test: delete a None')
head = Node(10)
linked_list = LinkedList(head)
linked_list.delete(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: delete general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Test: delete general case with no matches')
linked_list.delete('aa')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Success: test_delete\n')
def test_len(self):
print('Test: len on an empty list')
linked_list = LinkedList(None)
assert_equal(len(linked_list), 0)
print('Test: len general case')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(len(linked_list), 3)
print('Success: test_len\n')
def main():
test = TestLinkedList()
test.test_insert_to_front()
test.test_append()
test.test_find()
test.test_delete()
test.test_len()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a linked list with insert, append, find, delete, length, and print methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Do we keep track of the tail or just the head? * Just the head* Can we insert None values? * No Test Cases Insert to Front* Insert a None* Insert in an empty list* Insert in a list with one element or more elements Append* Append a None* Append in an empty list* Insert in a list with one element or more elements Find* Find a None* Find in an empty list* Find in a list with one element or more matching elements* Find in a list with no matches Delete* Delete a None* Delete in an empty list* Delete in a list with one element or more matching elements* Delete in a list with no matches Length* Length of zero or more elements Print* Print an empty list* Print a list with one or more elements AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/linked_list/linked_list_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Node(object):
def __init__(self, data, next_node=None):
self.data = data
self.next = next_node
def __str__(self):
return str(self.data)
class LinkedList(object):
def __init__(self, head=None):
self.head = head
def __len__(self):
pass
def insert_to_front(self, data):
if data is None:
return None
node = Node(data, self.head)
self.head = node
return node
def append(self, data):
if data is None:
return None
if self.head is None:
self.head = Node(data)
return self.head
node = self.head
while node.next is not None:
node = node.next
node.next = Node(data)
return node
def find(self, data):
pass
# TODO: Implement me
def delete(self, data):
pass
# TODO: Implement me
def print_list(self):
pass
# TODO: Implement me
def get_all_data(self):
data = []
node = self.head
while node is not None:
data.append(node.data)
node = node.next
return data
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_linked_list.py
from nose.tools import assert_equal
class TestLinkedList(object):
def test_insert_to_front(self):
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
def test_append(self):
print('Test: append on an empty list')
linked_list = LinkedList(None)
linked_list.append(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append a None')
linked_list.append(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append general case')
linked_list.append('a')
linked_list.append('bc')
assert_equal(linked_list.get_all_data(), [10, 'a', 'bc'])
print('Success: test_append\n')
def test_find(self):
print('Test: find on an empty list')
linked_list = LinkedList(None)
node = linked_list.find('a')
assert_equal(node, None)
print('Test: find a None')
head = Node(10)
linked_list = LinkedList(head)
node = linked_list.find(None)
assert_equal(node, None)
print('Test: find general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
node = linked_list.find('a')
assert_equal(str(node), 'a')
print('Test: find general case with no matches')
node = linked_list.find('aaa')
assert_equal(node, None)
print('Success: test_find\n')
def test_delete(self):
print('Test: delete on an empty list')
linked_list = LinkedList(None)
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), [])
print('Test: delete a None')
head = Node(10)
linked_list = LinkedList(head)
linked_list.delete(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: delete general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Test: delete general case with no matches')
linked_list.delete('aa')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Success: test_delete\n')
def test_len(self):
print('Test: len on an empty list')
linked_list = LinkedList(None)
assert_equal(len(linked_list), 0)
print('Test: len general case')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(len(linked_list), 3)
print('Success: test_len\n')
def main():
test = TestLinkedList()
test.test_insert_to_front()
test.test_append()
test.test_find()
test.test_delete()
test.test_len()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a linked list with insert, append, find, delete, length, and print methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Do we keep track of the tail or just the head? * Just the head* Can we insert None values? * No Test Cases Insert to Front* Insert a None* Insert in an empty list* Insert in a list with one element or more elements Append* Append a None* Append in an empty list* Insert in a list with one element or more elements Find* Find a None* Find in an empty list* Find in a list with one element or more matching elements* Find in a list with no matches Delete* Delete a None* Delete in an empty list* Delete in a list with one element or more matching elements* Delete in a list with no matches Length* Length of zero or more elements Print* Print an empty list* Print a list with one or more elements AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/linked_list/linked_list_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Node(object):
def __init__(self, data, next_node=None):
pass
# TODO: Implement me
def __str__(self):
pass
# TODO: Implement me
class LinkedList(object):
def __init__(self, head=None):
pass
# TODO: Implement me
def __len__(self):
pass
# TODO: Implement me
def insert_to_front(self, data):
pass
# TODO: Implement me
def append(self, data):
pass
# TODO: Implement me
def find(self, data):
pass
# TODO: Implement me
def delete(self, data):
pass
# TODO: Implement me
def print_list(self):
pass
# TODO: Implement me
def get_all_data(self):
pass
# TODO: Implement me
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_linked_list.py
from nose.tools import assert_equal
class TestLinkedList(object):
def test_insert_to_front(self):
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
def test_append(self):
print('Test: append on an empty list')
linked_list = LinkedList(None)
linked_list.append(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append a None')
linked_list.append(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append general case')
linked_list.append('a')
linked_list.append('bc')
assert_equal(linked_list.get_all_data(), [10, 'a', 'bc'])
print('Success: test_append\n')
def test_find(self):
print('Test: find on an empty list')
linked_list = LinkedList(None)
node = linked_list.find('a')
assert_equal(node, None)
print('Test: find a None')
head = Node(10)
linked_list = LinkedList(head)
node = linked_list.find(None)
assert_equal(node, None)
print('Test: find general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
node = linked_list.find('a')
assert_equal(str(node), 'a')
print('Test: find general case with no matches')
node = linked_list.find('aaa')
assert_equal(node, None)
print('Success: test_find\n')
def test_delete(self):
print('Test: delete on an empty list')
linked_list = LinkedList(None)
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), [])
print('Test: delete a None')
head = Node(10)
linked_list = LinkedList(head)
linked_list.delete(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: delete general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Test: delete general case with no matches')
linked_list.delete('aa')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Success: test_delete\n')
def test_len(self):
print('Test: len on an empty list')
linked_list = LinkedList(None)
assert_equal(len(linked_list), 0)
print('Test: len general case')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(len(linked_list), 3)
print('Success: test_len\n')
def main():
test = TestLinkedList()
test.test_insert_to_front()
test.test_append()
test.test_find()
test.test_delete()
test.test_len()
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a linked list with insert, append, find, delete, length, and print methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Do we keep track of the tail or just the head? * Just the head* Can we insert None values? * No Test Cases Insert to Front* Insert a None* Insert in an empty list* Insert in a list with one element or more elements Append* Append a None* Append in an empty list* Insert in a list with one element or more elements Find* Find a None* Find in an empty list* Find in a list with one element or more matching elements* Find in a list with no matches Delete* Delete a None* Delete in an empty list* Delete in a list with one element or more matching elements* Delete in a list with no matches Length* Length of zero or more elements Print* Print an empty list* Print a list with one or more elements AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/linked_list/linked_list_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
from nose.tools import assert_equal
class Node(object):
def __init__(self, data, next_node=None):
self.data=data
self.next_node=next_node
#print('create node')
# TODO: Implement me
def __str__(self):
pass
# TODO: Implement me
class LinkedList(object):
def __init__(self, head=None):
#print('create LinkedList')
self.head=head
self.prevNode=head
self.length=0
def __len__(self):
self.length=0
node=self.head
while node:
#print('inside while ')
self.length+=1
node=node.next_node
return self.length
# TODO: Implement me
def insert_to_front(self, data):
#len()
#print('insert to front')
if data is None:
return 0
if self.head is None:
node=Node(data)
self.head=node
self.prevNode=node
else:
node=Node(data,self.prevNode)
self.head=node
self.prevNode=node
#self.head=node
#print(f'{node.data}')
def append(self, data):
if self.head is None or data is None:
#print('append when head is None')
self.insert_to_front(data)
return 0
append_node=Node(data)
#head=node
node=self.head
while node:
self.prevNode=node
node=node.next_node
self.prevNode.next_node=append_node
#previous=self.head
def find(self, data):
node=self.head
l=self.__len__()
for i in range(l):
if node.data==data:
return node.data
elif node.next_node is None:
return None
break
else:
node=node.next_node
return None
def delete(self, data):
if data is None:
return self.get_all_data()
l=self.__len__()
node=self.head
for i in range(l):
#print(f'i val is {i}')
if node.data==data:
#print('found')
self.prevNode.next_node=node.next_node
del node
return self.get_all_data()
elif node.next_node is None:
#print('not found')
break
else:
#print(f'inside else block of delete')
self.prevNode=node
node=node.next_node
return self.get_all_data()
def print_list(self):
pass
# TODO: Implement me
def get_all_data(self):
l=self.__len__()
#print(f'lenth is {l}')
list1=[]
node=self.head
for i in range(l):
#print('inside for')
list1.append(node.data)
if node.next_node:
#print(f'{node.next_node}')
node=node.next_node
#print(list1)
return list1
    #def get_reverse(self, head=self.head):  # this won't work because a default value cannot access self
def get_reverse(self,head=None):
# if head is None:
# head=self.head
        head = head or self.head  # short form for the above if condition with None
print(id(self.head))
l=self.__len__()
node=head
current=node
next=node.next_node
if next:
print(f'data is {node.data}')
print('next is not none')
self.get_reverse(next)
print('returned to recurse fun')
#print(self.q.data)
if next is None:
print(f'data is {node.data}')
print('next is none')
#self.head=next
#global x
self.head=current
print(id(self.head))
return
current.next_node.next_node=current
print(current.next_node.next_node.data)
current.next_node=None
print(node.data)
print(next.data)
#print(node.next_node.next_node.data)
current.next_node=None
print(f'gloabal head x is {head.data}')
return
def get_loop(self,head=None):
        head = head or self.head  # short form for the above if condition with None
print(id(self.head))
l=self.__len__()
node=head
current=self.head
next=self.head
while(current and next and next.next_node):
current=current.next_node
            next=next.next_node.next_node  # the fast pointer must advance from its own position each iteration
if(current==next):
return current
return None
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
#print(linked_list.head)
print(id(linked_list.head))
#linked_list.get_reverse(linked_list.head)
linked_list.get_reverse()
print(linked_list.get_loop())
print(id(linked_list.head))
#print(linked_list.head)
linked_list.get_all_data()
###Output
Test: insert_to_front on an empty list
Test: insert_to_front on a None
Test: insert_to_front general case
Success: test_insert_to_front
1915963401288
1915963401288
data is bc
next is not none
1915963401288
data is a
next is not none
1915963401288
data is 10
next is none
1915952055560
returned to recurse fun
a
a
10
gloabal head x is a
returned to recurse fun
bc
bc
a
gloabal head x is bc
1915952055560
None
1915952055560
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_linked_list.py
from nose.tools import assert_equal
class TestLinkedList(object):
def test_insert_to_front(self):
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
def test_append(self):
print('Test: append on an empty list')
linked_list = LinkedList(None)
linked_list.append(10)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append a None')
linked_list.append(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: append general case')
linked_list.append('a')
linked_list.append('bc')
assert_equal(linked_list.get_all_data(), [10, 'a', 'bc'])
print('Success: test_append\n')
def test_find(self):
print('Test: find on an empty list')
linked_list = LinkedList(None)
node = linked_list.find('a')
assert_equal(node, None)
print('Test: find a None')
head = Node(10)
linked_list = LinkedList(head)
node = linked_list.find(None)
assert_equal(node, None)
print('Test: find general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
node = linked_list.find('a')
assert_equal(str(node), 'a')
print('Test: find general case with no matches')
node = linked_list.find('aaa')
assert_equal(node, None)
print('Success: test_find\n')
def test_delete(self):
print('Test: delete on an empty list')
linked_list = LinkedList(None)
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), [])
print('Test: delete a None')
head = Node(10)
linked_list = LinkedList(head)
linked_list.delete(None)
assert_equal(linked_list.get_all_data(), [10])
print('Test: delete general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
linked_list.delete('a')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Test: delete general case with no matches')
linked_list.delete('aa')
assert_equal(linked_list.get_all_data(), ['bc', 10])
print('Success: test_delete\n')
def test_len(self):
print('Test: len on an empty list')
linked_list = LinkedList(None)
assert_equal(len(linked_list), 0)
print('Test: len general case')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
assert_equal(len(linked_list), 3)
print('Success: test_len\n')
def main():
test = TestLinkedList()
test.test_insert_to_front()
# test.test_append()
# test.test_find()
# test.test_delete()
# test.test_len()
if __name__ == '__main__':
main()
###Output
Test: insert_to_front on an empty list
Test: insert_to_front on a None
Test: insert_to_front general case
Success: test_insert_to_front
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Challenge Notebook Problem: Implement a linked list with insert, append, find, delete, length, and print methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test)* [Solution Notebook](Solution-Notebook) Constraints* Can we assume this is a non-circular, singly linked list? * Yes* Do we keep track of the tail or just the head? * Just the head* Can we insert None values? * No Test Cases Insert to Front* Insert a None* Insert in an empty list* Insert in a list with one element or more elements Append* Append a None* Append in an empty list* Insert in a list with one element or more elements Find* Find a None* Find in an empty list* Find in a list with one element or more matching elements* Find in a list with no matches Delete* Delete a None* Delete in an empty list* Delete in a list with one element or more matching elements* Delete in a list with no matches Length* Length of zero or more elements Print* Print an empty list* Print a list with one or more elements AlgorithmRefer to the [Solution Notebook](http://nbviewer.ipython.org/github/donnemartin/interactive-coding-challenges/blob/master/linked_lists/linked_list/linked_list_solution.ipynb). If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start. Code
###Code
class Node(object):
def __init__(self, data, next_node=None):
pass
# TODO: Implement me
def __str__(self):
pass
# TODO: Implement me
class LinkedList(object):
def __init__(self, head=None):
pass
# TODO: Implement me
def __len__(self):
pass
# TODO: Implement me
def insert_to_front(self, data):
pass
# TODO: Implement me
def append(self, data):
pass
# TODO: Implement me
def find(self, data):
pass
# TODO: Implement me
def delete(self, data):
pass
# TODO: Implement me
def print_list(self):
pass
# TODO: Implement me
def get_all_data(self):
pass
# TODO: Implement me
###Output
_____no_output_____
###Markdown
Unit Test **The following unit test is expected to fail until you solve the challenge.**
###Code
# %load test_linked_list.py
import unittest
class TestLinkedList(unittest.TestCase):
def test_insert_to_front(self):
print('Test: insert_to_front on an empty list')
linked_list = LinkedList(None)
linked_list.insert_to_front(10)
self.assertEqual(linked_list.get_all_data(), [10])
print('Test: insert_to_front on a None')
linked_list.insert_to_front(None)
self.assertEqual(linked_list.get_all_data(), [10])
print('Test: insert_to_front general case')
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
self.assertEqual(linked_list.get_all_data(), ['bc', 'a', 10])
print('Success: test_insert_to_front\n')
def test_append(self):
print('Test: append on an empty list')
linked_list = LinkedList(None)
linked_list.append(10)
self.assertEqual(linked_list.get_all_data(), [10])
print('Test: append a None')
linked_list.append(None)
self.assertEqual(linked_list.get_all_data(), [10])
print('Test: append general case')
linked_list.append('a')
linked_list.append('bc')
self.assertEqual(linked_list.get_all_data(), [10, 'a', 'bc'])
print('Success: test_append\n')
def test_find(self):
print('Test: find on an empty list')
linked_list = LinkedList(None)
node = linked_list.find('a')
self.assertEqual(node, None)
print('Test: find a None')
head = Node(10)
linked_list = LinkedList(head)
node = linked_list.find(None)
self.assertEqual(node, None)
print('Test: find general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
node = linked_list.find('a')
self.assertEqual(str(node), 'a')
print('Test: find general case with no matches')
node = linked_list.find('aaa')
self.assertEqual(node, None)
print('Success: test_find\n')
def test_delete(self):
print('Test: delete on an empty list')
linked_list = LinkedList(None)
linked_list.delete('a')
self.assertEqual(linked_list.get_all_data(), [])
print('Test: delete a None')
head = Node(10)
linked_list = LinkedList(head)
linked_list.delete(None)
self.assertEqual(linked_list.get_all_data(), [10])
print('Test: delete general case with matches')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
linked_list.delete('a')
self.assertEqual(linked_list.get_all_data(), ['bc', 10])
print('Test: delete general case with no matches')
linked_list.delete('aa')
self.assertEqual(linked_list.get_all_data(), ['bc', 10])
print('Success: test_delete\n')
def test_len(self):
print('Test: len on an empty list')
linked_list = LinkedList(None)
self.assertEqual(len(linked_list), 0)
print('Test: len general case')
head = Node(10)
linked_list = LinkedList(head)
linked_list.insert_to_front('a')
linked_list.insert_to_front('bc')
self.assertEqual(len(linked_list), 3)
print('Success: test_len\n')
def main():
test = TestLinkedList()
test.test_insert_to_front()
test.test_append()
test.test_find()
test.test_delete()
test.test_len()
if __name__ == '__main__':
main()
###Output
_____no_output_____ |
Python-Programming/Python-3-Bootcamp/16-Bonus Material - Introduction to GUIs/.ipynb_checkpoints/07-Advanced Widget List-checkpoint.ipynb | ###Markdown
Advanced Widget ListThis notebook is an extension of **Widget List**, describing even more of the GUI widgets available!
###Code
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
OutputThe `Output` widget can capture and display stdout, stderr and [rich output generated by IPython](http://ipython.readthedocs.io/en/stable/api/generated/IPython.display.htmlmodule-IPython.display). After the widget is created, direct output to it using a context manager.
###Code
out = widgets.Output()
out
###Output
_____no_output_____
###Markdown
You can print text to the output area as shown below.
###Code
with out:
for i in range(10):
print(i, 'Hello world!')
###Output
_____no_output_____
###Markdown
Rich material can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the `Output` widget.
###Code
from IPython.display import YouTubeVideo
with out:
display(YouTubeVideo('eWzY2nGfkXk'))
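# Illustrative extra (not in the original notebook): the captured content of the
# Output widget can be cleared again with clear_output().
out.clear_output()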
###Output
_____no_output_____
###Markdown
Play (Animation) widgetThe `Play` widget is useful to perform animations by iterating on a sequence of integers with a certain speed. The value of the slider below is linked to the player.
###Code
play = widgets.Play(
# interval=10,
value=50,
min=0,
max=100,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
###Output
_____no_output_____
###Markdown
Date pickerThe date picker widget works in Chrome and IE Edge, but does not currently work in Firefox or Safari because they do not support the HTML date input field.
###Code
widgets.DatePicker(
description='Pick a Date',
disabled=False
)
###Output
_____no_output_____
###Markdown
Color picker
###Code
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
###Output
_____no_output_____
###Markdown
ControllerThe `Controller` allows a game controller to be used as an input device.
###Code
widgets.Controller(
index=0,
)
###Output
_____no_output_____
###Markdown
Container/Layout widgetsThese widgets are used to hold other widgets, called children. Each has a `children` property that may be set either when the widget is created or later. Box
###Code
items = [widgets.Label(str(i)) for i in range(4)]
widgets.Box(items)
###Output
_____no_output_____
###Markdown
HBox
###Code
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
###Output
_____no_output_____
###Markdown
VBox
###Code
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
###Output
_____no_output_____
###Markdown
Accordion
###Code
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
###Output
_____no_output_____
###Markdown
TabsIn this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for `Accordion`.
###Code
tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
for i in range(len(children)):
tab.set_title(i, str(i))
tab
###Output
_____no_output_____
###Markdown
Accordion and Tab use `selected_index`, not valueUnlike the rest of the widgets discussed earlier, the container widgets `Accordion` and `Tab` update their `selected_index` attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing *and* programmatically set what the user sees by setting the value of `selected_index`.Setting `selected_index = None` closes all of the accordions or deselects all tabs.In the cells below try displaying or setting the `selected_index` of the `tab` and/or `accordion`.
###Code
tab.selected_index = 3
accordion.selected_index = None
###Output
_____no_output_____
###Markdown
Nesting tabs and accordionsTabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion.The example below makes a couple of tabs, each with an accordion as its child
###Code
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.set_title(0, 'An accordion')
tab_nest.set_title(1, 'Copy of the accordion')
tab_nest
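# Illustrative sketch (not in the original notebook): react to the user switching tabs
# by observing the selected_index trait of the nested tab widget.
def on_tab_change(change):
    print('Now showing tab', change['new'])

tab_nest.observe(on_tab_change, names='selected_index')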
###Output
_____no_output_____ |
r/assignment.ipynb | ###Markdown
Applied Process Mining ModuleThis notebook is part of an Applied Process Mining module. The collection of notebooks is a *living document* and subject to change. Assignment - BPI Challenge 2020 SetupIn this notebook, we are going to need the `tidyverse` and the `bupaR` packages.
###Code
## Perform the commented out commands below in a separate R session
# install.packages("tidyverse")
# install.packages("bupaR")
# for larger and readable plots
options(jupyter.plot_scale=1.25)
# the initial execution of these may give you warnings that you can safely ignore
library(tidyverse)
library(bupaR)
library(processanimateR)
###Output
Attaching package: 'bupaR'
The following object is masked from 'package:stats':
filter
The following object is masked from 'package:utils':
timestamp
###Markdown
Assignment In the first hands-on session, you are going to explore a real-life dataset and apply what was presented in the lecture about event logs and basic process mining visualizations. The objective is to explore your dataset as an event log with the learned process mining visualizations in mind.* Analyse basic properties of the process (business process or other process) that has generated it. * What are possible case notions / what is the case identifier or what are the case identifiers? * What are the activities? Are all activities on the same abstraction level? Can activities be derived from other data? * Can activities or actions be derived from other (non-activity) data?* Discover a map of the process (or a sub-process) behind it. * Are there multiple processes that can be discovered? * What is the effect of taking a subset of the data? Dataset The proposed real-life dataset to investigate is the *BPI Challenge 2020* dataset. The dataset is captured from the travel reimbursement process of Eindhoven University of Technology and has been collected for usage in the BPI challenge. The BPI challenge is a yearly event in the Process Mining research community in which an event log is released along with some business questions that shall be addressed with process analytics techniques.Here is more information on the dataset and download links to the data files:* [Overview of the Case](https://icpmconference.org/2020/bpi-challenge/)* [Dataset](https://doi.org/10.4121/uuid:52fb97d4-4588-43c9-9d04-3604d4613b51)On the BPI Challenge 2020 website above, there are several reports (including the winners of the challenge) that describe and analyze the dataset in detail. However, we suggest that you first try to explore the dataset without reading the reports. The business questions and a description of the process flow can also be found at the BPI Challenge 2020 website. We repeat it here for convenience: Process FlowThe various declaration documents (domestic and international declarations, pre-paid travel costs and requests for payment) all follow a similar process flow. After submission by the employee, the request is sent for approval to the travel administration. If approved, the request is then forwarded to the budget owner and after that to the supervisor. If the budget owner and supervisor are the same person, then only one of these steps is taken. In some cases, the director also needs to approve the request.In all cases, a rejection leads to one of two outcomes. Either the employee resubmits the request, or the employee also rejects the request.If the approval flow has a positive result, the payment is requested and made.The travel permits follow a slightly different flow as there is no payment involved. Instead, after all approval steps a trip can take place, indicated with an estimated start and end date. These dates are not exact travel dates, but rather estimated by the employee when the permit request is submitted. The actual travel dates are not recorded in the data, but should be close to the given dates in most cases.After the end of a trip, an employee receives several reminders to submit a travel declaration.After a travel permit is approved, but before the trip starts, employees can ask for a reimbursement of pre-paid travel costs. Several requests can be submitted independently of each other. 
After the trip ends, an international declaration can be submitted, although sometimes multiple declarations are seen for specific cases.It’s important to realize that the process described above is the process for 2018. For 2017, there are some differences as this was a pilot year and the process changed slightly on several occasions. Business QuestionsThe following questions are of interest:* What is the throughput of a travel declaration from submission (or closing) to paying?* Is there a difference in throughput between national and international trips?* Are there differences between clusters of declarations, for example between cost centers/departments/projects etc.?* What is the throughput in each of the process steps, i.e. the submission, judgement by various responsible roles and payment?* Where are the bottlenecks in the process of a travel declaration?* Where are the bottlenecks in the process of a travel permit (note that there can be multiple requests for payment and declarations per permit)?* How many travel declarations get rejected in the various processing steps and how many are never approved?Then there are more detailed questions:* How many travel declarations are booked on projects?* How many corrections have been made for declarations?* Are there any double payments?* Are there declarations that were not preceded properly by an approved travel permit? Or are there even declarations for which no permit exists?* How many travel declarations are submitted by the traveler and how many by a mandated person?* How many travel declarations are first rejected because they are submitted more than 2 months after the end of a trip and are then re-submitted?* Is this different between departments?* How many travel declarations are not approved by budget holders in time (7 days) and are then automatically rerouted to supervisors?* Next to travel declarations, there are also requests for payments. These are specific for non-TU/e employees. Are there any TU/e employees that submitted a request for payment instead of a travel declaration?Similar to the task at the BPI challenge, we are aware that not all questions can be answered on this dataset and we encourage you to come up with new and interesting insights. Data Loading Several datasets have been released as part of the BPI challenge. The data is split into travel permits and several request types, namely domestic declarations, international declarations, prepaid travel costs and requests for payment, where the latter refers to expenses which should not be related to trips (think of representation costs, hardware purchased for work, etc.). At Eindhoven University of Technology (TU/e), this is no different. The TU/e staff travels a lot to conferences or to other universities for project meetings and/or to meet up with colleagues in the field. And, like many companies, they have procedures in place for arranging the travels as well as for the reimbursement of costs.To make your life a bit easier, we have created the initial code to load the dataset that is already stored in the [XES format](http://xes-standard.org/) for event logs.
###Code
read_xes_gzip <- function(xes_url) {
temp <- tempfile(fileext = ".xes.gz")
download.file(xes_url, temp, mode = "wb")
temp_xes <- tempfile()
R.utils::gunzip(temp, temp_xes)
xes <- xesreadR::read_xes(temp_xes)
unlink(temp)
unlink(temp_xes)
return(xes)
}
# some warnings are expected here (bupaR needs an updating)
rfp_data <- read_xes_gzip("https://data.4tu.nl/ndownloader/files/24061154")
ptc_data <- read_xes_gzip("https://data.4tu.nl/ndownloader/files/24043835")
int_decl_data <- read_xes_gzip("https://data.4tu.nl/ndownloader/files/24023492")
dom_decl_data <- read_xes_gzip("https://data.4tu.nl/ndownloader/files/24031811")
rfp_data %>% summary()
ptc_data %>% summary()
int_decl_data %>% summary()
dom_decl_data %>% summary()
###Output
Number of events: 56437
Number of cases: 10500
Number of traces: 99
Number of distinct activities: 17
Average trace length: 5.374952
Start eventlog: 2017-01-09 08:49:50
End eventlog: 2019-06-17 15:30:58
|
notebooks/1_preprocessing.ipynb | ###Markdown
Preprocessing This notebook is the first in the series of soiling detection pipeline notebooks. Data from other parks (e.g. Park1) can be used by changing the filepaths and working_dir. Author: Lisa Crowther
###Code
import pandas as pd
import numpy as np
import matplotlib
import copy
import matplotlib.pyplot as plt
from pathlib import Path
from sys import path as syspath
syspath.insert(1, '../src/')
###Output
_____no_output_____
###Markdown
Import dataframes from previous notebook
###Code
root_path = "../data/raw/New_data/"
park2_power_filepath = root_path + "SolarPark2_Oct_2019_Oct2020_string_production.csv"
park2_environment_filepath = root_path + "Solarpark2_Oct_2019_Oct2020_environmental.csv"
park2_capacity_filepath = root_path + "Solarpark_2_CB_capacity.csv"
working_dir = "../data/temp/park2/"
def read_data(power_data_filepath, env_data_filepath, cap_data_filepath):
df_pow = pd.read_csv(power_data_filepath, delimiter=';',parse_dates=['datetime'], date_parser = pd.to_datetime, index_col='datetime')
df_env = pd.read_csv(env_data_filepath, delimiter = ',',parse_dates=['datetime'], date_parser = pd.to_datetime, index_col='datetime')
df_cap = pd.read_csv(cap_data_filepath)
return [df_pow, df_env, df_cap]
df_pow, df_env, df_cap = read_data(park2_power_filepath, park2_environment_filepath, park2_capacity_filepath)
###Output
_____no_output_____
###Markdown
Clean dataframes:Rename columns
###Code
df_env.columns = ['Temp_A', 'Temp_P', 'Irradiance']
df_cap.columns= ['displayname', 'capacity_kW', 'number_panels']
Inversors =df_pow.columns[(df_pow.columns).str.contains('Inv')]
RCBs =df_pow.columns[(df_pow.columns).str.contains('RCB')]
strings = df_pow.columns[(df_pow.columns).str.contains('ST')]
CBs=df_pow.columns[(df_pow.columns).str.contains('CB')]
###Output
_____no_output_____
###Markdown
Remove RCB columns (in park 1 these contain only NAs). Remove inversor columns (we want to analyse individual strings or CBs).
###Code
df_pow.drop(columns=(RCBs), inplace=True)
df_pow.drop(columns=(Inversors), inplace=True)
df_cap.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
Calculate efficiency of panels
###Code
pan_No = df_cap.number_panels
pan_No.index=df_cap.displayname
cap= df_cap.capacity_kW
cap.index=df_cap.displayname
panelArea=1.956*.992
#in m2, from datasheet
totalPanelA= pan_No*panelArea
totalPanelA.dropna(inplace=True)
Efficiency = cap/totalPanelA
Efficiency= round(Efficiency.tail(1).values[0],4)
###Output
_____no_output_____
###Markdown
Merge power and environment dataframes, drop rows where Irradiance is NA
###Code
power_env = pd.merge(df_pow,df_env, on=['datetime'], how='inner')
power_env_sub = power_env.dropna(subset=['Irradiance'])
df_env_sub =df_env.dropna(subset=['Irradiance'])
###Output
_____no_output_____
###Markdown
Calculate theoretical outputs (maxP)
###Code
# env data without nas
df_env_sub = df_env_sub.dropna()
# Irradiance and temperature adjustment
To = 25
gamma = -0.004
df_env_sub['irr_T_adj'] = df_env_sub.Irradiance/1000 * (1+((df_env_sub.Temp_P-To) * gamma))
factor_Irr_Temp = df_env_sub.drop(columns=['Temp_A','Temp_P','Irradiance'])
# Panel area and efficiency adjustment
AE = (totalPanelA*Efficiency).dropna()
AE.unique()
#multiply the area and efficiency for each string by the irradiance and temperature adjustment factor
#this has only dates where irradiance was not NA
for i in range(len(AE)):
factor_Irr_Temp[AE.index[i]]=factor_Irr_Temp.irr_T_adj*AE[i]
#this is the A * E for each string multiplied by the irradiance * temperature adjustment factor : ie power max in Watts
## Theoretical output dataframe: adjusted power output values and drop the adjustment factor column
maxP_df = copy.deepcopy(factor_Irr_Temp.drop(columns=['irr_T_adj']))
# Output dataframe where irradiance is not NA
output_sub= power_env_sub.drop(columns=['Temp_A','Temp_P', 'Irradiance'])
output_sub.head()
maxP_df.columns=output_sub.columns
###Output
_____no_output_____
###Markdown
Calculate Energy Performance Index (EPI): power output divided by the theoretical calculated output
###Code
EPI= output_sub.div(maxP_df)
EPI.median(axis=1).plot()
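# Illustrative extension (assumption: roughly 15-minute sampling, so a 96-sample window
# is about one day; adjust to the park's actual resolution). A rolling median makes
# gradual soiling trends easier to see than the raw EPI median.
EPI.median(axis=1).rolling(window=96, min_periods=1).median().plot(title='Rolling median EPI')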
###Output
_____no_output_____
###Markdown
Save csvs: Output data of strings/CBs only, EPIs of strings/CBs, theoretical output of strings/CBs
###Code
##save data function from Marcus's scripts
def save_data(dataframes, names, root_dir, sub_dir):
if root_dir[-1] != "/":
root_dir += "/"
if sub_dir[-1] != "/":
root_dir += sub_dir + "/"
for data, name in zip(dataframes, names):
try:
filepath_out = root_dir + name + ".csv"
Path(root_dir).mkdir(parents=True, exist_ok=True)
print(f"\tSaving {filepath_out}...")
data.to_csv(filepath_out)
print("\tDone.")
except Exception as e:
print(e)
pass
save_data([output_sub, EPI, maxP_df], ["df_output", "df_EPI", 'df_theor_output'], working_dir, "preprocessing")
###Output
Saving ../data/temp/park2/preprocessing/df_output.csv...
Done.
Saving ../data/temp/park2/preprocessing/df_EPI.csv...
Done.
Saving ../data/temp/park2/preprocessing/df_theor_output.csv...
Done.
###Markdown
Loading Articles
###Code
# Load in each article
a1 = pd.read_parquet('~/Documents/bert-news/data/articles1.gzip')
a2 = pd.read_parquet('~/Documents/bert-news/data/articles2.gzip')
a3 = pd.read_parquet('~/Documents/bert-news/data/articles3.gzip')
# Concatenate articles together
articles = pd.concat([a1, a2, a3], ignore_index=True)
del a1, a2, a3
# For now, including 140K articles should be enough
articles.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 142570 entries, 0 to 142569
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 title 142570 non-null object
1 author 142570 non-null object
2 publication 142570 non-null object
3 content 142570 non-null object
dtypes: object(4)
memory usage: 4.4+ MB
###Markdown
Only Include Top Publications
###Code
# Maybe, focus on top news sources
articles['publication'].value_counts()
# Keep only the six top publications listed below
pubs = ['Breitbart', 'New York Post', 'NPR', 'CNN', 'Washington Post', 'New York Times']
articles = articles[articles['publication'].isin(pubs)]
# Another glimpse!
articles['publication'].value_counts()
###Output
_____no_output_____
###Markdown
Only Include Articles with Authors
###Code
# Occurrence of NULL authors
articles['author'].value_counts()
# Remove authors recorded as the string 'nan'
articles = articles[articles['author'] != 'nan']
# Another glimpse!
articles['author'].value_counts()
###Output
_____no_output_____
###Markdown
Assign Publications to Political Party
###Code
# Determine party based on PEW survey
right_pubs = ['Breitbart', 'New York Post']
# Assign publication to party
articles['party'] = 'left'
articles.loc[articles['publication'].isin(right_pubs), 'party'] = 'right'
# Another glimpse!
articles['party'].value_counts()
###Output
_____no_output_____
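###Markdown
A quick sanity check (sketch, assuming the `articles` frame from the cells above): cross-tabulating publication against the assigned party makes it easy to confirm that only Breitbart and the New York Post ended up on the right.
###Code
import pandas as pd
# Publication-by-party counts after the assignment above.
pd.crosstab(articles['publication'], articles['party'])
###Output
_____no_output_____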
###Markdown
Stratify on Publications
###Code
# For each publication,
# randomly select the same number of
# articles as the publication with the
# fewest number of articles
articles['publication'].value_counts()
# Stratify articles by publication
min_strat = articles.groupby('publication').size().min()
articles = articles.groupby('publication').apply(lambda x: x.sample(min_strat))
# Another glimpse!
articles['publication'].value_counts()
###Output
_____no_output_____
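###Markdown
The stratified sample above is drawn with `DataFrame.sample` and no fixed seed, so re-running the notebook gives a different subset each time. A minimal sketch (not part of the original pipeline) of the same downsampling with a reproducible seed:
###Code
# Sketch only -- same logic as the cell above, with a fixed random_state.
import pandas as pd

def stratify_to_smallest(df, group_col, seed=42):
    # sample every group down to the size of the smallest group
    n_min = df.groupby(group_col).size().min()
    return (df.groupby(group_col, group_keys=False)
              .apply(lambda g: g.sample(n_min, random_state=seed)))

# usage (assuming the `articles` frame from the cells above):
# articles = stratify_to_smallest(articles, 'publication')
###Output
_____no_output_____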
###Markdown
Save and Serialize Data
###Code
# Save preprocessed data
articles.to_parquet('~/Downloads/proc_articles.gzip', compression='gzip')
###Output
_____no_output_____
###Markdown
Preprocessing of Topic Modeling Project for Emergency Medicine
###Code
import pandas as pd
import os
def read_files():
    files = os.listdir("../data")
    files_xls = [f for f in files if f[-3:] == 'xls']
    frames = []
    for f in files_xls:
        data = pd.read_excel("../data/" + f, header=1, index_col=0)
        frames.append(data)
    # concatenate all spreadsheets into one dataframe
    return pd.concat(frames)
df = read_files()
df.head()
print("Number of articles prior to processing: ", len(df))
df.to_csv('../Data/data_uncleaned.csv', index=False)
def process_initial(df):
    """Cleans the initial data set."""
    # keep only original research articles: drop case reports, comments, reviews, editorials and letters
    df = df[~df['PT'].str.contains('Case|Comment|Review|Editorial|Letter')]
df = df.filter(items = ['AB', 'SO', 'TI','YR'])
df['SO'] = df['SO'].str.split('.').str[0]
df = df.rename(index=str, columns={"AB": "abstract", "SO": "journal", "TI": "title", "YR":"year"})
df = df.reset_index()
# add column with title + abstract
df['title_abstract'] = df[['title', 'abstract']].apply(lambda x: ' '.join(x.astype(str)), axis=1)
df = df.filter(items = ['title', 'abstract', 'title_abstract', 'journal', 'year'])
df = df.reset_index(drop=True)
return df
df = process_initial(df)
df.head()
print("Number of articles after removing Cases, Comments, Review, etc.: ", len(df))
df = df.dropna()
print("Number of articles after ones without abstracts: ", len(df))
# save file
df.to_csv('../Data/data_cleaned.csv', index=False)
###Output
_____no_output_____ |
notebooks/lstm_conditional_.ipynb | ###Markdown
Imports
###Code
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.optimizers.schedules import ExponentialDecay # from https://arxiv.org/pdf/1506.02078.pdf
from tensorflow.keras.callbacks import EarlyStopping
from tqdm.notebook import tqdm
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
###Output
Num GPUs Available: 0
###Markdown
Hyper-parameters
###Code
tunable_hparams = {
'stateful_generation' : True,
'mapping_type' : 'seq2seq',
'early_stopping' : False,
'seq_length' : 200,
'game' : 'mario'
}
fixed_hparams = {
'hidden_size' : 128,
'learning_rate' : 2e-3,
'learning_rate_decay' : 0.95,
'dropout' : 0.5,
'batch_size' : 100,
'num_layers' : 3,
'max_epochs' : 50
}
# unpack both hyper-parameter dicts into top-level variables (seq_length, hidden_size, ...)
for key, val in tunable_hparams.items():
    exec(key + '=val')
for key, val in fixed_hparams.items():
    exec(key + '=val')
###Output
_____no_output_____
###Markdown
Data
###Code
data = open('corpuses/mario_corpus_conditional.txt', 'r').read()
level_strs = data.rstrip().split(')')[:-1]
print(len(level_strs))
chars = []
for level_str in level_strs:
chars.extend(list(level_str))
chars = list(set(chars))
vocab_size = len(chars)
print(chars, vocab_size)
char_to_ix = { ch:i for i, ch in enumerate(chars) }
ix_to_char = { i:ch for i, ch in enumerate(chars) }
ix_to_char
level_arrays = []
for level_str in level_strs:
level_arrays.append(np.array([char_to_ix[char] for char in list(level_str)]))
def get_inputs_and_targets_from_level_array(level_array):
inputs, targets = [], []
for i in range(len(level_array) - seq_length):
inputs.append(level_array[i:i+seq_length])
targets.append(level_array[i+1:i+seq_length+1])
inputs, targets = map(np.array, [inputs, targets])
inputs = np.eye(vocab_size)[inputs]
return inputs, targets
inputs, targets = [], []
for level_array in tqdm(level_arrays, leave=False):
inputs_temp, targets_temp = get_inputs_and_targets_from_level_array(level_array)
inputs.extend(inputs_temp); targets.extend(targets_temp)
inputs, targets = map(np.array, [inputs, targets])
inputs.shape, targets.shape
###Output
_____no_output_____
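###Markdown
To make the shapes above concrete, here is a toy version of the same sliding-window construction (3-symbol vocabulary, window length 3 -- illustrative values, not the notebook's settings):
###Code
import numpy as np

toy_seq_length, toy_vocab = 3, 3
level = np.array([0, 1, 2, 1, 0])                            # a tiny encoded "level"
toy_inputs, toy_targets = [], []
for i in range(len(level) - toy_seq_length):
    toy_inputs.append(level[i:i + toy_seq_length])           # window of symbols
    toy_targets.append(level[i + 1:i + toy_seq_length + 1])  # same window shifted by one
toy_inputs = np.eye(toy_vocab)[np.array(toy_inputs)]         # one-hot inputs: (2, 3, 3)
toy_targets = np.array(toy_targets)                          # integer targets: (2, 3)
print(toy_inputs.shape, toy_targets.shape)
###Output
_____no_output_____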
###Markdown
Model callbacks
###Code
lr_scheduler = ExponentialDecay(
initial_learning_rate=learning_rate,
decay_steps=len(inputs) // batch_size,
decay_rate=learning_rate_decay,
)
optimizer = RMSprop(learning_rate=lr_scheduler)
es_callback = EarlyStopping(
monitor='val_out_acc_custom_acc', mode='max', patience=2, restore_best_weights=early_stopping
)
def custom_loss(y_true, y_pred):
scce = tf.keras.losses.SparseCategoricalCrossentropy()
return scce(
tf.reshape(y_true, shape=(tf.shape(y_true)[0] * seq_length, )),
tf.reshape(y_pred, shape=(tf.shape(y_pred)[0] * seq_length, vocab_size))
)
def custom_acc(y_true, y_pred):
return tf.math.reduce_mean(
tf.cast(
tf.math.equal(
tf.math.argmax(tf.reshape(y_pred, shape=(tf.shape(y_pred)[0] * seq_length, vocab_size)), axis=-1),
tf.cast(tf.reshape(y_true, shape=(tf.shape(y_true)[0] * seq_length, )), dtype=tf.int64)
),
dtype=tf.float32
)
)
###Output
_____no_output_____
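###Markdown
A tiny self-contained check (toy sizes, not the notebook's `seq_length`/`vocab_size`) of what `custom_acc` computes: predictions and targets are flattened over the batch and time dimensions, and accuracy is the fraction of timesteps where the argmax of the predicted distribution matches the integer target.
###Code
import tensorflow as tf

toy_vocab_size = 3
y_true = tf.constant([[0., 1., 2., 1.]])                      # (batch=1, seq=4) integer targets
y_pred = tf.one_hot([[0, 1, 2, 0]], depth=toy_vocab_size)     # (1, 4, 3) "predicted" distributions

flat_true = tf.cast(tf.reshape(y_true, (-1,)), tf.int64)
flat_pred = tf.reshape(y_pred, (-1, toy_vocab_size))
acc = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(flat_pred, axis=-1), flat_true), tf.float32))
print(acc.numpy())  # 0.75 -- three of the four timesteps match
###Output
_____no_output_____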
###Markdown
Model definition
###Code
lstm_1_state_h_in = keras.layers.Input(shape=[hidden_size])
lstm_1_state_c_in = keras.layers.Input(shape=[hidden_size])
lstm_2_state_h_in = keras.layers.Input(shape=[hidden_size])
lstm_2_state_c_in = keras.layers.Input(shape=[hidden_size])
lstm_3_state_h_in = keras.layers.Input(shape=[hidden_size])
lstm_3_state_c_in = keras.layers.Input(shape=[hidden_size])
input = keras.layers.Input(shape=[seq_length, vocab_size])
out, lstm_1_state_h_out, lstm_1_state_c_out = keras.layers.LSTM(hidden_size, return_sequences=True, return_state=True)(
input, initial_state=[lstm_1_state_h_in, lstm_1_state_c_in]
)
out = layers.Dropout(dropout)(out)
out, lstm_2_state_h_out, lstm_2_state_c_out = keras.layers.LSTM(hidden_size, return_sequences=True, return_state=True)(
out, initial_state=[lstm_2_state_h_in, lstm_2_state_c_in]
)
out = layers.Dropout(dropout)(out)
out, lstm_3_state_h_out, lstm_3_state_c_out = keras.layers.LSTM(hidden_size, return_sequences=True, return_state=True)(
out, initial_state=[lstm_3_state_h_in, lstm_3_state_c_in]
)
out = layers.Dropout(dropout)(out)
out = layers.Dense(vocab_size)(out)
out = layers.Activation('softmax')(out)
out_acc = layers.Lambda(lambda x:x, name = "out_acc")(out)
model = keras.models.Model(
inputs=[
input,
lstm_1_state_h_in, lstm_1_state_c_in,
lstm_2_state_h_in, lstm_2_state_c_in,
lstm_3_state_h_in, lstm_3_state_c_in
],
outputs=[
out_acc,
lstm_1_state_h_out, lstm_1_state_c_out,
lstm_2_state_h_out, lstm_2_state_c_out,
lstm_3_state_h_out, lstm_3_state_c_out
]
)
model.compile(
loss=[custom_loss, None, None, None, None, None, None],
loss_weights=[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
metrics={'out_acc':custom_acc},
optimizer=optimizer
)
###Output
_____no_output_____
###Markdown
Model training
###Code
dummy = np.zeros((len(inputs), hidden_size))
history = model.fit(
[inputs, dummy, dummy, dummy, dummy, dummy, dummy],
[targets, dummy, dummy, dummy, dummy, dummy, dummy],
batch_size=batch_size,
validation_split=0.1,
shuffle=True,
epochs=max_epochs,
callbacks=[es_callback]
)
model.save('lstm_conditional.h5')  # save the trained model once
###Output
Epoch 1/50
1140/1140 [==============================] - 158s 138ms/step - loss: 0.4111 - out_acc_loss: 0.4111 - out_acc_custom_acc: 0.8785 - val_loss: 0.3668 - val_out_acc_loss: 0.3668 - val_out_acc_custom_acc: 0.8844
Epoch 2/50
1140/1140 [==============================] - 156s 137ms/step - loss: 0.1537 - out_acc_loss: 0.1537 - out_acc_custom_acc: 0.9533 - val_loss: 0.3750 - val_out_acc_loss: 0.3750 - val_out_acc_custom_acc: 0.9099
Epoch 4/50
1140/1140 [==============================] - 156s 137ms/step - loss: 0.1356 - out_acc_loss: 0.1356 - out_acc_custom_acc: 0.9586 - val_loss: 0.4130 - val_out_acc_loss: 0.4130 - val_out_acc_custom_acc: 0.9105
###Markdown
Load trained model
###Code
model = keras.models.load_model(
'trained_models/lstm_conditional_elements.h5',
custom_objects={'custom_loss':custom_loss, 'custom_acc':custom_acc}
)
model.evaluate(
[inputs, dummy, dummy, dummy, dummy, dummy, dummy],
[targets, dummy, dummy, dummy, dummy, dummy, dummy],
batch_size=5, verbose=1
) # sanity check
###Output
151/25319 [..............................] - ETA: 20:55 - loss: 0.1233 - out_acc_loss: 0.1233 - out_acc_custom_acc: 0.9629
###Markdown
Generate level
###Code
def onehot_to_string(onehot):
ints = np.argmax(onehot, axis=-1)
chars = [ix_to_char[ix] for ix in ints]
string = "".join(chars)
char_array = []
if len(string.rstrip().split('\n')[-1]) < 17:
for line in string.rstrip().split('\n')[:-1]:
char_array.append(list(line))
else:
for line in string.rstrip().split('\n'):
char_array.append(list(line))
char_array = np.array(char_array).T
string = ""
for row in char_array:
string += "".join(row) + "\n"
return string
seed = inputs[0][:3 * 18 - 2].copy() # 3 cols * 18 tiles per col - newline char - condition char
seed[17+16] = 0
seed[17+16][12] = 1
seed[17*2+17] = 0
seed[17*2+17][12] = 1
print(seed.shape)
print(onehot_to_string(seed))
num_chunks_to_gen = 5  # just for testing purposes
num_tile_to_gen = 1 + num_chunks_to_gen * 16 * 17  # 1 newline char for the 3rd col; condition chars are supplied directly, not generated
condition_tape_question = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0] * num_chunks_to_gen
condition_tape_coin = [1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] * num_chunks_to_gen
condition_tape_enemy = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0] * num_chunks_to_gen
condition_tape_pipe = [0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0] * num_chunks_to_gen
condition_tape_cannon = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1] * num_chunks_to_gen
print(len(condition_tape_question))  # should be 16 * num_chunks_to_gen
# the loop below reads from `condition_tape`, which is otherwise undefined here;
# we assume the question-block tape is the intended conditioning signal (swap in another tape as needed)
condition_tape = condition_tape_question
for j in tqdm(range(1, 20+1)):
seed = inputs[0][:3 * 18 - 2].copy() # 3 cols * 18 tiles per col - newline char - condition char
seed[17+16] = 0
seed[17+16][10] = 1
seed[17*2+17] = 0
seed[17*2+17][10] = 1
gen = seed.copy()
# initialize all hidden and cell states to zeros
lstm1_h = np.zeros((1, hidden_size))
lstm1_c = np.zeros((1, hidden_size))
lstm2_h = np.zeros((1, hidden_size))
lstm2_c = np.zeros((1, hidden_size))
lstm3_h = np.zeros((1, hidden_size))
lstm3_c = np.zeros((1, hidden_size))
add_condition_char_next = False
col_ix_generating = -1
for i in tqdm(range(num_tile_to_gen), leave=False):
seed = np.expand_dims(seed, axis=0)
# predict probas and update hidden and cell states
probas, lstm1_h, lstm1_c, lstm2_h, lstm2_c, lstm3_h, lstm3_c = model.predict([
seed, lstm1_h, lstm1_c, lstm2_h, lstm2_c, lstm3_h, lstm3_c
])
# ========== generic prediction ==========
if not add_condition_char_next:
probas = probas[0][-1] # first batch, last timestep
idx = np.random.choice(np.arange(len(probas)), p=probas)
seed = np.zeros((1, vocab_size))
seed[:, idx] = 1.
gen = np.vstack([gen, seed])
if ix_to_char[idx] == '\n':
add_condition_char_next = True
col_ix_generating += 1
# ========== condition char are not generated, they are loaded from the condition tape ==========
else:
seed = np.zeros((1, vocab_size))
if condition_tape[col_ix_generating] == 0:
seed[:, char_to_ix['N']] = 1
elif condition_tape[col_ix_generating] == 1:
seed[:, char_to_ix['Y']] = 1
gen = np.vstack([gen, seed])
add_condition_char_next = False
with open(f'./lstm_conditional_generated_levels_txt/{j}.txt', 'w+') as txt_f:
txt_f.write(onehot_to_string(gen))
###Output
_____no_output_____ |
NYC House Prediction XGBoost/NYC_House_Prediction.ipynb | ###Markdown
Preprocessing
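###Markdown
The cell below calls a small `onehot_encode` helper whose definition is not shown in this excerpt. A minimal sketch of what such a helper typically looks like (one `pd.get_dummies` expansion per listed column, using the matching prefix); the actual helper used in the notebook may differ:
###Code
import pandas as pd

# Sketch only: one-hot encode each listed column with its prefix and drop the original column.
def onehot_encode(df, columns, prefixes):
    df = df.copy()
    for column, prefix in zip(columns, prefixes):
        dummies = pd.get_dummies(df[column], prefix=prefix)
        df = pd.concat([df, dummies], axis=1)
        df = df.drop(column, axis=1)
    return df
###Output
_____no_output_____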
###Code
def preprocess_inputs(df):
df = df.copy()
df.columns = df.columns.str.lower().str.replace(' ', '_')
df = df.rename(columns={"ease-ment": "easement"})
    df['sale_price'] = df['sale_price'].replace(' - ', np.nan).astype(float)
    # drop the rows where sale_price is missing
    df = df.dropna(axis=0).reset_index(drop=True)
    df = df.drop(["unnamed:_0", "block", "lot", "easement", "address", "apartment_number"], axis=1)
    # replace the ' - ' placeholder with NaN everywhere else
    df = df.replace(' - ', np.nan)
    # fill missing square-footage values with the column mean
    for column in ["land_square_feet", "gross_square_feet"]:
        df[column] = df[column].astype(float)
        df[column] = df[column].fillna(df[column].mean())
df['sale_date'] = pd.to_datetime(df['sale_date'])
df['year']= df['sale_date'].apply(lambda x: x.year)
df['month']= df['sale_date'].apply(lambda x: x.month)
df['day']= df['sale_date'].apply(lambda x: x.day)
df = df.drop('sale_date', axis=1)
#make numerical and categorical columns in string columns
for column in ["borough", "zip_code"]:
df[column] = df[column].astype(str)
#one hot encoding
df = onehot_encode(
df,
columns=[
'borough', 'zip_code', 'neighborhood', 'building_class_category',
'tax_class_at_present', 'building_class_at_present', 'building_class_at_time_of_sale'
],
prefixes=['bo', 'zc', 'ne', 'bc', 'tx', 'bp', 'bs']
)
#X and y
X = df.drop("sale_price", axis=1)
y = df['sale_price']
scaler = StandardScaler()
X = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)
return X, y
X, y = preprocess_inputs(data)
X
X.info()
y
y.unique()
y.isna().sum()
print("Percentage of missing value is :", (y.isna().mean())*100)
X.isna().sum()
###Output
_____no_output_____
###Markdown
Training the model using XGBoost
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.7, random_state=123)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=123)
dtrain = xgb.DMatrix(X_train, label=y_train)
dval = xgb.DMatrix(X_val, label=y_val)
dtest = xgb.DMatrix(X_test, label=y_test)
params = {'learning_rate': 0.001, 'max_depth': 6, 'lambda': 0.01}
model = xgb.train(params, dtrain, num_boost_round=10000, evals=[(dval, 'eval')], early_stopping_rounds=10)
y_true = np.array(y_test)
y_pred = model.predict(dtest)
print("Model R^2 Score: {:.4f}".format(r2_score(y_true, y_pred)))
###Output
_____no_output_____ |
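###Markdown
R$^2$ alone can be hard to interpret for strongly skewed sale prices; a small sketch (reusing `y_true` and `y_pred` from the cell above) adds absolute-error metrics alongside it:
###Code
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Complementary error metrics, in the same units as sale_price.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
print("RMSE: {:,.0f}  MAE: {:,.0f}".format(rmse, mae))
###Output
_____no_output_____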
NRPyPN/PN-p_t.ipynb | ###Markdown
$p_t$, the tangential component of the momentum vector, up to and including 3.5 post-Newtonian order This notebook constructs the tangential component of the momentum vector**Notebook Status:** Validated **Validation Notes:** All expressions in this notebook were transcribed twice by hand on separate occasions, and expressions were corrected as needed to ensure consistency with published work. Published work was cross-validated and typo(s) in published work were corrected. In addition, this tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented.** Author: Zach Etienne This notebook exists as the following Python module:1. [PN_p_t.py](../../edit/NRPyPN/PN_p_t.py) This notebook & corresponding Python module depend on the following NRPy+/NRPyPN Python modules:1. [indexedexp.py](../../edit/indexedexp.py): [**documentation+tutorial**](../Tutorial-Indexed_Expressions.ipynb)1. [NRPyPN_shortcuts.py](../../edit/NRPyPN/NRPyPN_shortcuts.py): [**documentation**](NRPyPN_shortcuts.ipynb) Table of Contents$$\label{toc}$$1. Part 1: [$p_t$](p_t), up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)1. Part 2: [Validation against second transcription and corresponding Python module](code_validation)1. Part 3: [Validation against trusted numerical values](code_validationv2) (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036))1. Part 4: [LaTeX PDF output](latex_pdf_output): $\LaTeX$ PDF Output Part 1: $p_t$, up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036) \[Back to [top](toc)\]$$\label{p_t}$$ As described in the [nonspinning Hamiltonian notebook](PN-Hamiltonian-Nonspinning.ipynb), the basic physical system assumes two point particles of mass $m_1$ and $m_2$ with corresponding momentum vectors $\mathbf{P}_1$ and $\mathbf{P}_2$, and displacement vectors $\mathbf{X}_1$ and $\mathbf{X}_2$ with respect to the center of mass. Here we also consider the spin vectors of each point mass $\mathbf{S}_1$ and $\mathbf{S}_2$, respectively.To reduce possibility of copying error, the equation for $p_t$ is taken directly from the arXiv LaTeX source code of Eq A2 in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036), and only mildly formatted to (1) improve presentation in Jupyter notebooks, (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks, and (3) to correct any errors. In particular, the boxed negative sign at 2.5PN order ($a_5$ below) was missing in the original equation. 
We will later show that this negative sign is necessary for consistency with other expressions in the same paper, as well as with the expression up to 3PN order in [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872):$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. -\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\\a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align} Let's divide and conquer, by tackling the coefficients one at a time:\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. 
-\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\end{align}
###Code
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexpNRPyPN as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
from NRPyPN_shortcuts import div # NRPyPN: shortcuts for e.g., vector operations
# Step 1: Construct terms a_2, a_3, and a_4, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
def p_t__a_2_thru_a_4(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2,a_3,a_4
a_2 = 2
a_3 = (-3*(4*q**2+3*q)*chi2z/(4*(q+1)**2) - 3*(3*q+4)*chi1z/(4*(q+1)**2))
a_4 = (-3*q**2*chi2x**2/(2*(q+1)**2)
+3*q**2*chi2y**2/(4*(q+1)**2)
+3*q**2*chi2z**2/(4*(q+1)**2)
+(+42*q**2 + 41*q + 42)/(16*(q+1)**2)
-3*chi1x**2/(2*(q+1)**2)
-3*q*chi1x*chi2x/(q+1)**2
+3*chi1y**2/(4*(q+1)**2)
+3*q*chi1y*chi2y/(2*(q+1)**2)
+3*chi1z**2/(4*(q+1)**2)
+3*q*chi1z*chi2z/(2*(q+1)**2))
# Second version, for validation purposes only.
def p_t__a_2_thru_a_4v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2v2,a_3v2,a_4v2
# Validated against HLNZ2017 version
a_2v2 = 2
# Validated against HLNZ2017 version
a_3v2 = (-(3*(4*q**2+3*q)*chi2z)/(4*(q+1)**2)-(3*(3*q+4)*chi1z)/(4*(q+1)**2))
# Validated against HLNZ2017 version
a_4v2 = -(3*q**2*chi2x**2)/(2*(q+1)**2)+(3*q**2*chi2y**2)/(4*(q+1)**2)+(3*q**2*chi2z**2)/(4*(q+1)**2)+(42*q**2+41*q+42)/(16*(q+1)**2)-(3*chi1x**2)/(2*(q+1)**2)-(3*q*chi1x*chi2x)/((q+1)**2)+(3*chi1y**2)/(4*(q+1)**2)+(3*q*chi1y*chi2y)/(2*(q+1)**2)+(3*chi1z**2)/(4*(q+1)**2)+(3*q*chi1z*chi2z)/(2*(q+1)**2)
###Output
_____no_output_____
###Markdown
Next, $a_5$ and $a_6$:\begin{align}a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\\end{align}
###Code
# Construct terms a_5 and a_6, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
# and a sign error was corrected in the a_5
# expression.
def p_t__a_5_thru_a_6(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5,a_6
a_5 = (SignFix*(13*q**3 + 60*q**2 + 116*q + 72)*chi1z/(16*(q+1)**4)
+(-72*q**4 - 116*q**3 - 60*q**2 - 13*q)*chi2z/(16*(q+1)**4))
a_6 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q +512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+( -64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*sp.pi**2*q**3 - 2636*q**3 + 326*sp.pi**2*q**2 - 6128*q**2 + 163*sp.pi**2*q-2636*q+480)
/(128*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_5_thru_a_6v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
pi = sp.pi
global a_5v2,a_6v2
# Validated (separately) against HLNZ2017, as well as row 3 of Table V in RHP2018
a_5v2 = SignFix*((13*q**3+60*q**2+116*q+72)*chi1z)/(16*(q+1)**4)+((-72*q**4-116*q**3-60*q**2-13*q)*chi2z)/(16*(q+1)**4)
# Validated (separately) against HLNZ2017 version
a_6v2 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q + 512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+(- 64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*pi**2*q**3 - 2636*q**3 + 326*pi**2*q**2 - 6128*q**2 + 163*pi**2*q - 2636*q + 480)
/(128*(q+1)**4))
###Output
_____no_output_____
###Markdown
Next we compare the expression for $a_5$ with Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), as additional validation that there at least is a sign inconsistency:To reduce possibility of copying error, the following equation for $a_5$ is taken directly from the arXiv LaTeX source code of Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), and only mildly formatted to (1) improve presentation in Jupyter notebooks and (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks.**Important: Note that [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872) adopts notation such that particle labels are interchanged: $1\leftrightarrow 2$, with respect to [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)**\begin{align}a_5 &= + \left( -\frac{1}{16}\,{\frac {q \left( 72\,{q}^{3}+116\,{q}^{2}+60\,q+13 \right) {\chi_{1z}}}{ \left( 1+q \right) ^{4}}}-\frac{1}{16}\,{\frac { \left( 13\,{q}^{3}+60\,{q}^{2}+116\,q+72 \right) {\chi_{2z}}}{ \left( 1+q \right) ^{4}}} \right)\\\end{align}
###Code
# Third version, for additional validation.
def p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5_HLNZ2017
a_5_HLNZ2017 = (-div(1,16)*(q*(72*q**3 + 116*q**2 + 60*q + 13)*chi1z/(1+q)**4)
-div(1,16)*( (13*q**3 + 60*q**2 +116*q + 72)*chi2z/(1+q)**4))
###Output
_____no_output_____
###Markdown
Finally, we validate that all 3 expressions for $a_5$ agree. (At the bottom, we confirm that all v2 expressions for $a_i$ match.)
###Code
from NRPyPN_shortcuts import m1,m2, chi1U,chi2U # Import needed input variables
p_t__a_5_thru_a_6( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
p_t__a_5_thru_a_6v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
# Again, the particle labels are interchanged in Healy, Lousto, Nakano, and Zlochower (2017):
p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi2U[0],chi2U[1],chi2U[2], chi1U[0],chi1U[1],chi1U[2])
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
if sp.simplify(a_5 - a_5v2) != 0: error("a_5v2")
if sp.simplify(a_5 - a_5_HLNZ2017) != 0: error("a_5_HLNZ2017")
###Output
_____no_output_____
###Markdown
Finally $a_7$:\begin{align}a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align}
###Code
# Construct term a_7, from Eq A2 of
# Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
def p_t__a_7(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7
a_7 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3 /(8*(q+1)**4)
+chi1x*(+15*(2*q+1)*q**2*chi2x*chi2z/(4*(q+1)**4)
+15*(1*q+2)*q *chi2x*chi1z/(4*(q+1)**4))
+chi1y*(+15*q**2*chi2y*chi1z/(4*(q+1)**4)
+15*q**2*chi2y*chi2z/(4*(q+1)**4))
+chi1z*(+15*q**2*(2*q+3)*chi2x**2/(4*(q+1)**4)
-15*q**2*( q+2)*chi2y**2/(4*(q+1)**4)
-15*q**2 *chi2z**2/(4*(q+1)**3)
-(103*q**5 + 145*q**4 - 27*q**3 + 252*q**2 + 670*q + 348)/(32*(q+1)**6))
-(+348*q**5 + 670*q**4 + 252*q**3 - 27*q**2 + 145*q + 103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4)
+15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4)
-15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3)
-5*(q+4)*chi1z**3/(8*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_7v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7v2
a_7v2 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3/(8*(q+1)**4)
+chi1x*(+(15*(2*q+1)*q**2*chi2x*chi2z)/(4*(q+1)**4)
+(15*( q+2)*q *chi2x*chi1z)/(4*(q+1)**4))
+chi1y*(+(15*q**2*chi2y*chi1z)/(4*(q+1)**4)
+(15*q**2*chi2y*chi2z)/(4*(q+1)**4))
+chi1z*(+(15*q**2*(2*q+3)*chi2x**2)/(4*(q+1)**4)
-(15*q**2*( q+2)*chi2y**2)/(4*(q+1)**4)
-(15*q**2* chi2z**2)/(4*(q+1)**3)
-(103*q**5+145*q**4-27*q**3+252*q**2+670*q+348)/(32*(q+1)**6))
-(348*q**5+670*q**4+252*q**3-27*q**2+145*q+103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4) + 15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4) - 15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3) - 5*(q+4)*chi1z**3/(8*(q+1)**4))
###Output
_____no_output_____
###Markdown
Putting it all together, recall that$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where $k/2$ is the post-Newtonian order.
###Code
# Finally, sum the expressions for a_k to construct p_t as prescribed:
# p_t = q/(sqrt(r)*(1+q)^2) (1 + \sum_{k=2}^7 (a_k/r^{k/2}))
def f_p_t(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2
a[3] = a_3
a[4] = a_4
p_t__a_5_thru_a_6(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5
a[6] = a_6
p_t__a_7( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7
global p_t
p_t = 1 # Term prior to the sum in parentheses
for k in range(8):
p_t += a[k]/r**div(k,2)
p_t *= q / (1+q)**2 * 1/r**div(1,2)
# Second version, for validation purposes only.
def f_p_tv2(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2v2
a[3] = a_3v2
a[4] = a_4v2
p_t__a_5_thru_a_6v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5v2
a[6] = a_6v2
p_t__a_7v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7v2
global p_tv2
p_tv2 = 1 # Term prior to the sum in parentheses
for k in range(8):
p_tv2 += a[k]/r**div(k,2)
p_tv2 *= q / (1+q)**2 * 1/r**div(1,2)
###Output
_____no_output_____
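###Markdown
As a quick hand check of the assembled expression (not part of the original notebook), consider the equal-mass, nonspinning case $q=1$, $r=12$: all spin terms vanish, leaving $a_2=2$, $a_4=\frac{125}{64}$, and $a_6=\frac{652\pi^2-10440}{2048}\approx-1.956$, so that $p_t \approx \frac{1/4}{\sqrt{12}}\left(1+\frac{2}{12}+\frac{125/64}{144}-\frac{1.956}{12^3}\right)\approx 0.08509$, which matches the first validation case in Part 3 below.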
###Markdown
Part 2: Validation against second transcription and corresponding Python module \[Back to [top](toc)\]$$\label{code_validation}$$ As a code validation check, we verify agreement between * the SymPy expressions transcribed from the cited published work on two separate occasions, and* the SymPy expressions generated in this notebook, and the corresponding Python module.
###Code
from NRPyPN_shortcuts import q, num_eval # Import needed input variable & numerical evaluation routine
f_p_t(m1,m2, chi1U,chi2U, q)
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
# Validation against second transcription of the expressions:
f_p_tv2(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - p_tv2) != 0: error("p_tv2")
# Validation against corresponding Python module:
import PN_p_t as pt
pt.f_p_t(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - pt.p_t) != 0: error("pt.p_t")
print("ALL TESTS PASS")
###Output
ALL TESTS PASS
###Markdown
Part 3: Validation against trusted numerical values (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)) \[Back to [top](toc)\]$$\label{code_validationv2}$$
###Code
# Useful function for comparing published & NRPyPN results
def compare_pub_NPN(desc, pub,NPN,NPN_with_a5_chi1z_sign_error):
print("##################################################")
print(" "+desc)
print("##################################################")
print(str(pub) + " <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)")
print(str(NPN) + " <- Result from NRPyPN")
relerror = abs(pub-NPN)/pub
resultstring = "Relative error between NRPyPN & published: "+str(relerror*100)+"%"
if relerror > 1e-3:
resultstring += " <--- NOT GOOD! (see explanation below)"
else:
resultstring += " <--- EXCELLENT AGREEMENT!"
print(resultstring+"\n")
print(str(NPN_with_a5_chi1z_sign_error) + " <- Result from NRPyPN, with chi1z sign error in a_5 expression.")
# 1. Let's consider the case:
# * Mass ratio q=1, chi1=chi2=(0,0,0), radial separation r=12
pub_result = 0.850941e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0850940927209620 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 1.0, # must be >= 1
nr = 12.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.)
compare_pub_NPN("Case: q=1, nonspinning, initial separation 12",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 2. Let's consider the case:
# * Mass ratio q=1.5, chi1= (0,0,-0.6); chi2=(0,0,0.6), radial separation r=10.8
pub_result = 0.868557e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0867002374951143
NPN_result = num_eval(p_t,
qmassratio = 1.5, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.6,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.6)
compare_pub_NPN("Case: q=1.5, chi1z=-0.6, chi2z=0.6, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 3. Let's consider the case:
# * Mass ratio q=4, chi1= (0,0,-0.8); chi2=(0,0,0.8), radial separation r=11
pub_result = 0.559207e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0557629777874552
NPN_result = num_eval(p_t,
qmassratio = 4.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.8,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.8)
compare_pub_NPN("Case: q=4.0, chi1z=-0.8, chi2z=0.8, initial separation 11.0",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
print("0.0558369 <- Second iteration value in pub result. Note that NRPyPN value is *closer* to this value.")
# 4. Let's consider the case:
# * Mass ratio q=2, chi1= (0,0,0); chi2=(−0.3535, 0.3535, 0.5), radial separation r=10.8
pub_result = 0.7935e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0793500403866190 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 2.0, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = -0.3535,
nchi2y = +0.3535,
nchi2z = +0.5)
compare_pub_NPN("Case: q=2.0, chi2x=-0.3535, chi2y=+0.3535, chi2z=+0.5, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 5. Let's consider the case:
# * Mass ratio q=8, chi1= (0, 0, 0.5); chi2=(0, 0, 0.5), radial separation r=11
pub_result = 0.345755e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0345584951081129 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 8.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.5,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.5)
compare_pub_NPN("""
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
""",pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
###Output
##################################################
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
##################################################
0.0345755 <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)
0.0345503689803291 <- Result from NRPyPN
Relative error between NRPyPN & published: 0.0726844721578464% <--- EXCELLENT AGREEMENT!
0.0345584951081129 <- Result from NRPyPN, with chi1z sign error in a_5 expression.
###Markdown
Part 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[PN-p_t.pdf](PN-p_t.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import os,sys # Standard Python modules for multiplatform OS-level functions
import cmdline_helperNRPyPN as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("PN-p_t",location_of_template_file=os.path.join(".."))
###Output
Created PN-p_t.tex, and compiled LaTeX file to PDF file PN-p_t.pdf
###Markdown
$p_t$, the tangential component of the momentum vector, up to and including 3.5 post-Newtonian order This notebook constructs the tangential component of the momentum vector**Notebook Status:** Validated **Validation Notes:** All expressions in this notebook were transcribed twice by hand on separate occasions, and expressions were corrected as needed to ensure consistency with published work. Published work was cross-validated and typo(s) in published work were corrected. In addition, this tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented.** Author: Zach Etienne This notebook exists as the following Python module:1. [PN_p_t.py](../../edit/NRPyPN/PN_p_t.py) This notebook & corresponding Python module depend on the following NRPy+/NRPyPN Python modules:1. [indexedexp.py](../../edit/indexedexp.py): [**documentation+tutorial**](../Tutorial-Indexed_Expressions.ipynb)1. [NRPyPN_shortcuts.py](../../edit/NRPyPN/NRPyPN_shortcuts.py): [**documentation**](NRPyPN_shortcuts.ipynb) Table of Contents$$\label{toc}$$1. Part 1: [$p_t$](p_t), up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)1. Part 2: [Validation against second transcription and corresponding Python module](code_validation)1. Part 3: [Validation against trusted numerical values](code_validationv2) (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036))1. Part 4: [LaTeX PDF output](latex_pdf_output): $\LaTeX$ PDF Output Part 1: $p_t$, up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036) \[Back to [top](toc)\]$$\label{p_t}$$ As described in the [nonspinning Hamiltonian notebook](PN-Hamiltonian-Nonspinning.ipynb), the basic physical system assumes two point particles of mass $m_1$ and $m_2$ with corresponding momentum vectors $\mathbf{P}_1$ and $\mathbf{P}_2$, and displacement vectors $\mathbf{X}_1$ and $\mathbf{X}_2$ with respect to the center of mass. Here we also consider the spin vectors of each point mass $\mathbf{S}_1$ and $\mathbf{S}_2$, respectively.To reduce possibility of copying error, the equation for $p_t$ is taken directly from the arXiv LaTeX source code of Eq A2 in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036), and only mildly formatted to (1) improve presentation in Jupyter notebooks, (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks, and (3) to correct any errors. In particular, the boxed negative sign at 2.5PN order ($a_5$ below) was missing in the original equation. 
We will later show that this negative sign is necessary for consistency with other expressions in the same paper, as well as with the expression up to 3PN order in [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872):$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. -\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\\a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align} Let's divide and conquer, by tackling the coefficients one at a time:\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. 
-\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\end{align}
###Code
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexpNRPyPN as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
from NRPyPN_shortcuts import div # NRPyPN: shortcuts for e.g., vector operations
# Step 1: Construct terms a_2, a_3, and a_4, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
def p_t__a_2_thru_a_4(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2,a_3,a_4
a_2 = 2
a_3 = (-3*(4*q**2+3*q)*chi2z/(4*(q+1)**2) - 3*(3*q+4)*chi1z/(4*(q+1)**2))
a_4 = (-3*q**2*chi2x**2/(2*(q+1)**2)
+3*q**2*chi2y**2/(4*(q+1)**2)
+3*q**2*chi2z**2/(4*(q+1)**2)
+(+42*q**2 + 41*q + 42)/(16*(q+1)**2)
-3*chi1x**2/(2*(q+1)**2)
-3*q*chi1x*chi2x/(q+1)**2
+3*chi1y**2/(4*(q+1)**2)
+3*q*chi1y*chi2y/(2*(q+1)**2)
+3*chi1z**2/(4*(q+1)**2)
+3*q*chi1z*chi2z/(2*(q+1)**2))
# Second version, for validation purposes only.
def p_t__a_2_thru_a_4v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2v2,a_3v2,a_4v2
# Validated against HLNZ2017 version
a_2v2 = 2
# Validated against HLNZ2017 version
a_3v2 = (-(3*(4*q**2+3*q)*chi2z)/(4*(q+1)**2)-(3*(3*q+4)*chi1z)/(4*(q+1)**2))
# Validated against HLNZ2017 version
a_4v2 = -(3*q**2*chi2x**2)/(2*(q+1)**2)+(3*q**2*chi2y**2)/(4*(q+1)**2)+(3*q**2*chi2z**2)/(4*(q+1)**2)+(42*q**2+41*q+42)/(16*(q+1)**2)-(3*chi1x**2)/(2*(q+1)**2)-(3*q*chi1x*chi2x)/((q+1)**2)+(3*chi1y**2)/(4*(q+1)**2)+(3*q*chi1y*chi2y)/(2*(q+1)**2)+(3*chi1z**2)/(4*(q+1)**2)+(3*q*chi1z*chi2z)/(2*(q+1)**2)
###Output
_____no_output_____
###Markdown
Next, $a_5$ and $a_6$:\begin{align}a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\\end{align}
###Code
# Construct terms a_5 and a_6, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
# and a sign error was corrected in the a_5
# expression.
def p_t__a_5_thru_a_6(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5,a_6
a_5 = (SignFix*(13*q**3 + 60*q**2 + 116*q + 72)*chi1z/(16*(q+1)**4)
+(-72*q**4 - 116*q**3 - 60*q**2 - 13*q)*chi2z/(16*(q+1)**4))
a_6 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q +512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+( -64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*sp.pi**2*q**3 - 2636*q**3 + 326*sp.pi**2*q**2 - 6128*q**2 + 163*sp.pi**2*q-2636*q+480)
/(128*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_5_thru_a_6v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
pi = sp.pi
global a_5v2,a_6v2
# Validated (separately) against HLNZ2017, as well as row 3 of Table V in RHP2018
a_5v2 = SignFix*((13*q**3+60*q**2+116*q+72)*chi1z)/(16*(q+1)**4)+((-72*q**4-116*q**3-60*q**2-13*q)*chi2z)/(16*(q+1)**4)
# Validated (separately) against HLNZ2017 version
a_6v2 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q + 512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+(- 64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*pi**2*q**3 - 2636*q**3 + 326*pi**2*q**2 - 6128*q**2 + 163*pi**2*q - 2636*q + 480)
/(128*(q+1)**4))
###Output
_____no_output_____
###Markdown
Next we compare the expression for $a_5$ with Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), as additional validation that there at least is a sign inconsistency:To reduce possibility of copying error, the following equation for $a_5$ is taken directly from the arXiv LaTeX source code of Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), and only mildly formatted to (1) improve presentation in Jupyter notebooks and (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks.**Important: Note that [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872) adopts notation such that particle labels are interchanged: $1\leftrightarrow 2$, with respect to [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)**\begin{align}a_5 &= + \left( -\frac{1}{16}\,{\frac {q \left( 72\,{q}^{3}+116\,{q}^{2}+60\,q+13 \right) {\chi_{1z}}}{ \left( 1+q \right) ^{4}}}-\frac{1}{16}\,{\frac { \left( 13\,{q}^{3}+60\,{q}^{2}+116\,q+72 \right) {\chi_{2z}}}{ \left( 1+q \right) ^{4}}} \right)\\\end{align}
###Code
# Third version, for additional validation.
def p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5_HLNZ2017
a_5_HLNZ2017 = (-div(1,16)*(q*(72*q**3 + 116*q**2 + 60*q + 13)*chi1z/(1+q)**4)
-div(1,16)*( (13*q**3 + 60*q**2 +116*q + 72)*chi2z/(1+q)**4))
###Output
_____no_output_____
###Markdown
Finally, we validate that all 3 expressions for $a_5$ agree. (At the bottom, we confirm that all v2 expressions for $a_i$ match.)
###Code
from NRPyPN_shortcuts import m1,m2, chi1U,chi2U # Import needed input variables
p_t__a_5_thru_a_6( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
p_t__a_5_thru_a_6v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
# Again, the particle labels are interchanged in Healy, Lousto, Nakano, and Zlochower (2017):
p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi2U[0],chi2U[1],chi2U[2], chi1U[0],chi1U[1],chi1U[2])
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
if sp.simplify(a_5 - a_5v2) != 0: error("a_5v2")
if sp.simplify(a_5 - a_5_HLNZ2017) != 0: error("a_5_HLNZ2017")
###Output
_____no_output_____
###Markdown
Finally $a_7$:\begin{align}a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align}
###Code
# Construct term a_7, from Eq A2 of
# Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
def p_t__a_7(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7
a_7 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3 /(8*(q+1)**4)
+chi1x*(+15*(2*q+1)*q**2*chi2x*chi2z/(4*(q+1)**4)
+15*(1*q+2)*q *chi2x*chi1z/(4*(q+1)**4))
+chi1y*(+15*q**2*chi2y*chi1z/(4*(q+1)**4)
+15*q**2*chi2y*chi2z/(4*(q+1)**4))
+chi1z*(+15*q**2*(2*q+3)*chi2x**2/(4*(q+1)**4)
-15*q**2*( q+2)*chi2y**2/(4*(q+1)**4)
-15*q**2 *chi2z**2/(4*(q+1)**3)
-(103*q**5 + 145*q**4 - 27*q**3 + 252*q**2 + 670*q + 348)/(32*(q+1)**6))
-(+348*q**5 + 670*q**4 + 252*q**3 - 27*q**2 + 145*q + 103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4)
+15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4)
-15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3)
-5*(q+4)*chi1z**3/(8*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_7v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7v2
a_7v2 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3/(8*(q+1)**4)
+chi1x*(+(15*(2*q+1)*q**2*chi2x*chi2z)/(4*(q+1)**4)
+(15*( q+2)*q *chi2x*chi1z)/(4*(q+1)**4))
+chi1y*(+(15*q**2*chi2y*chi1z)/(4*(q+1)**4)
+(15*q**2*chi2y*chi2z)/(4*(q+1)**4))
+chi1z*(+(15*q**2*(2*q+3)*chi2x**2)/(4*(q+1)**4)
-(15*q**2*( q+2)*chi2y**2)/(4*(q+1)**4)
-(15*q**2* chi2z**2)/(4*(q+1)**3)
-(103*q**5+145*q**4-27*q**3+252*q**2+670*q+348)/(32*(q+1)**6))
-(348*q**5+670*q**4+252*q**3-27*q**2+145*q+103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4) + 15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4) - 15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3) - 5*(q+4)*chi1z**3/(8*(q+1)**4))
###Output
_____no_output_____
###Markdown
Putting it all together, recall that$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where $k/2$ is the post-Newtonian order.
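To make the bookkeeping concrete, the following is a minimal, self-contained sketch (illustration only, not part of the NRPyPN module) that builds the same series with generic placeholder coefficients, showing which inverse power of $r$ accompanies each post-Newtonian order $k/2$:

```python
# Illustration only: the structure of the p_t series with placeholder coefficients.
# The actual a_k are the symbolic expressions constructed in the cells above.
import sympy as sp
q, r = sp.symbols('q r', positive=True)
a = sp.symbols('a2:8')   # generic placeholders a2, a3, ..., a7
p_t_generic = q/(1+q)**2/sp.sqrt(r)*(1 + sum(a[k-2]/r**sp.Rational(k, 2)
                                             for k in range(2, 8)))
print(sp.expand(p_t_generic))   # each a_k appears multiplied by r**(-(k+1)/2)
```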
###Code
# Finally, sum the expressions for a_k to construct p_t as prescribed:
# p_t = q/(sqrt(r)*(1+q)^2) (1 + \sum_{k=2}^7 (a_k/r^{k/2}))
def f_p_t(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2
a[3] = a_3
a[4] = a_4
p_t__a_5_thru_a_6(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5
a[6] = a_6
p_t__a_7( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7
global p_t
p_t = 1 # Term prior to the sum in parentheses
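    # Note: a[0] and a[1] were initialized to zero by ixp.zerorank1() above and are
    # never assigned, so only the k=2..7 terms contribute to the sum below.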
for k in range(8):
p_t += a[k]/r**div(k,2)
p_t *= q / (1+q)**2 * 1/r**div(1,2)
# Second version, for validation purposes only.
def f_p_tv2(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2v2
a[3] = a_3v2
a[4] = a_4v2
p_t__a_5_thru_a_6v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5v2
a[6] = a_6v2
p_t__a_7v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7v2
global p_tv2
p_tv2 = 1 # Term prior to the sum in parentheses
for k in range(8):
p_tv2 += a[k]/r**div(k,2)
p_tv2 *= q / (1+q)**2 * 1/r**div(1,2)
###Output
_____no_output_____
###Markdown
Part 2: Validation against second transcription and corresponding Python module \[Back to [top](toc)\]$$\label{code_validation}$$ As a code validation check, we verify agreement between * the SymPy expressions transcribed from the cited published work on two separate occasions, and* the SymPy expressions generated in this notebook, and the corresponding Python module.
###Code
from NRPyPN_shortcuts import q, num_eval # Import needed input variable & numerical evaluation routine
f_p_t(m1,m2, chi1U,chi2U, q)
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
# Validation against second transcription of the expressions:
f_p_tv2(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - p_tv2) != 0: error("p_tv2")
# Validation against corresponding Python module:
import PN_p_t as pt
pt.f_p_t(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - pt.p_t) != 0: error("pt.p_t")
print("ALL TESTS PASS")
###Output
ALL TESTS PASS
###Markdown
Part 3: Validation against trusted numerical values (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)) \[Back to [top](toc)\]$$\label{code_validationv2}$$
###Code
# Useful function for comparing published & NRPyPN results
def compare_pub_NPN(desc, pub,NPN,NPN_with_a5_chi1z_sign_error):
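    # Inputs:
    #   desc: description of the case being compared (printed as a banner)
    #   pub:  published value, from Table V of Ramos-Buades, Husa, and Pratten (2018)
    #   NPN:  the value computed by NRPyPN in this notebook
    #   NPN_with_a5_chi1z_sign_error: hard-coded NRPyPN value obtained *without* the
    #         a_5 sign fix, included to illustrate the effect of the sign error
    # Prints both values and their relative error; relative errors above 0.1% are flagged.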
print("##################################################")
print(" "+desc)
print("##################################################")
print(str(pub) + " <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)")
print(str(NPN) + " <- Result from NRPyPN")
relerror = abs(pub-NPN)/pub
resultstring = "Relative error between NRPyPN & published: "+str(relerror*100)+"%"
if relerror > 1e-3:
resultstring += " <--- NOT GOOD! (see explanation below)"
else:
resultstring += " <--- EXCELLENT AGREEMENT!"
print(resultstring+"\n")
print(str(NPN_with_a5_chi1z_sign_error) + " <- Result from NRPyPN, with chi1z sign error in a_5 expression.")
# 1. Let's consider the case:
# * Mass ratio q=1, chi1=chi2=(0,0,0), radial separation r=12
pub_result = 0.850941e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0850940927209620 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 1.0, # must be >= 1
nr = 12.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.)
compare_pub_NPN("Case: q=1, nonspinning, initial separation 12",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 2. Let's consider the case:
# * Mass ratio q=1.5, chi1= (0,0,-0.6); chi2=(0,0,0.6), radial separation r=10.8
pub_result = 0.868557e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0867002374951143
NPN_result = num_eval(p_t,
qmassratio = 1.5, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.6,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.6)
compare_pub_NPN("Case: q=1.5, chi1z=-0.6, chi2z=0.6, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 3. Let's consider the case:
# * Mass ratio q=4, chi1= (0,0,-0.8); chi2=(0,0,0.8), radial separation r=11
pub_result = 0.559207e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0557629777874552
NPN_result = num_eval(p_t,
qmassratio = 4.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.8,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.8)
compare_pub_NPN("Case: q=4.0, chi1z=-0.8, chi2z=0.8, initial separation 11.0",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
print("0.0558369 <- Second iteration value in pub result. Note that NRPyPN value is *closer* to this value.")
# 4. Let's consider the case:
# * Mass ratio q=2, chi1= (0,0,0); chi2=(−0.3535, 0.3535, 0.5), radial separation r=10.8
pub_result = 0.7935e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0793500403866190 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 2.0, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = -0.3535,
nchi2y = +0.3535,
nchi2z = +0.5)
compare_pub_NPN("Case: q=2.0, chi2x=-0.3535, chi2y=+0.3535, chi2z=+0.5, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 5. Let's consider the case:
# * Mass ratio q=8, chi1= (0, 0, 0.5); chi2=(0, 0, 0.5), radial separation r=11
pub_result = 0.345755e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0345584951081129 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 8.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.5,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.5)
compare_pub_NPN("""
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
""",pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
###Output
##################################################
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
##################################################
0.0345755 <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)
0.0345503689803291 <- Result from NRPyPN
Relative error between NRPyPN & published: 0.0726844721578464% <--- EXCELLENT AGREEMENT!
0.0345584951081129 <- Result from NRPyPN, with chi1z sign error in a_5 expression.
###Markdown
Part 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[PN-p_t.pdf](PN-p_t.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import os,sys # Standard Python modules for multiplatform OS-level functions
import cmdline_helperNRPyPN as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("PN-p_t",location_of_template_file=os.path.join(".."))
###Output
Created PN-p_t.tex, and compiled LaTeX file to PDF file PN-p_t.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); $p_t$, the tangential component of the momentum vector, up to and including 3.5 post-Newtonian order This notebook constructs the tangential component of the momentum vector**Notebook Status:** Validated **Validation Notes:** All expressions in this notebook were transcribed twice by hand on separate occasions, and expressions were corrected as needed to ensure consistency with published work. Published work was cross-validated and typo(s) in published work were corrected. In addition, this tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented.** Author: Zach Etienne This notebook exists as the following Python module:1. [PN_p_t.py](../../edit/NRPyPN/PN_p_t.py) This notebook & corresponding Python module depend on the following NRPy+/NRPyPN Python modules:1. [indexedexp.py](../../edit/indexedexp.py): [**documentation+tutorial**](../Tutorial-Indexed_Expressions.ipynb)1. [NRPyPN_shortcuts.py](../../edit/NRPyPN/NRPyPN_shortcuts.py): [**documentation**](NRPyPN_shortcuts.ipynb) Table of Contents$$\label{toc}$$1. Part 1: [$p_t$](p_t), up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)1. Part 2: [Validation against second transcription and corresponding Python module](code_validation)1. Part 3: [Validation against trusted numerical values](code_validationv2) (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036))1. Part 4: [LaTeX PDF output](latex_pdf_output): $\LaTeX$ PDF Output Part 1: $p_t$, up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036) \[Back to [top](toc)\]$$\label{p_t}$$ As described in the [nonspinning Hamiltonian notebook](PN-Hamiltonian-Nonspinning.ipynb), the basic physical system assumes two point particles of mass $m_1$ and $m_2$ with corresponding momentum vectors $\mathbf{P}_1$ and $\mathbf{P}_2$, and displacement vectors $\mathbf{X}_1$ and $\mathbf{X}_2$ with respect to the center of mass. Here we also consider the spin vectors of each point mass $\mathbf{S}_1$ and $\mathbf{S}_2$, respectively.To reduce possibility of copying error, the equation for $p_t$ is taken directly from the arXiv LaTeX source code of Eq A2 in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036), and only mildly formatted to (1) improve presentation in Jupyter notebooks, (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks, and (3) to correct any errors. In particular, the boxed negative sign at 2.5PN order ($a_5$ below) was missing in the original equation. 
We will later show that this negative sign is necessary for consistency with other expressions in the same paper, as well as with the expression up to 3PN order in [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872):$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. -\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\\a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align} Let's divide and conquer, by tackling the coefficients one at a time:\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. 
-\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\end{align}
###Code
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
from NRPyPN_shortcuts import div # NRPyPN: shortcuts for e.g., vector operations
# Step 1: Construct terms a_2, a_3, and a_4, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
def p_t__a_2_thru_a_4(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2,a_3,a_4
a_2 = 2
a_3 = (-3*(4*q**2+3*q)*chi2z/(4*(q+1)**2) - 3*(3*q+4)*chi1z/(4*(q+1)**2))
a_4 = (-3*q**2*chi2x**2/(2*(q+1)**2)
+3*q**2*chi2y**2/(4*(q+1)**2)
+3*q**2*chi2z**2/(4*(q+1)**2)
+(+42*q**2 + 41*q + 42)/(16*(q+1)**2)
-3*chi1x**2/(2*(q+1)**2)
-3*q*chi1x*chi2x/(q+1)**2
+3*chi1y**2/(4*(q+1)**2)
+3*q*chi1y*chi2y/(2*(q+1)**2)
+3*chi1z**2/(4*(q+1)**2)
+3*q*chi1z*chi2z/(2*(q+1)**2))
# Second version, for validation purposes only.
def p_t__a_2_thru_a_4v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2v2,a_3v2,a_4v2
# Validated against HLNZ2017 version
a_2v2 = 2
# Validated against HLNZ2017 version
a_3v2 = (-(3*(4*q**2+3*q)*chi2z)/(4*(q+1)**2)-(3*(3*q+4)*chi1z)/(4*(q+1)**2))
# Validated against HLNZ2017 version
a_4v2 = -(3*q**2*chi2x**2)/(2*(q+1)**2)+(3*q**2*chi2y**2)/(4*(q+1)**2)+(3*q**2*chi2z**2)/(4*(q+1)**2)+(42*q**2+41*q+42)/(16*(q+1)**2)-(3*chi1x**2)/(2*(q+1)**2)-(3*q*chi1x*chi2x)/((q+1)**2)+(3*chi1y**2)/(4*(q+1)**2)+(3*q*chi1y*chi2y)/(2*(q+1)**2)+(3*chi1z**2)/(4*(q+1)**2)+(3*q*chi1z*chi2z)/(2*(q+1)**2)
###Output
_____no_output_____
###Markdown
Next, $a_5$ and $a_6$:\begin{align}a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\\end{align}
###Code
# Construct terms a_5 and a_6, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
# and a sign error was corrected in the a_5
# expression.
def p_t__a_5_thru_a_6(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5,a_6
a_5 = (SignFix*(13*q**3 + 60*q**2 + 116*q + 72)*chi1z/(16*(q+1)**4)
+(-72*q**4 - 116*q**3 - 60*q**2 - 13*q)*chi2z/(16*(q+1)**4))
a_6 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q +512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+( -64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*sp.pi**2*q**3 - 2636*q**3 + 326*sp.pi**2*q**2 - 6128*q**2 + 163*sp.pi**2*q-2636*q+480)
/(128*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_5_thru_a_6v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
pi = sp.pi
global a_5v2,a_6v2
# Validated (separately) against HLNZ2017, as well as row 3 of Table V in RHP2018
a_5v2 = SignFix*((13*q**3+60*q**2+116*q+72)*chi1z)/(16*(q+1)**4)+((-72*q**4-116*q**3-60*q**2-13*q)*chi2z)/(16*(q+1)**4)
# Validated (separately) against HLNZ2017 version
a_6v2 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q + 512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+(- 64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*pi**2*q**3 - 2636*q**3 + 326*pi**2*q**2 - 6128*q**2 + 163*pi**2*q - 2636*q + 480)
/(128*(q+1)**4))
###Output
_____no_output_____
###Markdown
Next we compare the expression for $a_5$ with Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), as additional validation that there at least is a sign inconsistency:To reduce possibility of copying error, the following equation for $a_5$ is taken directly from the arXiv LaTeX source code of Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), and only mildly formatted to (1) improve presentation in Jupyter notebooks and (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks.**Important: Note that [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872) adopts notation such that particle labels are interchanged: $1\leftrightarrow 2$, with respect to [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)**\begin{align}a_5 &= + \left( -\frac{1}{16}\,{\frac {q \left( 72\,{q}^{3}+116\,{q}^{2}+60\,q+13 \right) {\chi_{1z}}}{ \left( 1+q \right) ^{4}}}-\frac{1}{16}\,{\frac { \left( 13\,{q}^{3}+60\,{q}^{2}+116\,q+72 \right) {\chi_{2z}}}{ \left( 1+q \right) ^{4}}} \right)\\\end{align}
###Code
# Third version, for additional validation.
def p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5_HLNZ2017
a_5_HLNZ2017 = (-div(1,16)*(q*(72*q**3 + 116*q**2 + 60*q + 13)*chi1z/(1+q)**4)
-div(1,16)*( (13*q**3 + 60*q**2 +116*q + 72)*chi2z/(1+q)**4))
###Output
_____no_output_____
###Markdown
Finally, we validate that all 3 expressions for $a_5$ agree. (At the bottom, we confirm that all v2 expressions for $a_i$ match.)
###Code
from NRPyPN_shortcuts import m1,m2, chi1U,chi2U # Import needed input variables
p_t__a_5_thru_a_6( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
p_t__a_5_thru_a_6v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
# Again, the particle labels are interchanged in Healy, Lousto, Nakano, and Zlochower (2017):
p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi2U[0],chi2U[1],chi2U[2], chi1U[0],chi1U[1],chi1U[2])
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
if sp.simplify(a_5 - a_5v2) != 0: error("a_5v2")
if sp.simplify(a_5 - a_5_HLNZ2017) != 0: error("a_5_HLNZ2017")
###Output
_____no_output_____
###Markdown
Finally $a_7$:\begin{align}a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align}
###Code
# Construct term a_7, from Eq A2 of
# Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
def p_t__a_7(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7
a_7 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3 /(8*(q+1)**4)
+chi1x*(+15*(2*q+1)*q**2*chi2x*chi2z/(4*(q+1)**4)
+15*(1*q+2)*q *chi2x*chi1z/(4*(q+1)**4))
+chi1y*(+15*q**2*chi2y*chi1z/(4*(q+1)**4)
+15*q**2*chi2y*chi2z/(4*(q+1)**4))
+chi1z*(+15*q**2*(2*q+3)*chi2x**2/(4*(q+1)**4)
-15*q**2*( q+2)*chi2y**2/(4*(q+1)**4)
-15*q**2 *chi2z**2/(4*(q+1)**3)
-(103*q**5 + 145*q**4 - 27*q**3 + 252*q**2 + 670*q + 348)/(32*(q+1)**6))
-(+348*q**5 + 670*q**4 + 252*q**3 - 27*q**2 + 145*q + 103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4)
+15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4)
-15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3)
-5*(q+4)*chi1z**3/(8*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_7v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7v2
a_7v2 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3/(8*(q+1)**4)
+chi1x*(+(15*(2*q+1)*q**2*chi2x*chi2z)/(4*(q+1)**4)
+(15*( q+2)*q *chi2x*chi1z)/(4*(q+1)**4))
+chi1y*(+(15*q**2*chi2y*chi1z)/(4*(q+1)**4)
+(15*q**2*chi2y*chi2z)/(4*(q+1)**4))
+chi1z*(+(15*q**2*(2*q+3)*chi2x**2)/(4*(q+1)**4)
-(15*q**2*( q+2)*chi2y**2)/(4*(q+1)**4)
-(15*q**2* chi2z**2)/(4*(q+1)**3)
-(103*q**5+145*q**4-27*q**3+252*q**2+670*q+348)/(32*(q+1)**6))
-(348*q**5+670*q**4+252*q**3-27*q**2+145*q+103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4) + 15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4) - 15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3) - 5*(q+4)*chi1z**3/(8*(q+1)**4))
###Output
_____no_output_____
###Markdown
Putting it all together, recall that$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where $k/2$ is the post-Newtonian order.
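As a brief aside (a hedged sketch, not part of the NRPyPN module), the prefactor $q/(1+q)^2$ is just the symmetric mass ratio $\nu = m_1 m_2/(m_1+m_2)^2$ rewritten in terms of $q=m_2/m_1$, so the leading (Newtonian) term of the series is $\nu/r^{1/2}$. A quick SymPy check of that identity:

```python
# Illustration only: verify that q/(1+q)^2 equals the symmetric mass ratio
# nu = m1*m2/(m1+m2)^2 when q = m2/m1. Underscored names are used so the
# notebook's own m1, m2, q symbols are left untouched.
import sympy as sp
m1_, q_ = sp.symbols('m1_ q_', positive=True)
m2_ = q_*m1_                       # q = m2/m1
nu_ = m1_*m2_/(m1_ + m2_)**2       # symmetric mass ratio
print(sp.simplify(q_/(1 + q_)**2 - nu_))   # prints 0
```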
###Code
# Finally, sum the expressions for a_k to construct p_t as prescribed:
# p_t = q/(sqrt(r)*(1+q)^2) (1 + \sum_{k=2}^7 (a_k/r^{k/2}))
def f_p_t(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2
a[3] = a_3
a[4] = a_4
p_t__a_5_thru_a_6(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5
a[6] = a_6
p_t__a_7( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7
global p_t
p_t = 1 # Term prior to the sum in parentheses
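    # Note: a[0] and a[1] were initialized to zero by ixp.zerorank1() above and are
    # never assigned, so only the k=2..7 terms contribute to the sum below.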
for k in range(8):
p_t += a[k]/r**div(k,2)
p_t *= q / (1+q)**2 * 1/r**div(1,2)
# Second version, for validation purposes only.
def f_p_tv2(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2v2
a[3] = a_3v2
a[4] = a_4v2
p_t__a_5_thru_a_6v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5v2
a[6] = a_6v2
p_t__a_7v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7v2
global p_tv2
p_tv2 = 1 # Term prior to the sum in parentheses
for k in range(8):
p_tv2 += a[k]/r**div(k,2)
p_tv2 *= q / (1+q)**2 * 1/r**div(1,2)
###Output
_____no_output_____
###Markdown
Part 2: Validation against second transcription and corresponding Python module \[Back to [top](toc)\]$$\label{code_validation}$$ As a code validation check, we verify agreement between * the SymPy expressions transcribed from the cited published work on two separate occasions, and* the SymPy expressions generated in this notebook, and the corresponding Python module.
###Code
from NRPyPN_shortcuts import q, num_eval # Import needed input variable & numerical evaluation routine
f_p_t(m1,m2, chi1U,chi2U, q)
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
# Validation against second transcription of the expressions:
f_p_tv2(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - p_tv2) != 0: error("p_tv2")
# Validation against corresponding Python module:
import PN_p_t as pt
pt.f_p_t(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - pt.p_t) != 0: error("pt.p_t")
print("ALL TESTS PASS")
###Output
ALL TESTS PASS
###Markdown
Part 3: Validation against trusted numerical values (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)) \[Back to [top](toc)\]$$\label{code_validationv2}$$
###Code
# Useful function for comparing published & NRPyPN results
def compare_pub_NPN(desc, pub,NPN,NPN_with_a5_chi1z_sign_error):
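    # Inputs:
    #   desc: description of the case being compared (printed as a banner)
    #   pub:  published value, from Table V of Ramos-Buades, Husa, and Pratten (2018)
    #   NPN:  the value computed by NRPyPN in this notebook
    #   NPN_with_a5_chi1z_sign_error: hard-coded NRPyPN value obtained *without* the
    #         a_5 sign fix, included to illustrate the effect of the sign error
    # Prints both values and their relative error; relative errors above 0.1% are flagged.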
print("##################################################")
print(" "+desc)
print("##################################################")
print(str(pub) + " <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)")
print(str(NPN) + " <- Result from NRPyPN")
relerror = abs(pub-NPN)/pub
resultstring = "Relative error between NRPyPN & published: "+str(relerror*100)+"%"
if relerror > 1e-3:
resultstring += " <--- NOT GOOD! (see explanation below)"
else:
resultstring += " <--- EXCELLENT AGREEMENT!"
print(resultstring+"\n")
print(str(NPN_with_a5_chi1z_sign_error) + " <- Result from NRPyPN, with chi1z sign error in a_5 expression.")
# 1. Let's consider the case:
# * Mass ratio q=1, chi1=chi2=(0,0,0), radial separation r=12
pub_result = 0.850941e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0850940927209620 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 1.0, # must be >= 1
nr = 12.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.)
compare_pub_NPN("Case: q=1, nonspinning, initial separation 12",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 2. Let's consider the case:
# * Mass ratio q=1.5, chi1= (0,0,-0.6); chi2=(0,0,0.6), radial separation r=10.8
pub_result = 0.868557e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0867002374951143
NPN_result = num_eval(p_t,
qmassratio = 1.5, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.6,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.6)
compare_pub_NPN("Case: q=1.5, chi1z=-0.6, chi2z=0.6, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 3. Let's consider the case:
# * Mass ratio q=4, chi1= (0,0,-0.8); chi2=(0,0,0.8), radial separation r=11
pub_result = 0.559207e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0557629777874552
NPN_result = num_eval(p_t,
qmassratio = 4.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.8,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.8)
compare_pub_NPN("Case: q=4.0, chi1z=-0.8, chi2z=0.8, initial separation 11.0",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
print("0.0558369 <- Second iteration value in pub result. Note that NRPyPN value is *closer* to this value.")
# 4. Let's consider the case:
# * Mass ratio q=2, chi1= (0,0,0); chi2=(−0.3535, 0.3535, 0.5), radial separation r=10.8
pub_result = 0.7935e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0793500403866190 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 2.0, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = -0.3535,
nchi2y = +0.3535,
nchi2z = +0.5)
compare_pub_NPN("Case: q=2.0, chi2x=-0.3535, chi2y=+0.3535, chi2z=+0.5, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 5. Let's consider the case:
# * Mass ratio q=8, chi1= (0, 0, 0.5); chi2=(0, 0, 0.5), radial separation r=11
pub_result = 0.345755e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0345584951081129 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 8.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.5,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.5)
compare_pub_NPN("""
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
""",pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
###Output
##################################################
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
##################################################
0.0345755 <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)
0.0345503689803291 <- Result from NRPyPN
Relative error between NRPyPN & published: 0.0726844721578464% <--- EXCELLENT AGREEMENT!
0.0345584951081129 <- Result from NRPyPN, with chi1z sign error in a_5 expression.
###Markdown
Part 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[PN-p_t.pdf](PN-p_t.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import os,sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("PN-p_t",location_of_template_file=os.path.join(".."))
###Output
Created PN-p_t.tex, and compiled LaTeX file to PDF file PN-p_t.pdf
###Markdown
window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'UA-59152712-8'); $p_t$, the tangential component of the momentum vector, up to and including 3.5 post-Newtonian order This notebook constructs the tangential component of the momentum vector**Notebook Status:** Validated **Validation Notes:** All expressions in this notebook were transcribed twice by hand on separate occasions, and expressions were corrected as needed to ensure consistency with published work. Published work was cross-validated and typo(s) in published work were corrected. In addition, this tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](code_validation). **Additional validation tests may have been performed, but are as yet, undocumented.** Author: Zach Etienne This notebook exists as the following Python module:1. [PN_p_t.py](../../edit/NRPyPN/PN_p_t.py) This notebook & corresponding Python module depend on the following NRPy+/NRPyPN Python modules:1. [indexedexp.py](../../edit/indexedexp.py): [**documentation+tutorial**](../Tutorial-Indexed_Expressions.ipynb)1. [NRPyPN_shortcuts.py](../../edit/NRPyPN/NRPyPN_shortcuts.py): [**documentation**](NRPyPN_shortcuts.ipynb) Table of Contents$$\label{toc}$$1. Part 1: [$p_t$](p_t), up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)1. Part 2: [Validation against second transcription and corresponding Python module](code_validation)1. Part 3: [Validation against trusted numerical values](code_validationv2) (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036))1. Part 4: [LaTeX PDF output](latex_pdf_output): $\LaTeX$ PDF Output Part 1: $p_t$, up to and including 3.5PN order, as derived in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036) \[Back to [top](toc)\]$$\label{p_t}$$ As described in the [nonspinning Hamiltonian notebook](PN-Hamiltonian-Nonspinning.ipynb), the basic physical system assumes two point particles of mass $m_1$ and $m_2$ with corresponding momentum vectors $\mathbf{P}_1$ and $\mathbf{P}_2$, and displacement vectors $\mathbf{X}_1$ and $\mathbf{X}_2$ with respect to the center of mass. Here we also consider the spin vectors of each point mass $\mathbf{S}_1$ and $\mathbf{S}_2$, respectively.To reduce possibility of copying error, the equation for $p_t$ is taken directly from the arXiv LaTeX source code of Eq A2 in [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036), and only mildly formatted to (1) improve presentation in Jupyter notebooks, (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks, and (3) to correct any errors. In particular, the boxed negative sign at 2.5PN order ($a_5$ below) was missing in the original equation. 
We will later show that this negative sign is necessary for consistency with other expressions in the same paper, as well as with the expression up to 3PN order in [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872):$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. -\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\\a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align} Let's divide and conquer, by tackling the coefficients one at a time:\begin{align}a_2 &= 2\\a_3 &= \left[-\frac{3 \left(4 q^2+3 q\right) \chi _{2z}}{4 (q+1)^2}-\frac{3 (3 q+4) \chi _{1z}}{4 (q+1)^2}\right]\\a_4 &= \left[ -\frac{3 q^2 \chi _{2x}^2}{2 (q+1)^2} +\frac{3 q^2 \chi _{2y}^2}{4 (q+1)^2}+\frac{3 q^2 \chi _{2z}^2}{4 (q+1)^2} +\frac{42 q^2+41 q+42}{16 (q+1)^2}-\frac{3 \chi _{1x}^2}{2 (q+1)^2} \right.\\&\quad\quad \left. 
-\frac{3 q \chi _{1x} \chi _{2x}}{(q+1)^2}+\frac{3 \chi _{1y}^2}{4 (q+1)^2}+\frac{3 q \chi _{1y}\chi _{2y}}{2 (q+1)^2}+\frac{3 \chi _{1z}^2}{4 (q+1)^2}+\frac{3 q \chi _{1z} \chi _{2z}}{2 (q+1)^2}\right]\end{align}
###Code
# Step 0: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
from NRPyPN_shortcuts import div # NRPyPN: shortcuts for e.g., vector operations
# Step 1: Construct terms a_2, a_3, and a_4, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
def p_t__a_2_thru_a_4(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2,a_3,a_4
a_2 = 2
a_3 = (-3*(4*q**2+3*q)*chi2z/(4*(q+1)**2) - 3*(3*q+4)*chi1z/(4*(q+1)**2))
a_4 = (-3*q**2*chi2x**2/(2*(q+1)**2)
+3*q**2*chi2y**2/(4*(q+1)**2)
+3*q**2*chi2z**2/(4*(q+1)**2)
+(+42*q**2 + 41*q + 42)/(16*(q+1)**2)
-3*chi1x**2/(2*(q+1)**2)
-3*q*chi1x*chi2x/(q+1)**2
+3*chi1y**2/(4*(q+1)**2)
+3*q*chi1y*chi2y/(2*(q+1)**2)
+3*chi1z**2/(4*(q+1)**2)
+3*q*chi1z*chi2z/(2*(q+1)**2))
# Second version, for validation purposes only.
def p_t__a_2_thru_a_4v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_2v2,a_3v2,a_4v2
# Validated against HLNZ2017 version
a_2v2 = 2
# Validated against HLNZ2017 version
a_3v2 = (-(3*(4*q**2+3*q)*chi2z)/(4*(q+1)**2)-(3*(3*q+4)*chi1z)/(4*(q+1)**2))
# Validated against HLNZ2017 version
a_4v2 = -(3*q**2*chi2x**2)/(2*(q+1)**2)+(3*q**2*chi2y**2)/(4*(q+1)**2)+(3*q**2*chi2z**2)/(4*(q+1)**2)+(42*q**2+41*q+42)/(16*(q+1)**2)-(3*chi1x**2)/(2*(q+1)**2)-(3*q*chi1x*chi2x)/((q+1)**2)+(3*chi1y**2)/(4*(q+1)**2)+(3*q*chi1y*chi2y)/(2*(q+1)**2)+(3*chi1z**2)/(4*(q+1)**2)+(3*q*chi1z*chi2z)/(2*(q+1)**2)
###Output
_____no_output_____
###Markdown
Next, $a_5$ and $a_6$:\begin{align}a_5 &= \left[ \boxed{-1} \frac{\left(13 q^3+60 q^2+116 q+72\right) \chi _{1z}}{16 (q+1)^4}+\frac{\left(-72 q^4-116 q^3-60 q^2-13 q\right) \chi _{2z}}{16 (q+1)^4} \right]\\a_6 &= \left[\frac{\left(472 q^2-640\right) \chi _{1x}^2}{128 (q+1)^4} + \frac{\left(-512 q^2-640 q-64\right) \chi _{1y}^2}{128 (q+1)^4}+\frac{\left(-108 q^2+224 q+512\right) \chi _{1z}^2}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(472 q^2-640 q^4\right) \chi _{2x}^2}{128 (q+1)^4}+\frac{\left(192 q^3+560 q^2+192 q\right) \chi _{1x} \chi _{2x}}{128 (q+1)^4} +\frac{\left(-864 q^3-1856 q^2-864 q\right) \chi _{1y} \chi _{2y}}{128 (q+1)^4}\right.\\&\quad\quad \left.+\frac{\left(480 q^3+1064 q^2+480 q\right) \chi _{1z} \chi _{2z}}{128 (q+1)^4}+\frac{\left(-64 q^4-640 q^3-512 q^2\right) \chi _{2y}^2}{128 (q+1)^4}+\frac{\left(512 q^4+224 q^3-108 q^2\right) \chi _{2z}^2}{128 (q+1)^4} \right. \nonumber\\&\quad\quad\left.+\frac{480 q^4+163 \pi ^2 q^3-2636 q^3+326 \pi ^2 q^2-6128 q^2+163 \pi ^2 q-2636 q+480}{128 (q+1)^4} \right]\\\end{align}
###Code
# Construct terms a_5 and a_6, from
# Eq A2 of Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
# These terms have been independently validated
# against the same terms in Eq 7 of
# Healy, Lousto, Nakano, and Zlochower (2017)
# https://arxiv.org/abs/1702.00872
# and a sign error was corrected in the a_5
# expression.
def p_t__a_5_thru_a_6(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5,a_6
a_5 = (SignFix*(13*q**3 + 60*q**2 + 116*q + 72)*chi1z/(16*(q+1)**4)
+(-72*q**4 - 116*q**3 - 60*q**2 - 13*q)*chi2z/(16*(q+1)**4))
a_6 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q +512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+( -64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*sp.pi**2*q**3 - 2636*q**3 + 326*sp.pi**2*q**2 - 6128*q**2 + 163*sp.pi**2*q-2636*q+480)
/(128*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_5_thru_a_6v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z, FixSignError=True):
SignFix = sp.sympify(-1)
if FixSignError == False:
SignFix = sp.sympify(+1)
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
pi = sp.pi
global a_5v2,a_6v2
# Validated (separately) against HLNZ2017, as well as row 3 of Table V in RHP2018
a_5v2 = SignFix*((13*q**3+60*q**2+116*q+72)*chi1z)/(16*(q+1)**4)+((-72*q**4-116*q**3-60*q**2-13*q)*chi2z)/(16*(q+1)**4)
# Validated (separately) against HLNZ2017 version
a_6v2 = (+(+472*q**2 - 640)*chi1x**2/(128*(q+1)**4)
+(-512*q**2 - 640*q - 64)*chi1y**2/(128*(q+1)**4)
+(-108*q**2 + 224*q + 512)*chi1z**2/(128*(q+1)**4)
+(+472*q**2 - 640*q**4)*chi2x**2/(128*(q+1)**4)
+(+192*q**3 + 560*q**2 + 192*q)*chi1x*chi2x/(128*(q+1)**4)
+(-864*q**3 -1856*q**2 - 864*q)*chi1y*chi2y/(128*(q+1)**4)
+(+480*q**3 +1064*q**2 + 480*q)*chi1z*chi2z/(128*(q+1)**4)
+(- 64*q**4 - 640*q**3 - 512*q**2)*chi2y**2/(128*(q+1)**4)
+(+512*q**4 + 224*q**3 - 108*q**2)*chi2z**2/(128*(q+1)**4)
+(+480*q**4 + 163*pi**2*q**3 - 2636*q**3 + 326*pi**2*q**2 - 6128*q**2 + 163*pi**2*q - 2636*q + 480)
/(128*(q+1)**4))
###Output
_____no_output_____
###Markdown
Next we compare the expression for $a_5$ with Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), as additional validation that there at least is a sign inconsistency:To reduce possibility of copying error, the following equation for $a_5$ is taken directly from the arXiv LaTeX source code of Eq. 7 of [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872), and only mildly formatted to (1) improve presentation in Jupyter notebooks and (2) to ensure some degree of consistency in notation across different terms in other NRPyPN notebooks.**Important: Note that [Healy, Lousto, Nakano, and Zlochower (2017)](https://arxiv.org/abs/1702.00872) adopts notation such that particle labels are interchanged: $1\leftrightarrow 2$, with respect to [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)**\begin{align}a_5 &= + \left( -\frac{1}{16}\,{\frac {q \left( 72\,{q}^{3}+116\,{q}^{2}+60\,q+13 \right) {\chi_{1z}}}{ \left( 1+q \right) ^{4}}}-\frac{1}{16}\,{\frac { \left( 13\,{q}^{3}+60\,{q}^{2}+116\,q+72 \right) {\chi_{2z}}}{ \left( 1+q \right) ^{4}}} \right)\\\end{align}
###Code
# Third version, for additional validation.
def p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_5_HLNZ2017
a_5_HLNZ2017 = (-div(1,16)*(q*(72*q**3 + 116*q**2 + 60*q + 13)*chi1z/(1+q)**4)
-div(1,16)*( (13*q**3 + 60*q**2 +116*q + 72)*chi2z/(1+q)**4))
###Output
_____no_output_____
###Markdown
Finally, we validate that all 3 expressions for $a_5$ agree. (At the bottom, we confirm that all v2 expressions for $a_i$ match.)
###Code
from NRPyPN_shortcuts import m1,m2, chi1U,chi2U # Import needed input variables
p_t__a_5_thru_a_6( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
p_t__a_5_thru_a_6v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
# Again, the particle labels are interchanged in Healy, Lousto, Nakano, and Zlochower (2017):
p_t__a_5_thru_a_6_HLNZ2017(m1,m2, chi2U[0],chi2U[1],chi2U[2], chi1U[0],chi1U[1],chi1U[2])
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
if sp.simplify(a_5 - a_5v2) != 0: error("a_5v2")
if sp.simplify(a_5 - a_5_HLNZ2017) != 0: error("a_5_HLNZ2017")
###Output
_____no_output_____
###Markdown
Finally $a_7$:\begin{align}a_7 &= \left[ \frac{5 (4 q+1) q^3 \chi _{2 x}^2 \chi _{2 z}}{2 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 y}^2 \chi _{2 z}}{8 (q+1)^4}-\frac{5 (4 q+1) q^3 \chi _{2 z}^3}{8 (q+1)^4}+\chi _{1x} \left(\frac{15 (2 q+1) q^2 \chi _{2 x} \chi _{2 z}}{4 (q+1)^4}+\frac{15 (q+2) q \chi _{2 x} \chi _{1z}}{4 (q+1)^4}\right)\right. \nonumber\\&\quad\quad \left.+\chi _{1y} \left(\frac{15 q^2 \chi _{2 y} \chi _{1z}}{4 (q+1)^4}+\frac{15 q^2 \chi _{2 y} \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1z} \left(\frac{15 q^2 (2 q+3) \chi _{2 x}^2}{4 (q+1)^4}-\frac{15 q^2 (q+2) \chi _{2 y}^2}{4 (q+1)^4}-\frac{15 q^2 \chi _{2 z}^2}{4 (q+1)^3} \right.\right. \nonumber\\&\quad\quad \left.\left. -\frac{103 q^5+145 q^4-27 q^3+252 q^2+670 q+348}{32 (q+1)^6}\right)-\frac{\left(348 q^5+670 q^4+252 q^3-27 q^2+145 q+103\right) q \chi _{2 z}}{32 (q+1)^6}\right.\nonumber\\&\quad\quad \left.+\chi _{1x}^2 \left(\frac{5 (q+4) \chi _{1z}}{2 (q+1)^4}+\frac{15 q (3 q+2) \chi _{2 z}}{4 (q+1)^4}\right)+\chi _{1y}^2 \left(-\frac{5 (q+4) \chi _{1z}}{8 (q+1)^4}-\frac{15 q (2 q+1) \chi _{2 z}}{4 (q+1)^4}\right)-\frac{15 q \chi _{1z}^2 \chi _{2 z}}{4 (q+1)^3}-\frac{5 (q+4) \chi _{1z}^3}{8 (q+1)^4} \right]\end{align}
###Code
# Construct term a_7, from Eq A2 of
# Ramos-Buades, Husa, and Pratten (2018)
# https://arxiv.org/abs/1810.00036
def p_t__a_7(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7
a_7 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3 /(8*(q+1)**4)
+chi1x*(+15*(2*q+1)*q**2*chi2x*chi2z/(4*(q+1)**4)
+15*(1*q+2)*q *chi2x*chi1z/(4*(q+1)**4))
+chi1y*(+15*q**2*chi2y*chi1z/(4*(q+1)**4)
+15*q**2*chi2y*chi2z/(4*(q+1)**4))
+chi1z*(+15*q**2*(2*q+3)*chi2x**2/(4*(q+1)**4)
-15*q**2*( q+2)*chi2y**2/(4*(q+1)**4)
-15*q**2 *chi2z**2/(4*(q+1)**3)
-(103*q**5 + 145*q**4 - 27*q**3 + 252*q**2 + 670*q + 348)/(32*(q+1)**6))
-(+348*q**5 + 670*q**4 + 252*q**3 - 27*q**2 + 145*q + 103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4)
+15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4)
-15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3)
-5*(q+4)*chi1z**3/(8*(q+1)**4))
# Second version, for validation purposes only.
def p_t__a_7v2(m1,m2, chi1x,chi1y,chi1z, chi2x,chi2y,chi2z):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
global a_7v2
a_7v2 = (+5*(4*q+1)*q**3*chi2x**2*chi2z/(2*(q+1)**4)
-5*(4*q+1)*q**3*chi2y**2*chi2z/(8*(q+1)**4)
-5*(4*q+1)*q**3*chi2z**3/(8*(q+1)**4)
+chi1x*(+(15*(2*q+1)*q**2*chi2x*chi2z)/(4*(q+1)**4)
+(15*( q+2)*q *chi2x*chi1z)/(4*(q+1)**4))
+chi1y*(+(15*q**2*chi2y*chi1z)/(4*(q+1)**4)
+(15*q**2*chi2y*chi2z)/(4*(q+1)**4))
+chi1z*(+(15*q**2*(2*q+3)*chi2x**2)/(4*(q+1)**4)
-(15*q**2*( q+2)*chi2y**2)/(4*(q+1)**4)
-(15*q**2* chi2z**2)/(4*(q+1)**3)
-(103*q**5+145*q**4-27*q**3+252*q**2+670*q+348)/(32*(q+1)**6))
-(348*q**5+670*q**4+252*q**3-27*q**2+145*q+103)*q*chi2z/(32*(q+1)**6)
+chi1x**2*(+5*(q+4)*chi1z/(2*(q+1)**4) + 15*q*(3*q+2)*chi2z/(4*(q+1)**4))
+chi1y**2*(-5*(q+4)*chi1z/(8*(q+1)**4) - 15*q*(2*q+1)*chi2z/(4*(q+1)**4))
-15*q*chi1z**2*chi2z/(4*(q+1)**3) - 5*(q+4)*chi1z**3/(8*(q+1)**4))
###Output
_____no_output_____
###Markdown
Putting it all together, recall that$$p_t = \frac{q}{(1+q)^2}\frac{1}{r^{1/2}}\left(1 + \sum_{k=2}^7 \frac{a_k}{r^{k/2}}\right),$$where $k/2$ is the post-Newtonian order.
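If one wishes to inspect the series only through a given PN order, the truncation simply keeps the terms with $k \le 2\times(\text{PN order})$. A minimal sketch with placeholder coefficients (illustration only, not part of the NRPyPN module):

```python
# Illustration only: truncate the generic series at a chosen PN order by keeping
# the terms with k <= 2*PN_order. The a_k here are placeholders, not the actual
# NRPyPN coefficients constructed above.
import sympy as sp
q, r = sp.symbols('q r', positive=True)
a = {k: sp.Symbol('a%d' % k) for k in range(2, 8)}
PN_order = 2   # keep terms through 2PN, i.e., k/2 <= 2  <=>  k <= 4
p_t_truncated = q/(1+q)**2/sp.sqrt(r)*(1 + sum(a[k]/r**sp.Rational(k, 2)
                                               for k in range(2, 2*PN_order + 1)))
print(p_t_truncated)
```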
###Code
# Finally, sum the expressions for a_k to construct p_t as prescribed:
# p_t = q/(sqrt(r)*(1+q)^2) (1 + \sum_{k=2}^7 (a_k/r^{k/2}))
def f_p_t(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2
a[3] = a_3
a[4] = a_4
p_t__a_5_thru_a_6(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5
a[6] = a_6
p_t__a_7( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7
global p_t
p_t = 1 # Term prior to the sum in parentheses
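    # Note: a[0] and a[1] were initialized to zero by ixp.zerorank1() above and are
    # never assigned, so only the k=2..7 terms contribute to the sum below.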
for k in range(8):
p_t += a[k]/r**div(k,2)
p_t *= q / (1+q)**2 * 1/r**div(1,2)
# Second version, for validation purposes only.
def f_p_tv2(m1,m2, chi1U,chi2U, r):
q = m2/m1 # It is assumed that q >= 1, so m2 >= m1.
a = ixp.zerorank1(DIM=10)
p_t__a_2_thru_a_4v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[2] = a_2v2
a[3] = a_3v2
a[4] = a_4v2
p_t__a_5_thru_a_6v2(m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[5] = a_5v2
a[6] = a_6v2
p_t__a_7v2( m1,m2, chi1U[0],chi1U[1],chi1U[2], chi2U[0],chi2U[1],chi2U[2])
a[7] = a_7v2
global p_tv2
p_tv2 = 1 # Term prior to the sum in parentheses
for k in range(8):
p_tv2 += a[k]/r**div(k,2)
p_tv2 *= q / (1+q)**2 * 1/r**div(1,2)
###Output
_____no_output_____
###Markdown
Part 2: Validation against second transcription and corresponding Python module \[Back to [top](toc)\]$$\label{code_validation}$$ As a code validation check, we verify agreement between * the SymPy expressions transcribed from the cited published work on two separate occasions, and* the SymPy expressions generated in this notebook, and the corresponding Python module.
###Code
from NRPyPN_shortcuts import q, num_eval # Import needed input variable & numerical evaluation routine
f_p_t(m1,m2, chi1U,chi2U, q)
def error(varname):
print("ERROR: When comparing Python module & notebook, "+varname+" was found not to match.")
sys.exit(1)
# Validation against second transcription of the expressions:
f_p_tv2(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - p_tv2) != 0: error("p_tv2")
# Validation against corresponding Python module:
import PN_p_t as pt
pt.f_p_t(m1,m2, chi1U,chi2U, q)
if sp.simplify(p_t - pt.p_t) != 0: error("pt.p_t")
print("ALL TESTS PASS")
###Output
ALL TESTS PASS
###Markdown
Part 3: Validation against trusted numerical values (i.e., in Table V of [Ramos-Buades, Husa, and Pratten (2018)](https://arxiv.org/abs/1810.00036)) \[Back to [top](toc)\]$$\label{code_validationv2}$$
###Code
# Useful function for comparing published & NRPyPN results
def compare_pub_NPN(desc, pub,NPN,NPN_with_a5_chi1z_sign_error):
print("##################################################")
print(" "+desc)
print("##################################################")
print(str(pub) + " <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)")
print(str(NPN) + " <- Result from NRPyPN")
relerror = abs(pub-NPN)/pub
resultstring = "Relative error between NRPyPN & published: "+str(relerror*100)+"%"
if relerror > 1e-3:
resultstring += " <--- NOT GOOD! (see explanation below)"
else:
resultstring += " <--- EXCELLENT AGREEMENT!"
print(resultstring+"\n")
print(str(NPN_with_a5_chi1z_sign_error) + " <- Result from NRPyPN, with chi1z sign error in a_5 expression.")
# 1. Let's consider the case:
# * Mass ratio q=1, chi1=chi2=(0,0,0), radial separation r=12
pub_result = 0.850941e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0850940927209620 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 1.0, # must be >= 1
nr = 12.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.)
compare_pub_NPN("Case: q=1, nonspinning, initial separation 12",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 2. Let's consider the case:
# * Mass ratio q=1.5, chi1= (0,0,-0.6); chi2=(0,0,0.6), radial separation r=10.8
pub_result = 0.868557e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0867002374951143
NPN_result = num_eval(p_t,
qmassratio = 1.5, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.6,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.6)
compare_pub_NPN("Case: q=1.5, chi1z=-0.6, chi2z=0.6, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 3. Let's consider the case:
# * Mass ratio q=4, chi1= (0,0,-0.8); chi2=(0,0,0.8), radial separation r=11
pub_result = 0.559207e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0557629777874552
NPN_result = num_eval(p_t,
qmassratio = 4.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = -0.8,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.8)
compare_pub_NPN("Case: q=4.0, chi1z=-0.8, chi2z=0.8, initial separation 11.0",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
print("0.0558369 <- Second iteration value in pub result. Note that NRPyPN value is *closer* to this value.")
# 4. Let's consider the case:
# * Mass ratio q=2, chi1= (0,0,0); chi2=(−0.3535, 0.3535, 0.5), radial separation r=10.8
pub_result = 0.7935e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0793500403866190 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 2.0, # must be >= 1
nr = 10.8, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.,
nchi2x = -0.3535,
nchi2y = +0.3535,
nchi2z = +0.5)
compare_pub_NPN("Case: q=2.0, chi2x=-0.3535, chi2y=+0.3535, chi2z=+0.5, initial separation 10.8",
pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
# 5. Let's consider the case:
# * Mass ratio q=8, chi1= (0, 0, 0.5); chi2=(0, 0, 0.5), radial separation r=11
pub_result = 0.345755e-1 # Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018) https://arxiv.org/abs/1810.00036
NPN_with_a5_chi1z_sign_error = 0.0345584951081129 # should be unaffected by sign error, as chi1z=0.
NPN_result = num_eval(p_t,
qmassratio = 8.0, # must be >= 1
nr = 11.0, # Orbital separation
nchi1x = +0.,
nchi1y = +0.,
nchi1z = +0.5,
nchi2x = +0.,
nchi2y = +0.,
nchi2z = +0.5)
compare_pub_NPN("""
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
""",pub_result,NPN_result,NPN_with_a5_chi1z_sign_error)
###Output
##################################################
Case: q=8.0, chi1z=chi2z=+0.5, initial separation 11
Note: This one is weird. Clearly the value in the table
has a typo, such that the p_r and p_t values
should be interchanged; p_t is about 20% the
next smallest value in the table, and the
parameters aren't that different. We therefore
assume that this is the case, and find agreement
with the published result to about 0.07%, which
isn't the best, but given that the table values
seem to be clearly wrong, it's an encouraging
sign.
##################################################
0.0345755 <- Expected result, from Table V of Ramos-Buades, Husa, and Pratten (2018)
0.0345503689803291 <- Result from NRPyPN
Relative error between NRPyPN & published: 0.0726844721578464% <--- EXCELLENT AGREEMENT!
0.0345584951081129 <- Result from NRPyPN, with chi1z sign error in a_5 expression.
###Markdown
Part 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename[PN-p_t.pdf](PN-p_t.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
###Code
import os,sys # Standard Python modules for multiplatform OS-level functions
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("PN-p_t",location_of_template_file=os.path.join(".."))
###Output
Created PN-p_t.tex, and compiled LaTeX file to PDF file PN-p_t.pdf
|
notebooks/2016-10-09(Time constant effects for learning in time).ipynb | ###Markdown
Time constant effects for learning in timeIn this notebook I intend to illustrate, by means of visualization, the effect of the time constant on the learning process when we are learning in time (k > 0). We start as usual by loading all the required libraries
###Code
from __future__ import print_function
import subprocess
import sys
sys.path.append('../')
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
from connectivity_functions import get_beta, get_w
from connectivity_functions import calculate_probability, calculate_coactivations
from data_transformer import build_ortogonal_patterns
from network import BCPNN
# np.set_printoptions(suppress=True)
%matplotlib inline
matplotlib.rcParams.update({'font.size': 22})
###Output
_____no_output_____
###Markdown
After this, all the mechanisms for checking out the correct version-control commit should be loaded
###Code
run_old_version = False
if run_old_version:
hash_when_file_was_written = 'e8360ad5746b3094ee2c2cbe5591946e25f9eea3'
hash_at_the_moment = subprocess.check_output(["git", 'rev-parse', 'HEAD']).strip()
print('Actual hash', hash_at_the_moment)
print('Hash of the commit used to run the simulation', hash_when_file_was_written)
subprocess.call(['git', 'checkout', hash_when_file_was_written])
###Output
_____no_output_____
###Markdown
We first build the network and set the parameters; these should be varied to see their effects on the plots below
###Code
hypercolumns = 10
minicolumns = 10
N = 10 # Number of patterns
patterns_dic = build_ortogonal_patterns(hypercolumns, minicolumns)
patterns = list(patterns_dic.values())
patterns = patterns[:N]
P_ideal = calculate_coactivations(patterns)
p_ideal = calculate_probability(patterns)
w_ideal = get_w(P_ideal, p_ideal)
beta_ideal = get_beta(p_ideal)
dt = 0.001
T_training = 1.0
training_time = np.arange(0, T_training + dt, dt)
prng = np.random.RandomState(seed=0)
nn = BCPNN(hypercolumns, minicolumns, g_a=97.0, g_beta=1.0, g_w=1.0, g_I=10.0, prng=prng)
w_end = []
p_co_end = []
###Output
_____no_output_____
###Markdown
Then we run the training trials
###Code
nn.empty_history()
nn.randomize_pattern()
nn.k = 1.0
aux_counter = 0
for pattern in patterns:
history = nn.run_network_simulation(training_time, I=pattern, save=True)
w_end.append(history['w'][-1, ...])
p_co_end.append(history['p_co'][-1, ...])
aux_counter += 1
history = nn.history
o = history['o']
s = history['s']
z_pre = history['z_pre']
p_pre = history['p_pre']
p_post = history['p_post']
p_co = history['p_co']
beta = history['beta']
w = history['w']
adaptation = history['a']
distance_p = np.abs(p_pre - p_ideal)
distance_P = np.abs(p_co[-1, ...] - P_ideal)
distance_w = np.abs(w[-1, ...] - w_ideal)
print(distance_p.shape)
print(distance_w.shape)
###Output
(10010, 100)
(100, 100)
###Markdown
Plot the history
###Code
cmap = 'magma'
extent = [0, minicolumns * hypercolumns, aux_counter * T_training, 0]
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(221)
im1 = ax1.imshow(o, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax1.set_title('Unit activation')
ax2 = fig.add_subplot(222)
im2 = ax2.imshow(z_pre, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax2.set_title('Traces of activity')
ax3 = fig.add_subplot(223)
im3 = ax3.imshow(adaptation, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax3.set_title('Adaptation')
ax4 = fig.add_subplot(224)
im4 = ax4.imshow(p_pre, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax4.set_title('Probability')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.12, 0.05, 0.79])
fig.colorbar(im1, cax=cbar_ax)
print('Final probability', nn.p_pre)
###Output
Final probability [ 0.07840648 0.08161071 0.08622409 0.09133426 0.09699043 0.10322921
0.11014483 0.11776511 0.12579847 0.10849641 0.07840211 0.08161292
0.08622086 0.09133707 0.0969872 0.10323045 0.11013933 0.11776938
0.12579981 0.10850089 0.07840739 0.08161863 0.08621856 0.09133643
0.09698087 0.10324362 0.11013667 0.11776455 0.12579734 0.10849594
0.07840557 0.08161507 0.08622119 0.09132978 0.0969896 0.10323714
0.11014388 0.11777253 0.12578691 0.10849835 0.07841053 0.081614
0.08621745 0.09133272 0.09698902 0.10324308 0.1101268 0.11777428
0.12579332 0.10849882 0.07838884 0.08161773 0.08622069 0.09133192
0.09698997 0.10324284 0.1101464 0.11776837 0.12579584 0.10849739
0.07840708 0.08161133 0.08622493 0.09133898 0.09698382 0.10323014
0.11015128 0.11776585 0.12579622 0.10849037 0.07841327 0.0816046
0.08622315 0.09133748 0.09698738 0.10322685 0.11014568 0.11777233
0.12578841 0.10850086 0.07838355 0.08162418 0.08621803 0.09133533
0.09699349 0.1032405 0.11013811 0.11777291 0.12578633 0.10850757
0.07840552 0.08161245 0.08622106 0.09133983 0.09698608 0.10324246
0.11013643 0.11776386 0.12579499 0.10849732]
###Markdown
Plot the final weight matrixHere we plot how the weight matrix looks at the end of every learning step. That is, after the network has been running clamped to a particular pattern for T_training time.
###Code
cmap1 = 'coolwarm'
cmap2 = 'magma'
gs = gridspec.GridSpec(aux_counter, 2)
fig = plt.figure(figsize=(16, 12))
for index, (w, p_co) in enumerate(zip(w_end, p_co_end)):
ax = fig.add_subplot(gs[index, 0])
im = ax.imshow(w, cmap=cmap1, interpolation='None')
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, ax=ax, cax=cax)
ax = fig.add_subplot(gs[index, 1])
im = ax.imshow(p_co, cmap=cmap2, interpolation='None', vmin=0, vmax=1)
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, ax=ax, cax=cax)
fig = plt.figure(figsize=(16, 12))
plt.imshow(w, cmap=cmap1, interpolation='None')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Ideal w and PWe plot the ideal versions of w and P (not trained in time) for reference
###Code
cmap1 = 'coolwarm'
cmap2 = 'magma'
gs = gridspec.GridSpec(1, 2)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(gs[0, 0])
im = ax.imshow(w_ideal, cmap=cmap1, interpolation='None')
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, ax=ax, cax=cax)
ax = fig.add_subplot(gs[0, 1])
im = ax.imshow(P_ideal, cmap=cmap2, interpolation='None', vmin=0, vmax=1)
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, ax=ax, cax=cax)
###Output
_____no_output_____
###Markdown
Convergence of w and p_coHere we plot the difference between w and p_co and their ideal versions (not trained in time).
###Code
cmap1 = 'coolwarm'
cmap2 = 'magma'
gs = gridspec.GridSpec(1, 2)
fig = plt.figure(figsize=(16, 12))
ax = fig.add_subplot(gs[0, 0])
im = ax.imshow(distance_w, cmap=cmap1, interpolation='None')
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, ax=ax, cax=cax)
ax = fig.add_subplot(gs[0, 1])
im = ax.imshow(distance_P, cmap=cmap2, interpolation='None', vmin=0, vmax=1)
divider = make_axes_locatable(ax)
cax = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im, ax=ax, cax=cax)
###Output
_____no_output_____
###Markdown
RetrievalNow that we have trained our weights we can see what happens when we retrieve patterns from the network for a long time.
###Code
T_retrieval = 30.0
retrieval_time = np.arange(0, T_retrieval + dt, dt)
# First empty the history
nn.empty_history()
nn.reset_values(keep_connectivity=True)
# Run in retrieval mode
nn.randomize_pattern()
nn.k = 0
nn.g_a = 97.0
nn.run_network_simulation(retrieval_time, I=None, save=True)
o = nn.history['o']
s = nn.history['s']
z_pre = nn.history['z_pre']
p_pre = nn.history['p_pre']
cmap = 'magma'
extent = [0, minicolumns * hypercolumns, T_retrieval, 0]
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(221)
im1 = ax1.imshow(o, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax1.set_title('Unit activation')
ax2 = fig.add_subplot(222)
im2 = ax2.imshow(z_pre, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax2.set_title('Traces of activity')
ax3 = fig.add_subplot(223)
im3 = ax3.imshow(adaptation, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax3.set_title('Adaptation')
ax4 = fig.add_subplot(224)
im4 = ax4.imshow(p_pre, aspect='auto', interpolation='None', cmap=cmap, vmax=1, vmin=0, extent=extent)
ax4.set_title('Probability')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.12, 0.05, 0.79])
fig.colorbar(im1, cax=cbar_ax)
print(nn.history['o'].shape)
print(nn.g_a)
print(nn.o)
n_trials = 10
final_patterns = []
for i in range(n_trials):
nn.randomize_pattern()
nn.k = 0
nn.run_network_simulation(retrieval_time)
final_patterns.append(nn.o)
final_patterns
###Output
_____no_output_____
###Markdown
Git recoverHere we check out the master branch again
###Code
if run_old_version:
subprocess.call(['git', 'checkout', 'master'])
###Output
_____no_output_____ |
Notebooks/Part_4/.ipynb_checkpoints/LSTMs-checkpoint.ipynb | ###Markdown
A short & practical introduction to TensorFlow!Part 4The goal of this notebook is to train an LSTM character prediction model over [Text8](http://mattmahoney.net/dc/textdata) data.This is a personal wrap-up of all the material provided by [Google's Deep Learning course on Udacity](https://www.udacity.com/course/deep-learning--ud730), so all credit goes to them. Author: Pablo M. Olmos ([email protected])Date: March 2017
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import os
import numpy as np
import random
import string
import tensorflow as tf
import zipfile
from six.moves import range
from six.moves.urllib.request import urlretrieve
# Lets check what version of tensorflow we have installed. The provided scripts should run with tf 1.0 and above
print(tf.__version__)
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('XXX/textWordEmbeddings/text8.zip', 31344016) ## Change according to the folder where you saved the dataset provided
def read_data(filename):
with zipfile.ZipFile(filename) as f:
name = f.namelist()[0]
data = tf.compat.as_str(f.read(name))
return data
text = read_data(filename)
print('Data size %d' % len(text))
text[0:20]
###Output
_____no_output_____
###Markdown
Create a small validation set
###Code
valid_size = 1000
valid_text = text[:valid_size]
train_text = text[valid_size:]
train_size = len(train_text)
print(train_size, train_text[:64])
print(valid_size, valid_text[:64])
###Output
_____no_output_____
###Markdown
Utility functions to map characters to vocabulary IDs and back
###Code
vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '
first_letter = ord(string.ascii_lowercase[0])
def char2id(char):
if char in string.ascii_lowercase:
return ord(char) - first_letter + 1
elif char == ' ':
return 0
else:
print('Unexpected character: %s' % char)
return 0
def id2char(dictid):
if dictid > 0:
return chr(dictid + first_letter - 1)
else:
return ' '
print(char2id('a'), char2id('z'), char2id(' '), char2id('ï'))
print(id2char(1), id2char(26), id2char(0))
###Output
_____no_output_____
###Markdown
Function to generate a training batch for the LSTM model.
###Code
batch_size=64 ## Number of batches, but also number of segments in which we divide the text. We read batch_size
## batches in parallel, each read from a different segment. The implementation is not obvious, the
## key seems to be the zip function inside the for loop below
num_unrollings=10 ## Each sequence is num_unrolling character long
### NOW I GET IT!! Every batch is a batch_size times 27 (num letters) matrix. Every row correspond to a letter. Each letter
### comes from a different sequence of (num_unrollings) so that the 64 letters cannot be read together.
## In the next batch, we have the following letter for each of the 64 training sequences!!
class BatchGenerator(object):
def __init__(self, text, batch_size, num_unrollings):
self._text = text
self._text_size = len(text)
self._batch_size = batch_size
self._num_unrollings = num_unrollings
segment = self._text_size // batch_size #We split the text into batch_size pieces
self._cursor = [ offset * segment for offset in range(batch_size)] #Cursor pointing every piece
self._last_batch = self._next_batch()
#
def _next_batch(self):
"""Generate a single batch from the current cursor position in the data."""
batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float)
for b in range(self._batch_size):
batch[b, char2id(self._text[self._cursor[b]])] = 1.0 #One hot encoding
#print(self._text[self._cursor[b]])
self._cursor[b] = (self._cursor[b] + 1) % self._text_size
return batch
def next(self):
"""Generate the next array of batches from the data. The array consists of
the last batch of the previous array, followed by num_unrollings new ones.
"""
batches = [self._last_batch]
for step in range(self._num_unrollings):
batches.append(self._next_batch())
self._last_batch = batches[-1]
return batches
def characters(probabilities):
"""Turn a 1-hot encoding or a probability distribution over the possible
    characters back into its (most likely) character representation."""
return [id2char(c) for c in np.argmax(probabilities, 1)]
def batches2string(batches):
"""Convert a sequence of batches back into their (most likely) string
representation."""
s = [''] * batches[0].shape[0]
for b in batches:
s = [''.join(x) for x in zip(s, characters(b))] #Clever! The ZIP is the key function here!
return s
train_batches = BatchGenerator(train_text, batch_size, 10)
valid_batches = BatchGenerator(valid_text, 1, 1)
print(batches2string(train_batches.next()))
print(batches2string(train_batches.next()))
#OK with this one
def logprob(predictions, labels):
"""Log-probability of the true labels in a predicted batch."""
predictions[predictions < 1e-10] = 1e-10
return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]
#OK with this one
def sample_distribution(distribution):
"""Sample one element from a distribution assumed to be an array of normalized
probabilities.
"""
r = random.uniform(0,1)
s = 0
for i in range(len(distribution)):
s += distribution[i]
if s >= r:
return i
return len(distribution) - 1
#OK with this one
def sample(prediction):
"""Turn a (column) prediction into 1-hot encoded samples."""
p = np.zeros(shape=[1, vocabulary_size], dtype=np.float)
p[0, sample_distribution(prediction[0])] = 1.0
return p
def random_distribution():
"""Generate a random column of probabilities."""
b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])
return b / np.sum(b, 1)[:, None]
train_batches.next()[0].shape
###Output
_____no_output_____
###Markdown
Simple LSTM ModelRecall the fundamental modelAlso, the un-regularized cost function is\begin{align}J(\boldsymbol{\theta})=\frac{1}{N}\sum_{n=1}^N\sum_{t=1}^{T_n}d(\boldsymbol{y}_t^{(n)},\sigma(\boldsymbol{h}_t^{(n)}))\end{align}where $d(\cdot,\cdot)$ is the cross-entropy loss function. About the TF implementation below, see the following excellent [post](http://www.thushv.com/sequential_modelling/long-short-term-memory-lstm-networks-implementing-with-tensorflow-part-2/)> Now calculating logits for softmax is a little bit tricky. This is a temporal (time-based) network. So after each processing each num_unrolling batches through the LSTM cell, we update h_{t-1}=h_t and c_{t-1}=c_t before calculating logits and the loss. This is done by using tf.control_dependencies. What this does is that, logits will not be calculated until saved_output and saved_states are updated. Finally, as you can see, num_unrolling acts as the amount of history we are remembering.In other words, in the computation graph every time something is updated, all the dependent op nodes are updated and this is propagated through the graph. If we want to wait until the very end to compute the loss, we wait using the command tf.control_dependencies.About the zip() and zip(*) operators, see this [post](https://docs.python.org/2/library/functions.html#zip)
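For reference, a sketch of the gate equations implemented by `lstm_cell` below (a standard LSTM without peephole connections; the weight symbols follow the comments in the code, e.g. `ix` is $W^{ix}$ and `im` is $W^{ih}$, while `state` plays the role of $\boldsymbol{c}_t$ and `output` of $\boldsymbol{h}_t$):
\begin{align}
\boldsymbol{i}_t &= \sigma(\boldsymbol{x}_t W^{ix} + \boldsymbol{h}_{t-1} W^{ih} + \boldsymbol{b}_i), &
\boldsymbol{f}_t &= \sigma(\boldsymbol{x}_t W^{fx} + \boldsymbol{h}_{t-1} W^{fh} + \boldsymbol{b}_f), \\
\boldsymbol{g}_t &= \tanh(\boldsymbol{x}_t W^{gx} + \boldsymbol{h}_{t-1} W^{gh} + \boldsymbol{b}_g), &
\boldsymbol{o}_t &= \sigma(\boldsymbol{x}_t W^{ox} + \boldsymbol{h}_{t-1} W^{oh} + \boldsymbol{b}_o), \\
\boldsymbol{c}_t &= \boldsymbol{f}_t \odot \boldsymbol{c}_{t-1} + \boldsymbol{i}_t \odot \boldsymbol{g}_t, &
\boldsymbol{h}_t &= \boldsymbol{o}_t \odot \tanh(\boldsymbol{c}_t).
\end{align}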
###Code
num_nodes = 64
graph = tf.Graph()
with graph.as_default():
# Parameters:
#i(t) parameters
# Input gate: input, previous output, and bias.
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) ##W^ix
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ## W^ih
ib = tf.Variable(tf.zeros([1, num_nodes])) ##b_i
#f(t) parameters
# Forget gate: input, previous output, and bias.
fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) ##W^fx
fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ##W^fh
fb = tf.Variable(tf.zeros([1, num_nodes])) ##b_f
#g(t) parameters
# Memory cell: input, state and bias.
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) ##W^gx
cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ##W^gh
cb = tf.Variable(tf.zeros([1, num_nodes])) ##b_g
#o(t) parameters
# Output gate: input, previous output, and bias.
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1)) ##W^ox
om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1)) ##W^oh
ob = tf.Variable(tf.zeros([1, num_nodes])) ##b_o
# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) #h(t)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False) #s(t)
# Classifier weights and biases (over h(t) to labels)
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1))
b = tf.Variable(tf.zeros([vocabulary_size]))
# Definition of the cell computation.
def lstm_cell(i, o, state):
"""Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf
Note that in this formulation, we omit the various connections between the
previous state and the gates."""
input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)
forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)
update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb
state = forget_gate * state + input_gate * tf.tanh(update) #tf.tanh(update) is g(t)
output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)
return output_gate * tf.tanh(state), state #h(t) is output_gate * tf.tanh(state)
# Input data. Now it makes sense!!!
train_data = list()
for _ in range(num_unrollings + 1):
train_data.append(tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size]))
train_inputs = train_data[:num_unrollings]
train_labels = train_data[1:] # labels are inputs shifted by one time step.
# Unrolled LSTM loop.
outputs = list()
output = saved_output
aux = output
state = saved_state
for i in train_inputs:
output, state = lstm_cell(i, output, state)
outputs.append(output)
# State saving across unrollings.
with tf.control_dependencies([saved_output.assign(output),saved_state.assign(state)]):
#Classifier.
logits = tf.nn.xw_plus_b(tf.concat(axis=0,values=outputs), w, b)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf.concat(axis=0, values=train_labels),logits=logits))
# Optimizer.
"""Next, we are implementing the optimizer. Remember! we should use “gradient clipping” (tf.clip_by_global_norm)
to avoid “Exploding gradient” phenomenon. Also, we decay the learning_rate over time."""
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
""" optimizer.compute_gradients(loss) yields (gradient, value) tuples. gradients, v = zip(*optimizer.compute_gradients(loss))
performs a transposition, creating a list of gradients and a list of values.
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
then clips the gradients, and optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step)
re-zips the gradient and value lists back into an iterable of (gradient, value)
tuples which is then passed to the optimizer.apply_gradients method."""
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
optimizer = optimizer.apply_gradients(zip(gradients, v), global_step=global_step)
# Predictions.
train_prediction = tf.nn.softmax(logits)
# Sampling and validation eval: batch 1, no unrolling.
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])
saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]))
saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))
# Create an op that groups multiple operations.
reset_sample_state = tf.group(saved_sample_output.assign(tf.zeros([1, num_nodes])),
saved_sample_state.assign(tf.zeros([1, num_nodes])))
sample_output, sample_state = lstm_cell(sample_input, saved_sample_output, saved_sample_state)
with tf.control_dependencies([saved_sample_output.assign(sample_output),saved_sample_state.assign(sample_state)]):
sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))
num_steps = 1001
summary_frequency = 100
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
mean_loss = 0
for step in range(num_steps):
batches = train_batches.next()
feed_dict = dict()
for i in range(num_unrollings + 1):
feed_dict[train_data[i]] = batches[i]
_, l, predictions, lr = session.run(
[optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict)
mean_loss += l
if step % summary_frequency == 0:
if step > 0:
mean_loss /= summary_frequency
# The mean loss is an estimate of the loss over the last few batches.
print(
'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr))
mean_loss = 0
labels = np.concatenate(list(batches)[1:])
print('Minibatch perplexity: %.2f' % float(
np.exp(logprob(predictions, labels))))
if step % (summary_frequency * 10) == 0:
# Generate some samples.
print('=' * 80)
for _ in range(5):
feed = sample(random_distribution())
sentence = characters(feed)[0]
reset_sample_state.run()
for _ in range(79):
prediction = sample_prediction.eval({sample_input: feed})
feed = sample(prediction)
sentence += characters(feed)[0]
print(sentence)
print('=' * 80)
# Measure validation set perplexity.
reset_sample_state.run()
valid_logprob = 0
for _ in range(valid_size):
b = valid_batches.next()
predictions = sample_prediction.eval({sample_input: b[0]})
valid_logprob = valid_logprob + logprob(predictions, b[1])
print('Validation set perplexity: %.2f' % float(np.exp(
valid_logprob / valid_size)))
batches = train_batches.next()
batches[0]
###Output
_____no_output_____ |
Regression/Linear Models/HuberRegressor_Normalize_QuantileTransformer.ipynb | ###Markdown
HuberRegressor with Normalize & QuantileTransformer This code template is for regression analysis using a HuberRegressor, with QuantileTransformer as the feature transformation technique and Normalize as the feature rescaling technique. Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from sklearn.linear_model import HuberRegressor
from sklearn.preprocessing import Normalizer,QuantileTransformer
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
#x_values
features = []
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use the pandas library to read the CSV file from its storage path, and the head function to display the first few rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionIt is the process of reducing the number of input variables when developing a predictive model, done both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and the target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the Sklearn library don't handle string categorical data and null values, we have to explicitly remove or replace them. The snippet below defines functions which remove null values if any exist and convert string-class data in the dataset by encoding it to integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
Data RescalingFor rescaling the data, the **normalize** function of Sklearn is used (applied here via the Normalizer transformer).Normalization is the process of scaling individual samples to have unit norm. This process can be useful if you plan to use a quadratic form such as the dot-product or any other kernel to quantify the similarity of any pair of samples.The function normalize provides a quick and easy way to scale input vectors individually to unit norm (vector length). For more information on normalize [click here](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html)
###Code
normalize = Normalizer()
x_train = normalize.fit_transform(x_train)
x_test = normalize.transform(x_test)
###Output
_____no_output_____
###Markdown
ModelLinear regression model that is robust to outliers.The Huber Regressor optimizes the squared loss for the samples where |(y - X'w) / sigma| < epsilon and the absolute loss for the samples where |(y - X'w) / sigma| > epsilon, where w and sigma are parameters to be optimized. The parameter sigma makes sure that if y is scaled up or down by a certain factor, one does not need to rescale epsilon to achieve the same robustness. Note that this does not take into account the fact that the different features of X may be of different scales.This makes sure that the loss function is not heavily influenced by the outliers while not completely ignoring their effect. Feature Transformation QuantileTransformer Transform features using quantiles information.This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.[For more information](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html)
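As a sketch of the idea (the exact scikit-learn objective additionally optimizes over sigma and adds an L2 penalty; see the linked documentation), the Huber loss applied to the scaled residual $z = (y - X'w)/\sigma$ is
$$H_{\epsilon}(z) = \begin{cases} z^2 & \text{if } |z| < \epsilon \\ 2\epsilon|z| - \epsilon^2 & \text{otherwise,} \end{cases}$$
so small residuals are penalized quadratically while large residuals contribute only linearly, which limits the influence of outliers.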
###Code
model=make_pipeline(QuantileTransformer(),HuberRegressor())
model.fit(x_train,y_train)
###Output
_____no_output_____
###Markdown
Model AccuracyWe will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.score: The score function returns the coefficient of determination R2 of the prediction.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 93.01 %
###Markdown
> **r2_score**: The **r2_score** function computes the percentage of variability in the target that is explained by our model. > **mae**: The **mean absolute error** function calculates the total error (the average absolute distance between the real data and the predicted data) of our model. > **mse**: The **mean squared error** function squares the errors before averaging (penalizing the model for large errors).
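For reference, the standard definitions behind `r2_score`, `mean_absolute_error` and `mean_squared_error` are
$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_i \lvert y_i - \hat{y}_i \rvert, \qquad \mathrm{MSE} = \frac{1}{n}\sum_i (y_i - \hat{y}_i)^2,$$
where $y_i$ are the true values, $\hat{y}_i$ the predictions and $\bar{y}$ the mean of the true values.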
###Code
y_pred=model.predict(x_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
###Output
R2 Score: 93.01 %
Mean Absolute Error 9.80
Mean Squared Error 162.44
###Markdown
Prediction PlotFinally, we plot the true target values and the model's predictions for the first 20 test records on the same axes, so predicted and actual values can be compared directly.
###Code
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),y_pred[0:20], color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
###Output
_____no_output_____ |
04_ANOVA_enrichment.ipynb | ###Markdown
Enrichment testWhat is the probability of randomly selecting at least k "changed" reactions out of n "changed" reactions when selecting N out of M reactions?* k: number of differentially expressed reactions in a subsystem,* n: number of differentially expressed reactions in the model,* N: number of reactions in a subsystem,* M: number of reactions in the model.$P(X \geq k) = 1 - hypergeom.cdf(k-1, M, n, N)$
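As a quick illustration with hypothetical counts, the tail probability above can be computed directly with SciPy; for a model with M = 4000 reactions, n = 30 significant reactions overall, and a subsystem of N = 100 reactions of which k = 11 are significant:
```python
from scipy.stats import hypergeom

M, n, N, k = 4000, 30, 100, 11           # hypothetical counts, defined as above
p = 1 - hypergeom.cdf(k - 1, M, n, N)    # P(X >= k): enrichment p-value for the subsystem
print(p)
```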
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from itertools import permutations, product, combinations
from scipy.stats import pearsonr, spearmanr, mannwhitneyu, hypergeom
from itertools import permutations
from itertools import combinations
#https://www.scribbr.com/statistics/two-way-anova/
import statsmodels.api as sm
from statsmodels.formula.api import ols
import statsmodels.stats.multitest as multi
import warnings
from statsmodels.tools.sm_exceptions import ConvergenceWarning, HessianInversionWarning, ValueWarning
# ignore these warning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=HessianInversionWarning)
warnings.filterwarnings("ignore", category=ValueWarning)
warnings.filterwarnings("ignore", category=RuntimeWarning)
###Output
_____no_output_____
###Markdown
Settings
###Code
#analysis = "Fastcore"
analysis = "iMAT"
#analysis = "gimme"
#analysis = "init"
#analysis = "tinit"
analysis_type = "FVA"
#analysis_type = "pFBA"
fdr = True
randomization = False
###Output
_____no_output_____
###Markdown
Read the data
###Code
reactions = pd.read_csv("data\\"+analysis_type+"_"+analysis+".csv", sep=";").iloc[:,0]
if randomization:
if fdr:
df = pd.read_csv("results_ANOVA\\"+analysis_type+"_"+analysis+"_randomization_q.csv")
else:
df = pd.read_csv("results_ANOVA\\"+analysis_type+"_"+analysis+"_randomization_p.csv")
else:
if fdr:
df = pd.read_csv("results_ANOVA\\"+analysis_type+"_"+analysis+"_basic_q.csv")
else:
        df = pd.read_csv("results_ANOVA\\"+analysis_type+"_"+analysis+"_basic_p.csv")  # p-values (not FDR-corrected), mirroring the randomization branch
tests = list(df.columns[1:])
###Output
_____no_output_____
###Markdown
Fill the analysis data with all the reactionsAs a basis I take the union of the reactions included in the selected group of models.
###Code
df_reactions = pd.DataFrame(columns=["rxn"])
df_reactions["rxn"] = reactions
df = pd.merge(df, df_reactions, how="outer").fillna(1)
###Output
_____no_output_____
###Markdown
Get the subsystems data
###Code
df_subsystems = pd.read_csv("models\\iMM865_subsystems.txt", sep=";")
df_subsystems_f = df_subsystems.copy()
df_subsystems_f['rxn'] = df_subsystems_f['rxn']+'_f'
df_subsystems_b = df_subsystems.copy()
df_subsystems_b['rxn'] = df_subsystems_b['rxn']+'_b'
df_subsystems = pd.concat((df_subsystems, df_subsystems_b, df_subsystems_f), ignore_index=True).reindex()
df_subsystems.head()
###Output
_____no_output_____
###Markdown
Keep only the reactions that are present in the observed models
###Code
df_subsystems = df_subsystems[df_subsystems.rxn.isin(reactions)]
subsystems = df_subsystems.subsystem.dropna().unique()
#df_subsystems[df_subsystems['rxn'].str.endswith("_f")]
###Output
_____no_output_____
###Markdown
Merge
###Code
df = pd.merge(df, df_subsystems, how="left")
df = df[['rxn', 'subsystem'] + tests]
df.head()
###Output
_____no_output_____
###Markdown
Analysis
###Code
df_enrich = pd.DataFrame(columns = ['subsystem'] + tests)
df_enrich['subsystem'] = subsystems
n_all = len(reactions)
for test in tests:
df_test = df[[test,'subsystem']]
n_signif_all = (df_test[test] < 0.05).sum()
for subsystem in subsystems:
df_sub = df_test[df_test.subsystem == subsystem]
n_sub = len(df_sub)
n_signif_sub = (df_sub[test] < 0.05).sum()
M = n_all # all reactions in a model
n = n_signif_all # all significant
N = n_sub # reactions in a subsystem
k = n_signif_sub # significant in a subsystem
if n:
p = 1 - hypergeom.cdf(k-1, M, n, N)
else:
p = 1.0
df_enrich.loc[(df_enrich['subsystem'] == subsystem), test] = p
#print(k, M, n, N)
1-hypergeom.cdf(10, 4000, 30, 100)
###Output
_____no_output_____
###Markdown
Save the results
###Code
df_enrich_q = df_enrich.copy()
for c in df_enrich_q.columns[1:]:
df_enrich_q[c] = multi.multipletests(df_enrich_q[c], method = 'fdr_bh')[1]
df_enrich.columns = list(map(lambda x: x.replace("q(", "p("), df_enrich.columns))
df_enrich.to_csv("results_enrich\\" + analysis_type + "_" + analysis + "_ANOVA_enrich.csv", index=False)
df_enrich_q.to_csv("results_enrich\\" + analysis_type + "_" + analysis + "_ANOVA_enrich_q.csv", index=False)
df_enrich[(df_enrich[df_enrich.columns[1:]]<0.05).any(axis=1)]
df_enrich_q[(df_enrich_q[df_enrich_q.columns[1:]]<0.05).any(axis=1)]
###Output
_____no_output_____ |
tutorials/spark-da-cse255/004_Word_Count.ipynb | ###Markdown
Setup Notebook for Exercises IMPORTANT: Only modify cells which have the following comment:```python Modify this cell``` Do not add any new cells when you submit the homework
###Code
import findspark
findspark.init()
from pyspark import SparkContext
sc=SparkContext(master="local[4]")
import Tester.WordCount as WordCount
pickleFile="Tester/WordCount.pkl"
###Output
_____no_output_____
###Markdown
Importing all packages necessary to complete the homework
###Code
import numpy as np
WordCount.get_data()
###Output
_____no_output_____
###Markdown
ExerciseA `k`-mer is a sequence of `k` consecutive words. For example, the `3`-mers in the line `you are my sunshine my only sunshine` are* `you are my`* `are my sunshine`* `my sunshine my`* `sunshine my only`* `my only sunshine`For the sake of simplicity we consider only the `k`-mers that appear in a single line. In other words, we ignore `k`-mers that span more than one line.Write a function, using spark all the way to the end, to find the top 10 `k`-mers in a given text for a given `k`.Specifically write functions with the following signatures:```pythondef map_kmers(text,k): \\ text: an RDD of text lines. Lines contain only lower-case letters and spaces. Spaces should be ignored. \\ k: length of `k`-mers return singles \\ singles: an RDD of pairs of the form (tuple of k words,1)def count_kmers(singles): \\ singles: as above return counts \\ count: RDD of the form: (tuple of k words, number of occurrences)def sort_counts(count): \\ count: as above return sorted_counts \\ sorted_counts: RDD of the form (number of occurrences, tuple of k words) sorted in decreasing number of occurrences.``` Code:```python text_file = sc.textFile(u'../../Data/Moby-Dick.txt')print getkmers(text_file,5,2, map_kmers, count_kmers, sort_counts)``` Output:most common 2-mers1796: (u'of', u'the')1145: (u'in', u'the')708: (u'to', u'the')408: (u'from', u'the')376: (u'the', u'whale')
###Code
def map_kmers(text,k):
# text: an RDD of text lines. Lines contain only lower-case letters and spaces. Spaces should be ignored.
# k: length of `k`-mers
def generateKmers(line):
result = [];
words = [w for w in line.split() if w != "" and w != " "];
for i in range(len(words) - k + 1):
result.append((tuple(words[i: i + k]), 1));
return result;
singles = text.flatMap(generateKmers);
return singles
# singles: an RDD of pairs of the form (tuple of k words,1)
def count_kmers(singles):
# singles: as above
count = singles.reduceByKey(lambda a, b: a + b);
return count
    # count: RDD of the form: (tuple of k words, number of occurrences)
def sort_counts(count):
# count: as above
sorted_count = count.map(lambda (v, c): (c, v)).sortByKey(False);
return sorted_count
    # sorted_counts: RDD of the form (number of occurrences, tuple of k words) sorted in decreasing number of occurrences
# Do Not modify this cell
def getkmers(text_file, l,k, map_kmers, count_kmers, sort_counts):
# text_file: the text_file RDD read above
# k: k-mers
# l: l most common k-mers
import re
def removePunctuation(text):
return re.sub("[^0-9a-zA-Z ]", " ", text)
text = text_file.map(removePunctuation)\
.map(lambda x: x.lower())
singles=map_kmers(text,k)
count=count_kmers(singles)
sorted_counts=sort_counts(count)
C=sorted_counts.take(l)
print 'most common %d-mers\n'%k,'\n'.join(['%d:\t%s'%c for c in C])
# First, check that the text file is where we expect it to be
%ls -l ../../Data/Moby-Dick.txt
text_file = sc.textFile(u'../../Data/Moby-Dick.txt')
# Print the output of the aggregate function for top 5 2-mers
getkmers(text_file,5,2, map_kmers, count_kmers, sort_counts)
import Tester.WordCount as WordCount
WordCount.exercise(pickleFile, map_kmers, count_kmers, sort_counts, sc)
###Output
_____no_output_____ |
archive/2015/week17/Divide and conquer.ipynb | ###Markdown
We solve complex problems by breaking them down into smaller ones
###Code
def max2(x1, x2):
if x1 < x2:
return x2
else:
return x1
def max4(x1, x2, x3, x4):
pass
###Output
_____no_output_____
###Markdown
1. We split the four numbers into two pairs of two: (x1, x2) and (x3, x4).2. We find the maximum of each pair. We are left with two numbers (the pair maxima `pair1_max` and `pair2_max`).3. We already know how to find the maximum of two numbers; using that, we find the result.
###Code
def max4(x1, x2, x3, x4):
pair1_max = max2(x1, x2)
pair2_max = max2(x3, x4)
result = max2(pair1_max, pair2_max)
return result
max4(1, 2, 3, 4)
###Output
_____no_output_____
###Markdown
With `return` we can return any expression; there is no need to define a variable for the result
###Code
def max4(x1, x2, x3, x4):
pair1_max = max2(x1, x2)
pair2_max = max2(x3, x4)
return max2(pair1_max, pair2_max)
max4(1, 2, 3, 4)
###Output
_____no_output_____
###Markdown
A function call (e.g. `max2(x1, x2)`) is an expression that can be used as an argument to another function.That is, we don't need to define the additional variables `pair1_max` and `pair2_max`.Here the inner expressions `max2(x1, x2)` and `max2(x3, x4)` are evaluated first, and the outer function is then applied to their results.
###Code
def max4(x1, x2, x3, x4):
return max2(max2(x1, x2), max2(x3, x4))
max4(1, 2, 3, 4)
###Output
_____no_output_____
###Markdown
Using `max4` we can easily define `avg_min3`, which finds the arithmetic mean of the three smaller of the four numbers.
###Code
def avg_min3(x1, x2, x3, x4):
sum4 = x1 + x2 + x3 + x4
sum_min3 = sum4 - max4(x1, x2, x3, x4)
return sum_min3 / 3
avg_min3(1, 2, 3, 4)
###Output
_____no_output_____ |
flair/flair_ner.ipynb | ###Markdown
Flair NER Tagging Pipeline Navigation:* [General Info](info)* [Preparing Dataset](prepare)* [Adding BIOES Annotation](bioes)* [Training with Flair](train)* [Using Trained Model for Prediction](predict)* [Prediction and Saving to CONLL-U](save) General Info `Libraries needed:` `corpuscula.conllu` (conllu parsing); `flair` (training); `tqdm` (displaying progress)`Pre-Trained Embeddings used in this example:` [DeepPavlov Wiki+Lenta](http://files.deeppavlov.ai/embeddings/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize.bin). Preprocessing included: `nltk wordpunсt_tokenize``Pipeline Input:` CONLL-U parsed text file.`Processing:` Extracting tokens and named entities as separate lists of lists of strings, and adding BIOES tags to entities.`Train Input:` `{train,dev,test}.txt` files in BIOES format as shown [here](https://en.wikipedia.org/wiki/Inside–outside–beginning_(tagging))`Sample train input:````здравствуйте Oрасскажите Oсправочной S-Departmentаэропорта S-Organizationгород B-Geoтомск E-Geo````Sample inference (predict) result:````4 больница детская городская больница номер 4 города сочи приемный покой ````Pipeline Output:` JSON with NER Parsing (list of lists of dict)`Sample pipeline output:````[[{'word': 'здравствуйте', 'entity': None}, {'word': 'будьте', 'entity': None}, {'word': 'добры', 'entity': None}, {'word': 'подскажите', 'entity': None}, {'word': 'мне', 'entity': None}, {'word': 'регистратуру', 'entity': 'Department'}, {'word': 'кожно', 'entity': 'Organization'}, {'word': 'венерического', 'entity': 'Organization'}, {'word': 'диспансера', 'entity': 'Organization'}], ]``` Preparing Dataset
###Code
from corpuscula.conllu import Conllu
def read_corpus(corpus=None, silent=False):
if isinstance(corpus, str):
corpus = Conllu.load(corpus, **({'log_file': None} if silent else{}))
elif callable(corpus):
corpus = corpus()
parsed_corpus = []
parsed_ne = []
for sent in corpus:
curr_sent = [x['FORM'] for x in sent[0] if x['FORM'] and '-' not in x['ID']]
curr_ne = [x['MISC']['NE'] if 'NE' in x['MISC'].keys() else 'O' for x in sent[0]]
parsed_corpus.append(curr_sent)
parsed_ne.append(curr_ne)
return parsed_corpus, parsed_ne
# replace file names, if necessary
parsed_corpus_train, named_entities_train = read_corpus('result_ner_train.conllu')
parsed_corpus_dev, named_entities_dev = read_corpus('result_ner_dev.conllu')
parsed_corpus_test, named_entities_test = read_corpus('result_ner_test.conllu')
parsed_corpus_train[:1], named_entities_train[:1]
###Output
_____no_output_____
###Markdown
Adding BIOES Annotation
###Code
def bioes_annotation(ne_list):
# Adding BIOES-annotation for future training with Flair
prev_ne = 'O'
bioes_ne = []
for i, ne in enumerate(ne_list):
if ne == 'O':
prev_ne = 'O'
elif prev_ne == 'O' or ne != prev_ne.split('-')[1]:
if i < len(ne_list)-1 and ne == ne_list[i+1]:
ne = 'B-' + ne
else:
ne = 'S-' + ne
elif ne == prev_ne.split('-')[1] and prev_ne.split('-')[0] in ['B', 'I']:
if i < len(ne_list)-1 and ne == ne_list[i+1]:
ne = 'I-' + ne
else:
ne = 'E-' + ne
prev_ne = ne
bioes_ne.append(ne)
return bioes_ne
bio_ne_train = [bioes_annotation(ne_seq) for ne_seq in named_entities_train]
bio_ne_dev = [bioes_annotation(ne_seq) for ne_seq in named_entities_dev]
bio_ne_test = [bioes_annotation(ne_seq) for ne_seq in named_entities_test]
bio_ne_train[:1]
# Modify paths and file names, if necessary
import os
dn = './ner_bioes/'
if not os.path.isdir(dn):
os.mkdir(dn)
with open(os.path.join(dn, 'train.txt'), 'wt', encoding='utf-8') as f:
for i in range(len(parsed_corpus_train)):
[print('\n'.join([' '.join(pair) for pair in list(zip(parsed_corpus_train[i],
bio_ne_train[i]))]),
file=f)]
print(file=f)
with open(os.path.join(dn, 'dev.txt'), 'wt', encoding='utf-8') as f:
for i in range(len(parsed_corpus_dev)):
[print('\n'.join([' '.join(pair) for pair in list(zip(parsed_corpus_dev[i],
bio_ne_dev[i]))]),
file=f)]
print(file=f)
with open(os.path.join(dn, 'test.txt'), 'wt', encoding='utf-8') as f:
for i in range(len(parsed_corpus_test)):
[print('\n'.join([' '.join(pair) for pair in list(zip(parsed_corpus_test[i],
bio_ne_test[i]))]),
file=f)]
print(file=f)
###Output
_____no_output_____
###Markdown
Training with Flair
###Code
# Uncomment lines below to install Flair and download pre-trained
# embeddings if not done yet
#!pip install flair
#!wget -P ./resources http://files.deeppavlov.ai/embeddings/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize.bin
import flair, torch
device = 'cuda:2'
flair.device = torch.device(device)
from flair.data import Corpus
from flair.datasets import ColumnCorpus
# need to figure out if these can be used with custom embeddings. Use FastTest for now.
# from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings
from flair.embeddings import FastTextEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
import torch
import sys
from typing import List
# 1. Loading our corpus
# define columns (it is possible to add more columns, example: pos)
columns = {0: 'text', 1: 'ner'}
# this is the folder in which train, test and dev files reside
data_folder = './ner_bioes/'
# init a corpus using column format, data folder and the names
# of the train, dev and test files
print('Loading a corpus...')
corpus: Corpus = ColumnCorpus(data_folder, columns,
train_file='train.txt',
test_file='test.txt',
dev_file='dev.txt')
print(corpus)
print()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make a tag dictionary from the corpus
print('Make a tag dictionary...')
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
print(tag_dictionary)
print()
# 4. initialize embeddings
print('Loading embeddings...', end='')
embeddings = FastTextEmbeddings(
'./resources/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize.bin'
)
print(' done.')
# 5. initialize sequence tagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type,
use_crf=True)
# 6. initialize trainer
"""
Initialize a model trainer
:param model: The model that you want to train. The model should
inherit from flair.nn.Model
:param corpus: The dataset used to train the model, should be of type Corpus
:param optimizer: The optimizer to use (typically SGD or Adam) [SGD by default]
:param epoch: The starting epoch (normally 0 but could be higher
if you continue training model)
:param use_tensorboard: If True, writes out tensorboard information
"""
trainer: ModelTrainer = ModelTrainer(model=tagger,
corpus=corpus)
#checkpoint = 'resources/taggers/example-ner/checkpoint.pt'
#trainer = ModelTrainer.load_checkpoint(checkpoint, corpus)
# 7. start training
'''
All possible parameters (with default values):
learning_rate: float = 0.1,
mini_batch_size: int = 32,
mini_batch_chunk_size: int = None,
max_epochs: int = 100,
anneal_factor: float = 0.5,
patience: int = 3,
min_learning_rate: float = 0.0001,
train_with_dev: bool = False,
monitor_train: bool = False,
monitor_test: bool = False,
embeddings_storage_mode: str = 'cpu' (other modes: 'none', 'gpu')
checkpoint: bool = False, # if True, model training can be resumed later
save_final_model: bool = True,
anneal_with_restarts: bool = False,
batch_growth_annealing: bool = False,
shuffle: bool = True,
param_selection_mode: bool = False,
num_workers: int = 6,
sampler=None,
use_amp: bool = False,
amp_opt_level: str = "O1",
eval_on_train_fraction=0.0,
eval_on_train_shuffle=False,
'''
trainer.train('resources/taggers/example-ner',
learning_rate=0.1,
mini_batch_size=32,
embeddings_storage_mode='gpu',
max_epochs=150)
# 8. plot weight traces (optional)
from flair.visual.training_curves import Plotter
plotter = Plotter()
plotter.plot_weights('resources/taggers/example-ner/weights.txt')
###Output
Weights plots are saved in resources/taggers/example-ner/weights.png
###Markdown
Using Trained Model for Prediction
###Code
import flair, torch
device = 'cuda:2'
flair.device = torch.device(device)
from flair.data import Sentence
from flair.models import SequenceTagger
# load the model you trained
print('Loading model...')
model = SequenceTagger.load('resources/taggers/example-ner/best-model.pt')
print('done.')
# create example sentence
sentence = Sentence('Москва - город в России')
# predict tags and print
model.predict(sentence)
print(sentence.to_tagged_string())
# Expected output: `Москва <S-Geo> - город в России <S-Geo>`
from collections import OrderedDict
from tqdm import tqdm
def flair_parse(sents):
sents = [' '.join(sent) for sent in sents]
for idx, sent in enumerate(tqdm(sents)):
sent = Sentence(sent)
model.predict(sent)
sent = sent.to_tagged_string().split()
last_idx = len(sent) - 1
res = []
for idx, token in enumerate(sent, start=1):
if not token.startswith('<'):
next_token = sent[idx] if idx <= last_idx else ''
res.append({
'ID': str(idx),
'FORM': token,
'LEMMA': None,
'UPOS': None,
'XPOS': None,
'FEATS': OrderedDict(),
'HEAD': None,
'DEPREL': None,
'DEPS': None,
'MISC': OrderedDict(
[('NE', next_token[3:-1])] if next_token.startswith('<') else []
)
})
yield res
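# Usage sketch (assumptions): `sents` is expected to be an iterable of token lists,
# e.g. [['Москва', '-', 'город', 'в', 'России'], ...], as suggested by the ' '.join above.
# Each yielded item is a list of CONLL-U-style token dicts, with the predicted NE tag
# (if any) stored under MISC['NE']; `parsed_corpus_test` used below is assumed to have this shape.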
###Output
_____no_output_____
###Markdown
Prediction and Saving Results to CONLL-U
###Code
from corpuscula import Conllu
Conllu.save(flair_parse(parsed_corpus_test), 'flair_syntagrus.conllu',
fix=True, log_file=None)
###Output
100%|██████████| 3798/3798 [00:07<00:00, 503.58it/s]
|
coding-activities/Sieve_of_Eratosthenes.ipynb | ###Markdown
The code below is a literal replication of the Sieve of Eratosthenes. The program makes a list up to n. It starts with 2 and then ~~crosses off~~ removes all the remaining multiples of 2. It then moves to the next number and removes the remaining multiples of that number, repeating until it reaches n.
###Code
#ask for number
n = input("Enter a number: ")
#create a list up to n
num_list = []
elem = 1
for i in range(int(n)-1):
elem = elem + 1
num_list.append(elem)
#remove composites
for mod in num_list:
for num in num_list:
if num > mod and num % mod == 0:
num_list.remove(num)
print("The prime numbers up to " + n + " are: " + str(num_list))
###Output
Enter a number: 20
The prime numbers up to 20 are: [2, 3, 5, 7, 11, 13, 17, 19]
###Markdown
u/17291 on reddit was kind enough to point out that the code above is inefficient for n > $10^5$ for the following reasons.>1. You're doing a whole bunch of unnecessary checks to see if a number is divisible by `mod`. For example, if num is divisible by 23, there is no point in checking if (num + 1) is divisible by 23—you can just skip ahead to (num + 23).>>2. `remove` is going to slow you down considerably because it has to check every element in the list to see if it matches `num`.>>A better solution is to create a list of booleans, where True means that it's a prime and False means it isn't. Start with 2 and then skip-count by 2s setting every multiple of 2 to False (other than 2 * 1, of course).>>Once you've reached the end of the list, increment `prime` until you've found the next prime (i.e., the next number that's still True). Now, skip-count by that number. Rinse and repeat.u/17291 was also kind enough to give some sample code found below.
###Code
def sieve(n):
# All numbers are prime to begin with
num_list = [True] * (n + 1)
prime = 2
while prime < n + 1:
# Skip count, setting all multiples of `prime` to be composite
for i in range(prime * 2, n + 1, prime):
num_list[i] = False
# Advance `prime` until we've found the next prime
prime += 1
while prime < n + 1 and not num_list[prime]:
prime += 1
# Return a list of all primes (i.e., every value from num_list that's True)
return [n for n in range(2, n + 1) if num_list[n]]
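# Quick check (not executed here): sieve(20) is expected to return
# [2, 3, 5, 7, 11, 13, 17, 19], matching the output of the literal version above.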
###Output
_____no_output_____ |
BoeingCamp-Day1 (Answer).ipynb | ###Markdown
Boeing Programming Camp Day 1In this lecture we do a quick review of Python. Keep in mind that no single lecture (or course!) can teach you how to code. We would highly suggest reading the [How to Think Like a Computer Scientist: Learning with Python 3 Documentation](https://media.readthedocs.org/pdf/howtothink/latest/howtothink.pdf) textbook for more information!
###Code
from numpy.random import randint
###Output
_____no_output_____
###Markdown
AlgorithmA list of steps to finish a task. PrintLet's write our first program.
###Code
print('Hello World!')
###Output
Hello World!
###Markdown
Exercise-1print your name
###Code
print('Chris')
###Output
Chris
###Markdown
VariablesA placeholder for a piece of information that can change. Example
###Code
leg_height = 10
torso_height = 8
head_height = 2
robot_height = leg_height + torso_height + head_height
print(robot_height)
###Output
20
###Markdown
Exercise-2 Let's change the variables
###Code
leg_height = 7
torso_height = 5
head_height = 2
robot_height = leg_height + torso_height + head_height
print(robot_height)
###Output
14
###Markdown
Exercise-3 Without using a loop, print numbers between 1 and 10
###Code
print(1)
print(2)
print(3)
print(4)
print(5)
print(6)
print(7)
print(8)
print(9)
print(10)
###Output
1
2
3
4
5
6
7
8
9
10
###Markdown
Ohhhhh, it was too painful and repetitive!!! We want to do more than this one task over the next five days. LoopsSometimes we want to repeat things a certain number of times, but we want to keep track of values as we do. This is where a loop comes in handy. When you use a loop, you know right from the start what your beginning value is, what your ending value is, and how much the value changes each time through the loop. Example
###Code
count = 1
while count <= 10:
print(count)
count = count + 1
###Output
1
2
3
4
5
6
7
8
9
10
###Markdown
Modify above example
###Code
count = 0
while count <= 8:
print(count)
count = count + 2
###Output
0
2
4
6
8
###Markdown
Making a fun game with a loop
###Code
starting_value = randint(1, 6)
print(starting_value)
stopping_value = randint(1, 6) + randint(1, 6) + randint(1, 6)
print(stopping_value)
interval = randint(1, 6)
print(interval)
counter = starting_value
print(counter)
total = 0
while counter < stopping_value:
total = total + counter
print('Total = ', total)
counter = counter + interval
print('Counter = ', counter)
###Output
Total = 3
Counter = 6
Total = 9
Counter = 9
###Markdown
Snack Time !!! IF statementIt checks if something is true or false. Example
###Code
time = 7
if time < 7:
print('Take bus')
if time >= 7:
print('Take subway')
###Output
Take subway
###Markdown
Programmers are lazy, so instead of typing multiple if statements, we can use if/else.
###Code
if time < 7:
print('Take bus')
else:
print('Take subway')
###Output
Take subway
###Markdown
Different ways to compare two things: equal ==, not equal !=, greater than > / less than <, greater than or equal >= / less than or equal <=
###Code
time = 11
if time < 7:
print('Take bus')
elif time >= 6.5 and time <= 9:
print('Take subway')
else:
print('Take time machine')
###Output
Take time machine
###Markdown
Exercise-4 Make a new variable, call it "cals", and try to implement this table.

| cals | print |
|---------|------------------|
| less than 100 | you are Mr. Burns |
| between 100 and 1000 | you are Maggie |
| between 1000 and 1500 | you are Lisa |
| between 1500 and 2000 | you are Bart |
| between 2000 and 25000 | you are Marge |
| greater than 25000 | you are Homer |
###Code
cal = 3453
if cal < 100:
print('you are Mr. Burns')
elif 100 <= cal and cal < 1000:
print('you are Maggie')
elif 1000 <= cal and cal < 1500:
print('you are Lisa')
elif 1500 <= cal and cal < 2000:
    print('you are Bart')
elif 2000 <= cal and cal < 25000:
print('you are Marge')
else:
print('you are Homer')
###Output
you are Marge
###Markdown
How to get input from the user
###Code
name = input('please enter your name: ')
print(name)
###Output
Ali
###Markdown
Problems Problem-1: Ask the user for their name. If it is your name or the name of your best friend, print 'Hello [entered name]'. Otherwise, print 'access denied!'.
###Code
name = input('please enter your name: ')
if name == 'Jake' or name == 'Chris':  # your name or your best friend's name
    print('Hello ' + name + '!')
else:
    print('Access denied!')
###Output
Hello Jake!
###Markdown
Converting String to Integer int() is the Python standard built-in function to convert a string into an integer value. Example
###Code
num = input('please enter a number: ')
num = int(num)
num
###Output
_____no_output_____
###Markdown
Problem 2: Get a number from the user. Output the sum of all numbers from 1 to the entered number. Examples: Input: 5, program calculates 1+2+3+4+5, Output: 15. Input: 10, program calculates 1+2+3+4+5+6+7+8+9+10, Output: 55
###Code
num = int(input("Enter a number: "))
count = 1
total = 0
while(count <= num):
total = total + count
count = count + 1
print(total)
###Output
Enter a number: 5
15
###Markdown
Problem-3:Make a simple calculator. First, get two numbers from the user, and a function ('add', 'sub', 'mul', 'div'). Then print the result.
###Code
n1 = input('please enter the first number: ')
n1 = int(n1)
n2 = input('please enter the second number: ')
n2 = int(n2)
function = input('please enter a function, you can choose one of these functions (add, sub, mul, div): ')
if function == 'add':
print(n1+n2)
elif function == 'sub':
print(n1-n2)
elif function == 'mul':
print(n1*n2)
elif function == 'div':
print(n1/n2)
else:
print('Wrong function')
###Output
0.8333333333333334
|
notebooks/preprocessing-v2.ipynb | ###Markdown
Preprocessing:
###Code
import numpy as np
import pandas as pd
import logging
import os
from dotenv import find_dotenv, load_dotenv
import datetime
import glob
from os.path import abspath
from pathlib import Path
from inspect import getsourcefile
from datetime import datetime
import math
import argparse
import sys
import tensorflow as tf
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import OneHotEncoder
nb_dir = os.path.join(Path(os.getcwd()).parents[0], 'src', 'data')
if nb_dir not in sys.path:
sys.path.insert(0, nb_dir)
import get_raw_data as grd
import data_classes
import Normalizer
DT_FLOAT = np.float32
DT_BOOL = np.uint8
RANDOM_SEED = 123
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)
# logger.propagate = False # it will not log to console.
RAW_DIR = os.path.join(Path(os.getcwd()).parents[0], 'data', 'raw')
PRO_DIR = os.path.join(Path(os.getcwd()).parents[0], 'data', 'processed')
print(RAW_DIR, PRO_DIR)
def update_parser(parser):
"""Parse the arguments from the CLI and update the parser."""
parser.add_argument(
'--prepro_step',
type=str,
default='preprocessing', #'slicing', 'preprocessing'
help='To execute a preprocessing method')
#this is for allfeatures_preprocessing:
parser.add_argument(
'--train_period',
type=int,
nargs='*',
default=[121,323], #[121,279], #[156, 180], [121,143], # 279],
help='Training Period')
parser.add_argument(
'--valid_period',
type=int,
nargs='*',
default=[324,329], #[280,285], #[181,185], [144,147],
help='Validation Period')
parser.add_argument(
'--test_period',
type=int,
nargs='*',
default=[330, 342], #[286, 304], # [186,191], [148, 155],
help='Testing Period')
parser.add_argument(
'--prepro_dir',
type=str,
default='chuncks_random_c1mill',
help='Directory with raw data inside data/raw/ and it will be the output directory inside data/processed/')
parser.add_argument(
'--prepro_chunksize',
type=int,
default=500000,
help='Chunk size to put into the h5 file...')
parser.add_argument(
'--prepro_with_index',
type=bool,
default=True,
help='To keep indexes for each record')
parser.add_argument(
'--ref_norm',
type=bool,
default=True,
help='To execute the normalization over the raw inputs')
return parser.parse_known_args()
FLAGS, UNPARSED = update_parser(argparse.ArgumentParser())
#these are the more important parameters for preprocessing:
FLAGS.prepro_dir='chuncks_random_c1mill' #this directory must be the same inside 'raw' and processed directories.
FLAGS.prepro_chunksize=500000
FLAGS.train_period=[121,323] #[121,279] #[121, 143]
FLAGS.valid_period=[324,329] #[280,285] #[144, 147]
FLAGS.test_period=[330,342] #[286,304] #[148, 155]
FLAGS.prepro_with_index = True
print(FLAGS)
glob.glob(os.path.join(RAW_DIR, FLAGS.prepro_dir,"*.txt"))
# from IPython.core.debugger import Tracer; Tracer()()
def allfeatures_extract_labels(data, columns='MBA_DELINQUENCY_STATUS_next'):
'''Extract the labels from Dataset, order-and-transform them into one-hot matrix of labels.
Args:
data (DataFrame): Input Dataset which is modified in place.
columns (string): Name of the class column.
Returns:
one-hot matrix of labels of shape: [data.shape[0], 7].
Raises:
'''
logger.name = 'allfeatures_extract_labels'
if (type(columns)==str):
indices = [i for i, elem in enumerate(data.columns) if columns in elem] # (alphabetically ordered)
else:
indices = columns
if indices:
labels = data[data.columns[indices]]
data.drop(data.columns[indices], axis=1, inplace=True)
logger.info('...Labels extracted from Dataset...')
return labels
else: return None
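# tag_chunk below serializes rows with _int64_feature/_float_feature when writing TFRecords,
# but those helpers are not defined earlier in this notebook. The definitions below are the
# standard tf.train.Feature wrappers assumed by that code path (a sketch, only needed for tfrec output).
def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[int(v) for v in value]))
def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[float(v) for v in value]))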
def tag_chunk(tag, label, chunk, chunk_periods, tag_period, log_file, with_index, tag_index, hdf=None, tfrec=None):
'''Extract records filtering by chunk_periods parameter, define indexes in case of with_index=True,
extract labels and save the results into the target file.
Args:
chunk (DataFrame): Input Dataset which is modified in place.
tag (string): 'train', 'valid' or 'test'
chunk_periods (integer array): an array containing all periods into the chunk.
tag_period (integer array): an array of form [init_period, end_period] for the correspond tag.
log_file (Logger): An object of the log file.
with_index (boolean): If true it will be saved the indexes.
tag_index (int): an index that accumulates the size of the processed chunk.
hdf or tfrec (HDFStore or TFRecords): an object of the target file. Only one must be distint of None.
Returns:
tag_index (int): tag_index updated.
Raises:
'''
inter_periods = list(chunk_periods.intersection(set(range(tag_period[0], tag_period[1]+1))))
log_file.write('Periods corresponding to ' + tag +' period: %s\r\n' % str(inter_periods))
p_chunk = chunk.loc[(slice(None), inter_periods), :]
log_file.write('Records for ' + tag + ' Set - Number of rows: %d\r\n' % (p_chunk.shape[0]))
print('Records for ' + tag + ' Set - Number of rows:', p_chunk.shape[0])
if (p_chunk.shape[0] > 0):
if (with_index==True):
# p_chunk.index = pd.MultiIndex.from_tuples([(i, x[1], x[2],x[3]) for x,i in zip(p_chunk.index, range(tag_index, tag_index + p_chunk.shape[0]))])
p_chunk.reset_index(inplace=True)
allfeatures_drop_cols(p_chunk, ['PERIOD'])
p_chunk.set_index('DELINQUENCY_STATUS_NEXT', inplace=True) #1 index
else:
p_chunk.reset_index(drop=True, inplace=True)
labels = allfeatures_extract_labels(p_chunk, columns=label)
p_chunk = p_chunk.astype(DT_FLOAT)
labels = labels.astype(np.int8)
if (p_chunk.shape[0] != labels.shape[0]) :
print('Error in shapes:', p_chunk.shape, labels.shape)
else :
if (hdf!=None):
hdf.put(tag + '/features', p_chunk, append=True, index=True) #data_columns=p_chunk.columns.values), index=False
hdf.put(tag + '/labels', labels, append=True, index=True) #data_columns=labels.columns.values)
hdf.flush()
elif (tfrec!=None):
for row, lab in zip(p_chunk.values, labels.values):
feature = {tag + '/labels': _int64_feature(lab),
tag + '/features': _float_feature(row)}
# Create an example protocol buffer
example = tf.train.Example(features=tf.train.Features(feature=feature))
tfrec.write(example.SerializeToString())
tfrec.flush()
tag_index += p_chunk.shape[0]
return tag_index
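# Resulting file layout (as written above): each HDF5 target holds '<tag>/features' and
# '<tag>/labels' appended chunk by chunk, and each TFRecord example carries the same two keys,
# where <tag> is 'train', 'valid' or 'test'.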
def allfeatures_drop_cols(data, columns):
'''Exclude from the dataset 'data' the descriptive columns as parameters.
Args:
data (DataFrame): Input Dataset which is modified in place.
Returns:
None
Raises:
'''
logger.name = 'allfeatures_drop_cols'
data.drop(columns, axis=1, inplace=True)
logger.info('...Columns Excluded from dataset...')
return None
def oneHotDummies_column(column, categories):
'''Convert categorical variable into dummy/indicator variables.
Args:
column (Series): Input String Categorical Column.
Returns:
DataFrame. Integer Sparse binary matrix of categorical features.
Raises:
'''
logger.name = 'oneHotDummies_column: ' + column.name
cat_column = pd.Categorical(column.astype('str'), categories=categories)
cat_column = pd.get_dummies(cat_column) # in the same order as categories! (alphabetically ordered)
cat_column = cat_column.add_prefix(column.name + '_')
if (cat_column.isnull().any().any()):
null_cols = cat_column.columns[cat_column.isnull().any()]
print(cat_column[null_cols].isnull().sum())
print(cat_column[cat_column.isnull().any(axis=1)][null_cols].head(50))
return cat_column
def imputing_nan_values(nan_dict, distribution):
'''Replace nan values with a value according the nan_dict dictionary and distribution of this feature.
Args:
nan_dict (Dictionary): the key values are the name of features, the values could be a literal or
values belonging to the distribution.
distribution (DataFrame): Contains the median value for numerical features.
Returns:
new_dict (Dictionary): contains the values updated.
Raises:
'''
new_dict = {}
for k,v in nan_dict.items():
if v=='median':
new_dict[k] = float(distribution[k+'_MEDIAN'])
elif v=='mean':
new_dict[k] = float(distribution[k+'_MEAN'])
else:
new_dict[k] = v
return new_dict
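# Illustration (hypothetical distribution values): if float(distribution['LOANAGE_MEDIAN']) == 24.0,
# then imputing_nan_values({'LOANAGE': 'median', 'NUM_MODIF': 0}, distribution)
# resolves to {'LOANAGE': 24.0, 'NUM_MODIF': 0}.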
def drop_invalid_delinquency_status(data, gflag, log_file):
'''Delete all subsecuent records of a loan when the feature delinquency_status_next
contains any of the following invalid status: S,T,X or Z.
Args:
data (DataFrame): Input Dataset which is modified in place.
gflag (int): Loan_id of the last loan in previous data, in case this contains some invalid status,
to delete all records inside the current data.
log_file (Logger): An object of the log file.
Returns:
gflag (int): Loan_id of the last loan in current data, in case this contains some invalid status.
Raises:
'''
logger.name = 'drop_invalid_delinquency_status'
delinq_ids = data[data['MBA_DELINQUENCY_STATUS'].isin(['0', 'R', 'S', 'T', 'X', 'Z'])]['LOAN_ID']
groups = data[data['LOAN_ID'].isin(delinq_ids)][['LOAN_ID', 'PERIOD', 'MBA_DELINQUENCY_STATUS', 'DELINQUENCY_STATUS_NEXT']].groupby('LOAN_ID')
groups_list = list(groups)
iuw= pd.Index([])
if gflag != '':
try:
iuw= iuw.union(groups.get_group(gflag).index[0:])
except Exception as e:
print(str(e))
if data.iloc[-1]['LOAN_ID'] in groups.groups.keys():
gflag = data.iloc[-1]['LOAN_ID']
else:
gflag = ''
for k, group in groups_list:
li= group.index[(group['MBA_DELINQUENCY_STATUS'] =='S') | (group['MBA_DELINQUENCY_STATUS'] =='T')
| (group['MBA_DELINQUENCY_STATUS'] =='X') | (group['MBA_DELINQUENCY_STATUS'] =='Z')].tolist()
if li: iuw= iuw.union(group.index[group.index.get_loc(li[0]):])
        # In case of REO or Paid-Off, we need to exclude everything from the next record onward:
df_delinq_01 = group[(group['MBA_DELINQUENCY_STATUS'] =='0') | (group['MBA_DELINQUENCY_STATUS'] =='R')]
if df_delinq_01.shape[0]>0:
track_i = df_delinq_01.index[0]
iuw= iuw.union(group.index[group.index.get_loc(track_i)+1:])
if iuw!=[]:
log_file.write('drop_invalid_delinquency_status - Total rows: %d\r\n' % len(iuw)) # (log_df.shape[0])
data.drop(iuw, inplace=True)
logger.info('invalid_delinquency_status dropped')
return gflag
def zscore(x,mean,stdd):
return (x - mean) / stdd
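# Worked example: zscore(10, mean=4, stdd=2) == 3.0, i.e. the value sits three
# standard deviations above the mean.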
def zscore_apply(dist_file, data):
stddv_0 = []
nnorm_cols = []
for col_name in data.columns.values:
mean = pd.Series(dist_file.iloc[0, np.where(pd.DataFrame(dist_file.columns.values)[0].str.contains(col_name+'_MEAN'))[0]], dtype='float32')
stddev = dist_file.iloc[0, np.where(pd.DataFrame(dist_file.columns.values)[0].str.contains(col_name+'_STDD'))[0]]
if not mean.empty and not stddev.empty:
mean = np.float32(mean.values[0])
stddev = np.float32(stddev.values[0])
if stddev == 0:
stddv_0.append(col_name)
else:
data[col_name] = data[col_name].apply(lambda x: zscore(x, mean, stddev))
else:
nnorm_cols.append(col_name)
print('STANDARD DEV zero: ', stddv_0)
return data, nnorm_cols
def prepro_chunk(file_name, file_path, chunksize, label, log_file, nan_cols, categorical_cols, descriptive_cols, time_cols,
dist_file, with_index, refNorm, train_period, valid_period, test_period, robust_cols,
minmax_cols=None, hdf=None, tfrec=None, filtering_cols=None):
gflag = ''
i = 1
train_index = 0
valid_index = 0
test_index = 0
for chunk in pd.read_csv(file_path, chunksize = chunksize, sep=',', low_memory=False):
print('chunk: ', i, ' chunk size: ', chunk.shape[0])
log_file.write('chunk: %d, chunk size: %d \n' % (i, chunk.shape[0]))
chunk.columns = chunk.columns.str.upper()
log_df = chunk[chunk[label].isnull()]
log_file.write('Dropping Rows with Null Labels - Number of rows: %d\r\n' % (log_df.shape[0]))
chunk.drop(chunk.index[chunk[label].isnull()], axis=0, inplace=True)
log_df = chunk[chunk['INVALID_TRANSITIONS']==1]
log_file.write('Dropping Rows with Invalid Transitions - Number of rows: %d\r\n' % (log_df.shape[0]))
chunk.drop(chunk.index[chunk['INVALID_TRANSITIONS']==1], axis=0, inplace=True)
#print('chunk with missing MBA_DELINQUENCY_STATUS', chunk[(chunk['MBA_DELINQUENCY_STATUS']=='') | (chunk['MBA_DELINQUENCY_STATUS'].isna())])
chunk.drop(chunk.index[(chunk['MBA_DELINQUENCY_STATUS'].astype('str')=='')], axis=0, inplace=True) #| (chunk['MBA_DELINQUENCY_STATUS'].isna())
gflag = drop_invalid_delinquency_status(chunk, gflag, log_file)
null_columns=chunk.columns[chunk.isnull().any()]
log_df = chunk[chunk.isnull().any(axis=1)][null_columns]
log_file.write('Filling NULL values - (rows, cols) : %d, %d\r\n' % (log_df.shape[0], log_df.shape[1]))
log_df = chunk[null_columns].isnull().sum().to_frame().reset_index()
log_df.to_csv(log_file, index=False, mode='a')
nan_cols = imputing_nan_values(nan_cols, dist_file)
chunk.fillna(value=nan_cols, inplace=True)
chunk.drop_duplicates(inplace=True) # Follow this instruction!!
logger.info('dropping invalid transitions and delinquency status, fill nan values, drop duplicates')
log_file.write('Drop duplicates - new size : %d\r\n' % (chunk.shape[0]))
chunk.reset_index(drop=True, inplace=True) #don't remove this line! otherwise NaN values appears.
#chunk['ORIGINATION_YEAR'][chunk['ORIGINATION_YEAR']<1995] = "B1995"
#chunk['ORIGINATION_YEAR'][(chunk['ORIGINATION_YEAR']<>"B1995") & (chunk['ORIGINATION_YEAR']>2018)] = "nan"
chunk['ORIGINATION_YEAR'] = chunk['ORIGINATION_YEAR'].apply(lambda x: "B1995" if x<1995 else '' if (x>2018 or x is None) else x) #.isna()
for k,v in categorical_cols.items():
# if (chunk[k].dtype=='O'):
chunk[k] = chunk[k].astype('str')
chunk[k] = chunk[k].str.strip()
chunk[k].replace(['\.0$'], [''], regex=True, inplace=True)
new_cols = oneHotDummies_column(chunk[k], v)
if (chunk[k].value_counts().sum()!=new_cols.sum().sum()):
print('Error at categorization, different sizes', k)
print(chunk[k].value_counts(), new_cols.sum())
log_file.write('Error at categorization, different sizes %s\r\n' % str(k))
chunk[new_cols.columns] = new_cols
else:
chunk[new_cols.columns] = new_cols
log_file.write('New columns added: %s\r\n' % str(new_cols.columns.values))
allfeatures_drop_cols(chunk, descriptive_cols)
#np.savetxt(log_file, descriptive_cols, header='descriptive_cols dropped:', newline=" ")
log_file.write('descriptive_cols dropped: %s\r\n' % str(descriptive_cols))
allfeatures_drop_cols(chunk, time_cols)
#np.savetxt(log_file, time_cols, header='time_cols dropped:', newline=" ")
log_file.write('time_cols dropped: %s\r\n' % str(time_cols))
cat_list = list(categorical_cols.keys())
cat_list.remove('DELINQUENCY_STATUS_NEXT')
#np.savetxt(log_file, cat_list, header='categorical_cols dropped:', newline=" ")
log_file.write('categorical_cols dropped: %s\r\n' % str(cat_list))
allfeatures_drop_cols(chunk, cat_list)
chunk.reset_index(drop=True, inplace=True)
chunk.set_index(['DELINQUENCY_STATUS_NEXT', 'PERIOD'], append=False, inplace=True) #2 indexes
# np.savetxt(log_file, str(chunk.index.names), header='Indexes created:', newline=" ")
log_file.write('Indexes created: %s\r\n' % str(chunk.index.names))
if (filtering_cols!=None):
chunk = chunk[filtering_cols]
robust_cols = list(set(robust_cols).intersection(filtering_cols))
log_file.write('Columns Filtered: %s\r\n' % str(chunk.columns.values))
if chunk.isnull().any().any():
# from IPython.core.debugger import Tracer; Tracer()()
raise ValueError('There are null values...File: ' + file_name)
if (refNorm==True):
chunk[robust_cols], nnorm_cols = zscore_apply(dist_file, chunk[robust_cols]) #robust_normalizer.transform(chunk[robust_cols])
log_file.write('Columns not normalized: %s\r\n' % str(nnorm_cols))
log_file.write('Columns normalized: %s\r\n' % str(set(robust_cols)-set(nnorm_cols)))
if chunk.isnull().any().any(): raise ValueError('There are null values...File: ' + file_name)
chunk_periods = set(list(chunk.index.get_level_values('PERIOD')))
#print(tfrec)
if (tfrec!=None):
train_index = tag_chunk('train', label, chunk, chunk_periods, train_period, log_file, with_index, train_index, tfrec=tfrec[0])
valid_index = tag_chunk('valid', label, chunk, chunk_periods, valid_period, log_file, with_index, valid_index, tfrec=tfrec[1])
test_index = tag_chunk('test', label, chunk, chunk_periods, test_period, log_file, with_index, test_index, tfrec=tfrec[2])
sys.stdout.flush()
elif (hdf!=None):
train_index = tag_chunk('train', label, chunk, chunk_periods, train_period, log_file, with_index, train_index, hdf=hdf[0])
valid_index = tag_chunk('valid', label, chunk, chunk_periods, valid_period, log_file, with_index, valid_index, hdf=hdf[1])
test_index = tag_chunk('test', label, chunk, chunk_periods, test_period, log_file, with_index, test_index, hdf=hdf[2])
inter_periods = list(chunk_periods.intersection(set(range(test_period[1]+1,355))))
log_file.write('Periods greater than test_period: %s\r\n' % str(inter_periods))
p_chunk = chunk.loc[(slice(None), inter_periods), :]
log_file.write('Records greater than test_period - Number of rows: %d\r\n' % (p_chunk.shape[0]))
del chunk
i += 1
return train_index, valid_index, test_index
def custom_robust_normalizer(ncols, dist_file, normalizer_type='robust_scaler_sk', center_value='median'):
norm_cols = []
scales = []
centers = []
scales_0 =[]
for i, x in enumerate (ncols):
x_frame = dist_file.iloc[:, np.where(pd.DataFrame(dist_file.columns.values)[0].str.contains(x+'_Q'))[0]]
if not x_frame.empty and (x_frame.shape[1]>1):
iqr = float(pd.to_numeric(x_frame[x+'_Q3'], errors='coerce').subtract(pd.to_numeric(x_frame[x+'_Q1'], errors='coerce')))
if iqr == 0: scales_0.append(x)
if iqr!=0:
norm_cols.append(x)
scales.append(iqr)
if center_value == 'median':
centers.append( float(x_frame[x+'_MEDIAN']) )
else:
centers.append( float(x_frame[x+'_Q1']) )
if (normalizer_type == 'robust_scaler_sk'):
normalizer = RobustScaler()
normalizer.scale_ = scales
normalizer.center_ = centers
elif (normalizer_type == 'percentile_scaler'):
normalizer = Normalizer.Normalizer(scales, centers)
else: normalizer=None
print(scales_0)
return norm_cols, normalizer
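# Robust-scaling sketch (hypothetical quantiles): with Q1=2.0, MEDIAN=5.0 and Q3=10.0 the IQR
# is 8.0, so with center_value='median' a raw value of 13.0 maps to (13.0 - 5.0) / 8.0 = 1.0.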
def custom_minmax_normalizer(ncols, scales, dist_file):
norm_cols = []
minmax_scales = []
centers = []
for i, x in enumerate (ncols):
x_min = dist_file.iloc[0, np.where(pd.DataFrame(dist_file.columns.values)[0].str.contains(x+'_MIN'))[0]]
x_max = dist_file.iloc[0, np.where(pd.DataFrame(dist_file.columns.values)[0].str.contains(x+'_MAX'))[0]]
if not(x_min.empty) and not(x_max.empty):
x_min = np.float32(x_min.values[0])
x_max = np.float32(x_max.values[0])
minmax_scales.append(x_max - x_min)
centers.append(x_min)
norm_cols.append(x)
# to_delete.append(i)
normalizer = Normalizer.Normalizer(minmax_scales, centers)
return norm_cols, normalizer #, to_delete
def allfeatures_preprocessing(RAW_DIR, PRO_DIR, raw_dir, train_period, valid_period, test_period, dividing='percentage',
chunksize=500000, refNorm=True, with_index=True, output_hdf=True,
label='DELINQUENCY_STATUS_NEXT', filtering_cols=None):
descriptive_cols = [
'LOAN_ID',
'ASOFMONTH',
'PERIOD_NEXT',
'MOD_PER_FROM',
'MOD_PER_TO',
'PROPERTY_ZIP',
'INVALID_TRANSITIONS',
'CONSECUTIVE'
]
numeric_cols = ['MBA_DAYS_DELINQUENT', 'MBA_DAYS_DELINQUENT_NAN',
'CURRENT_INTEREST_RATE', 'CURRENT_INTEREST_RATE_NAN', 'LOANAGE', 'LOANAGE_NAN',
'CURRENT_BALANCE', 'CURRENT_BALANCE_NAN', 'SCHEDULED_PRINCIPAL',
'SCHEDULED_PRINCIPAL_NAN', 'SCHEDULED_MONTHLY_PANDI',
'SCHEDULED_MONTHLY_PANDI_NAN',
'LLMA2_CURRENT_INTEREST_SPREAD', 'LLMA2_CURRENT_INTEREST_SPREAD_NAN',
'LLMA2_C_IN_LAST_12_MONTHS',
'LLMA2_30_IN_LAST_12_MONTHS', 'LLMA2_60_IN_LAST_12_MONTHS',
'LLMA2_90_IN_LAST_12_MONTHS', 'LLMA2_FC_IN_LAST_12_MONTHS',
'LLMA2_REO_IN_LAST_12_MONTHS', 'LLMA2_0_IN_LAST_12_MONTHS',
'NUM_MODIF', 'NUM_MODIF_NAN', 'P_RATE_TO_MOD', 'P_RATE_TO_MOD_NAN', 'MOD_RATE',
'MOD_RATE_NAN', 'DIF_RATE', 'DIF_RATE_NAN', 'P_MONTHLY_PAY',
'P_MONTHLY_PAY_NAN', 'MOD_MONTHLY_PAY', 'MOD_MONTHLY_PAY_NAN',
'DIF_MONTHLY_PAY', 'DIF_MONTHLY_PAY_NAN', 'CAPITALIZATION_AMT',
'CAPITALIZATION_AMT_NAN', 'MORTGAGE_RATE', 'MORTGAGE_RATE_NAN',
'FICO_SCORE_ORIGINATION', 'INITIAL_INTEREST_RATE', 'ORIGINAL_LTV',
'ORIGINAL_BALANCE', 'BACKEND_RATIO', 'BACKEND_RATIO_NAN',
'ORIGINAL_TERM', 'ORIGINAL_TERM_NAN', 'SALE_PRICE', 'SALE_PRICE_NAN',
'PREPAY_PENALTY_TERM', 'PREPAY_PENALTY_TERM_NAN',
'NUMBER_OF_UNITS', 'NUMBER_OF_UNITS_NAN', 'MARGIN',
'MARGIN_NAN', 'PERIODIC_RATE_CAP', 'PERIODIC_RATE_CAP_NAN',
'PERIODIC_RATE_FLOOR', 'PERIODIC_RATE_FLOOR_NAN', 'LIFETIME_RATE_CAP',
'LIFETIME_RATE_CAP_NAN', 'LIFETIME_RATE_FLOOR',
'LIFETIME_RATE_FLOOR_NAN', 'RATE_RESET_FREQUENCY',
'RATE_RESET_FREQUENCY_NAN', 'PAY_RESET_FREQUENCY',
'PAY_RESET_FREQUENCY_NAN', 'FIRST_RATE_RESET_PERIOD',
'FIRST_RATE_RESET_PERIOD_NAN',
'LLMA2_ORIG_RATE_SPREAD', 'LLMA2_ORIG_RATE_SPREAD_NAN',
'AGI', 'AGI_NAN', 'UR', 'UR_NAN', 'COUNT_INT_RATE_LESS', 'LLMA2_ORIG_RATE_ORIG_MR_SPREAD',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD_NAN', 'NUM_PRIME_ZIP', 'NUM_PRIME_ZIP_NAN'
]
binary_cols = ['LLMA2_HIST_LAST_12_MONTHS_MIS', 'LLMA2_PRIME',
'LLMA2_SUBPRIME', 'LLMA2_APPVAL_LT_SALEPRICE']
'''
nan_cols = {'MBA_DAYS_DELINQUENT': 'median', 'CURRENT_INTEREST_RATE': 'median', 'LOANAGE': 'median',
'CURRENT_BALANCE' : 'median', 'SCHEDULED_PRINCIPAL': 'median', 'SCHEDULED_MONTHLY_PANDI': 'median',
'LLMA2_CURRENT_INTEREST_SPREAD': 'median', 'NUM_MODIF': 0, 'P_RATE_TO_MOD': 0, 'MOD_RATE': 0,
'DIF_RATE': 0, 'P_MONTHLY_PAY': 0, 'MOD_MONTHLY_PAY': 0, 'DIF_MONTHLY_PAY': 0, 'CAPITALIZATION_AMT': 0,
'MORTGAGE_RATE': 'median', 'FICO_SCORE_ORIGINATION': 'median', 'INITIAL_INTEREST_RATE': 'median', 'ORIGINAL_LTV': 'median',
'ORIGINAL_BALANCE': 'median', 'BACKEND_RATIO': 'median', 'ORIGINAL_TERM': 'median', 'SALE_PRICE': 'median', 'PREPAY_PENALTY_TERM': 'median',
'NUMBER_OF_UNITS': 'median', 'MARGIN': 'median', 'PERIODIC_RATE_CAP': 'median', 'PERIODIC_RATE_FLOOR': 'median', 'LIFETIME_RATE_CAP': 'median',
'LIFETIME_RATE_FLOOR': 'median', 'RATE_RESET_FREQUENCY': 'median', 'PAY_RESET_FREQUENCY': 'median',
'FIRST_RATE_RESET_PERIOD': 'median', 'LLMA2_ORIG_RATE_SPREAD': 'median', 'AGI': 'median', 'UR': 'median',
'LLMA2_C_IN_LAST_12_MONTHS': 'median', 'LLMA2_30_IN_LAST_12_MONTHS': 'median', 'LLMA2_60_IN_LAST_12_MONTHS': 'median',
'LLMA2_90_IN_LAST_12_MONTHS': 'median', 'LLMA2_FC_IN_LAST_12_MONTHS': 'median',
'LLMA2_REO_IN_LAST_12_MONTHS': 'median', 'LLMA2_0_IN_LAST_12_MONTHS': 'median',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD':0, 'NUM_PRIME_ZIP':'median'
}
'''
'''
set(nan_cols) - set(nan_cols_nonan)
Out[56]:
{'COUNT_INT_RATE_LESS', # never missed
'FICO_SCORE_ORIGINATION', # never missed
'INITIAL_INTEREST_RATE', # never missed
'LLMA2_0_IN_LAST_12_MONTHS', #In average, 14% of missing data!
'LLMA2_30_IN_LAST_12_MONTHS',
'LLMA2_60_IN_LAST_12_MONTHS',
'LLMA2_90_IN_LAST_12_MONTHS',
'LLMA2_C_IN_LAST_12_MONTHS',
'LLMA2_FC_IN_LAST_12_MONTHS',
'LLMA2_REO_IN_LAST_12_MONTHS',
'ORIGINAL_BALANCE', # never missed
'ORIGINAL_LTV'} # never missed
'''
nan_cols = {'MBA_DAYS_DELINQUENT': 'mean', 'CURRENT_INTEREST_RATE': 'mean', 'LOANAGE': 'mean',
'CURRENT_BALANCE' : 'mean', 'SCHEDULED_PRINCIPAL': 'mean', 'SCHEDULED_MONTHLY_PANDI': 'mean',
'LLMA2_CURRENT_INTEREST_SPREAD': 'mean', 'NUM_MODIF': 0, 'P_RATE_TO_MOD': 0, 'MOD_RATE': 0,
'DIF_RATE': 0, 'P_MONTHLY_PAY': 0, 'MOD_MONTHLY_PAY': 0, 'DIF_MONTHLY_PAY': 0, 'CAPITALIZATION_AMT': 0,
'MORTGAGE_RATE': 'mean', 'FICO_SCORE_ORIGINATION': 'mean', 'INITIAL_INTEREST_RATE': 'mean', 'ORIGINAL_LTV': 'mean',
'ORIGINAL_BALANCE': 'mean', 'BACKEND_RATIO': 'mean', 'ORIGINAL_TERM': 'mean', 'SALE_PRICE': 'mean', 'PREPAY_PENALTY_TERM': 'mean',
'NUMBER_OF_UNITS': 'mean', 'MARGIN': 'mean', 'PERIODIC_RATE_CAP': 'mean', 'PERIODIC_RATE_FLOOR': 'mean', 'LIFETIME_RATE_CAP': 'mean',
'LIFETIME_RATE_FLOOR': 'mean', 'RATE_RESET_FREQUENCY': 'mean', 'PAY_RESET_FREQUENCY': 'mean',
'FIRST_RATE_RESET_PERIOD': 'mean', 'LLMA2_ORIG_RATE_SPREAD': 'mean', 'AGI': 'mean', 'UR': 'mean',
'LLMA2_C_IN_LAST_12_MONTHS': 'mean', 'LLMA2_30_IN_LAST_12_MONTHS': 'mean', 'LLMA2_60_IN_LAST_12_MONTHS': 'mean',
'LLMA2_90_IN_LAST_12_MONTHS': 'mean', 'LLMA2_FC_IN_LAST_12_MONTHS': 'mean',
'LLMA2_REO_IN_LAST_12_MONTHS': 'mean', 'LLMA2_0_IN_LAST_12_MONTHS': 'mean',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD':0, 'COUNT_INT_RATE_LESS' :'median', 'NUM_PRIME_ZIP':'mean'
}
categorical_cols = {'MBA_DELINQUENCY_STATUS': ['0','3','6','9','C','F','R'], 'DELINQUENCY_STATUS_NEXT': ['0','3','6','9','C','F','R'], #,'S','T','X'
'BUYDOWN_FLAG': ['N','U','Y'], 'NEGATIVE_AMORTIZATION_FLAG': ['N','U','Y'], 'PREPAY_PENALTY_FLAG': ['N','U','Y'],
'OCCUPANCY_TYPE': ['1','2','3','U'], 'PRODUCT_TYPE': ['10','20','30','40','50','51','52','53','54','5A','5Z',
'60','61','62','63','6Z','70','80','81','82','83','84','8Z','U'],
'PROPERTY_TYPE': ['1','2','3','4','5','6','7','8','9','L','M','U','Z'], 'LOAN_PURPOSE_CATEGORY': ['P','R','U'],
'DOCUMENTATION_TYPE': ['1','2','3','U'], 'CHANNEL': ['1','2','3','4','5','6','7','8','9','A','B','C','D','U'],
'LOAN_TYPE': ['1','2','3','4','5','6','7','U'], 'IO_FLAG': ['N','U','Y'],
'CONVERTIBLE_FLAG': ['N','U','Y'], 'POOL_INSURANCE_FLAG': ['N','U','Y'], 'STATE': ['AK', 'AL', 'AR', 'AZ', 'CA', 'CO',
'CT', 'DC', 'DE', 'FL', 'GA', 'HI', 'IA', 'ID', 'IL', 'IN', 'KS', 'KY', 'LA', 'MA',
'MD', 'ME', 'MI', 'MN', 'MO', 'MS', 'MT', 'NC', 'ND', 'NE', 'NH', 'NJ', 'NM', 'NV',
'NY', 'OH', 'OK', 'OR', 'PA', 'PR', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VA', 'VT',
'WA', 'WI', 'WV', 'WY'],
'CURRENT_INVESTOR_CODE': ['240', '250', '253', 'U'], 'ORIGINATION_YEAR': ['B1995','1995','1996','1997','1998','1999','2000','2001','2002','2003',
'2004','2005','2006','2007','2008','2009','2010','2011','2012','2013','2014','2015','2016','2017','2018','nan']}
time_cols = ['YEAR', 'MONTH'] #, 'PERIOD'] #no nan values
total_cols = numeric_cols.copy()
total_cols.extend(descriptive_cols)
total_cols.extend(categorical_cols.keys())
total_cols.extend(time_cols)
print('total_cols size: ', len(total_cols)) #110 !=112?? set(chunk_cols) - set(total_cols): {'LOAN_ID', 'PERIOD'}
pd.set_option('io.hdf.default_format','table')
dist_file = pd.read_csv(os.path.join(RAW_DIR, "percentile features3-mean.csv"), sep=';', low_memory=False)
dist_file.columns = dist_file.columns.str.upper()
ncols = [x for x in numeric_cols if x.find('NAN')<0]
print(ncols)
#sum = 0
#for elem in categorical_cols.values():
# sum += len(elem)
#print('total categorical values: ', sum) #181
for file_path in glob.glob(os.path.join(RAW_DIR, raw_dir,"*.txt")):
file_name = os.path.basename(file_path)
if with_index==True:
target_path = os.path.join(PRO_DIR, raw_dir,file_name[:-4])
else:
target_path = os.path.join(PRO_DIR, raw_dir,file_name[:-4]+'_non_index')
log_file=open(target_path+'-log.txt', 'w+', 1)
print('Preprocessing File: ' + file_path)
log_file.write('Preprocessing File: %s\r\n' % file_path)
startTime = datetime.now()
if (output_hdf == True):
#with pd.HDFStore(target_path +'-pp.h5', complib='lzo', complevel=9) as hdf: #complib='lzo', complevel=9
train_writer = pd.HDFStore(target_path +'-train_.h5', complib='lzo', complevel=9)
valid_writer = pd.HDFStore(target_path +'-valid_.h5', complib='lzo', complevel=9)
test_writer = pd.HDFStore(target_path +'-test_.h5', complib='lzo', complevel=9)
print('generating: ', target_path +'-pp.h5')
train_index, valid_index, test_index = prepro_chunk(file_name, file_path, chunksize, label, log_file,
nan_cols, categorical_cols, descriptive_cols, time_cols,
dist_file, with_index,
refNorm, train_period, valid_period, test_period, ncols,
hdf=[train_writer, valid_writer, test_writer], tfrec=None,
filtering_cols=filtering_cols)
if train_writer.get_storer('train/features').nrows != train_writer.get_storer('train/labels').nrows:
raise ValueError('Train-DataSet: Sizes should match!')
if valid_writer.get_storer('valid/features').nrows != valid_writer.get_storer('valid/labels').nrows:
raise ValueError('Valid-DataSet: Sizes should match!')
if test_writer.get_storer('test/features').nrows != test_writer.get_storer('test/labels').nrows:
raise ValueError('Test-DataSet: Sizes should match!')
print('train/features size: ', train_writer.get_storer('train/features').nrows)
print('valid/features size: ', valid_writer.get_storer('valid/features').nrows)
print('test/features size: ', test_writer.get_storer('test/features').nrows)
log_file.write('***SUMMARY***\n')
log_file.write('train/features size: %d\r\n' %(train_writer.get_storer('train/features').nrows))
log_file.write('valid/features size: %d\r\n' %(valid_writer.get_storer('valid/features').nrows))
log_file.write('test/features size: %d\r\n' %(test_writer.get_storer('test/features').nrows))
logger.info('training, validation and testing set into .h5 file')
else:
train_writer = tf.python_io.TFRecordWriter(target_path +'-train_.tfrecords')
valid_writer = tf.python_io.TFRecordWriter(target_path +'-valid_.tfrecords')
test_writer = tf.python_io.TFRecordWriter(target_path +'-test_.tfrecords')
train_index, valid_index, test_index = prepro_chunk(file_name, file_path, chunksize, label, log_file,
nan_cols, categorical_cols, descriptive_cols, time_cols,
dist_file, with_index,
refNorm, train_period, valid_period, test_period, ncols,
hdf=None, tfrec=[train_writer, valid_writer, test_writer],
filtering_cols=filtering_cols)
print(train_index, valid_index, test_index)
train_writer.close()
valid_writer.close()
test_writer.close()
#def allfeatures_prepro_file(RAW_DIR, file_path, raw_dir, file_name, target_path, train_period, valid_period, test_period, log_file, dividing='percentage', chunksize=500000,
# refNorm=True, , with_index=True, output_hdf=True):
#allfeatures_prepro_file(RAW_DIR, file_path, raw_dir, file_name, target_path, train_num, valid_num, test_num, log_file, dividing=dividing, chunksize=chunksize,
# refNorm=refNorm, with_index=with_index, output_hdf=output_hdf)
startTime = datetime.now() - startTime
print('Preprocessing Time per file: ', startTime)
log_file.write('Preprocessing Time per file: %s\r\n' % str(startTime))
log_file.close()
def allclasses_Ncomp_71feat():
cols = ['PRODUCT_TYPE_20',
'IO_FLAG_U',
'NEGATIVE_AMORTIZATION_FLAG_N',
'LOAN_TYPE_1',
'NEGATIVE_AMORTIZATION_FLAG_U',
'IO_FLAG_N',
'CURRENT_INVESTOR_CODE_250',
'NEGATIVE_AMORTIZATION_FLAG_Y',
'LOAN_PURPOSE_CATEGORY_U',
'PREPAY_PENALTY_FLAG_U',
'LOAN_PURPOSE_CATEGORY_P',
'CHANNEL_D',
'CONVERTIBLE_FLAG_N',
'IO_FLAG_Y',
'CONVERTIBLE_FLAG_U',
'LOAN_PURPOSE_CATEGORY_R',
'ORIGINATION_YEAR_B1995',
'CHANNEL_U',
'POOL_INSURANCE_FLAG_U',
'CHANNEL_2',
'PREPAY_PENALTY_FLAG_Y',
'PROPERTY_TYPE_6',
'DOCUMENTATION_TYPE_U',
'PRODUCT_TYPE_10',
'CURRENT_INVESTOR_CODE_U',
'PERIODIC_RATE_FLOOR_NAN',
'PERIODIC_RATE_CAP_NAN',
'LIFETIME_RATE_FLOOR_NAN',
'PAY_RESET_FREQUENCY_NAN',
'CONVERTIBLE_FLAG_Y',
'DOCUMENTATION_TYPE_2',
'POOL_INSURANCE_FLAG_N',
'RATE_RESET_FREQUENCY_NAN',
'FIRST_RATE_RESET_PERIOD_NAN',
'PROPERTY_TYPE_2',
'CURRENT_INVESTOR_CODE_253',
'LOAN_TYPE_3',
'LIFETIME_RATE_CAP_NAN',
'PREPAY_PENALTY_FLAG_N',
'OCCUPANCY_TYPE_U',
'SCHEDULED_MONTHLY_PANDI_NAN',
'ORIGINATION_YEAR_2012',
'BUYDOWN_FLAG_N',
'ORIGINATION_YEAR_2008',
'BUYDOWN_FLAG_U',
'MARGIN',
'LOAN_TYPE_2',
'ORIGINATION_YEAR_2007',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD',
'AGI_NAN',
'ORIGINATION_YEAR_2006',
'DOCUMENTATION_TYPE_1',
'CHANNEL_1',
'ORIGINATION_YEAR_1999',
'CURRENT_INVESTOR_CODE_240',
'PROPERTY_TYPE_U',
'MARGIN_NAN',
'ORIGINATION_YEAR_2013',
'ORIGINATION_YEAR_2004',
'ORIGINATION_YEAR_1998',
'OCCUPANCY_TYPE_2',
'CHANNEL_3',
'LIFETIME_RATE_FLOOR',
'PROPERTY_TYPE_1',
'PERIODIC_RATE_CAP',
'ORIGINATION_YEAR_2005',
'PRODUCT_TYPE_82',
'LLMA2_HIST_LAST_12_MONTHS_MIS',
'LOANAGE',
'PROPERTY_TYPE_5',
'SCHEDULED_PRINCIPAL_NAN']
return cols
def perclass_Ncomp_71feat():
# 71 selected features from allcols(size=257) using a per-class dataset with n_components=None:
cols = [
'PRODUCT_TYPE_20',
'NEGATIVE_AMORTIZATION_FLAG_N',
'NEGATIVE_AMORTIZATION_FLAG_U',
'CONVERTIBLE_FLAG_N',
'CONVERTIBLE_FLAG_U',
'IO_FLAG_U',
'NEGATIVE_AMORTIZATION_FLAG_Y',
'LOAN_TYPE_1',
'CHANNEL_U',
'LOAN_PURPOSE_CATEGORY_U',
'PRODUCT_TYPE_10',
'BUYDOWN_FLAG_N',
'BUYDOWN_FLAG_U',
'DOCUMENTATION_TYPE_U',
'CHANNEL_2',
'LOAN_PURPOSE_CATEGORY_R',
'PREPAY_PENALTY_FLAG_Y',
'IO_FLAG_N',
'LOAN_PURPOSE_CATEGORY_P',
'CHANNEL_D',
'POOL_INSURANCE_FLAG_U',
'LOAN_TYPE_3',
'PREPAY_PENALTY_FLAG_U',
'PROPERTY_TYPE_6',
'LIFETIME_RATE_CAP_NAN',
'CURRENT_INVESTOR_CODE_253',
'POOL_INSURANCE_FLAG_N',
'CURRENT_INVESTOR_CODE_U',
'PERIODIC_RATE_FLOOR_NAN',
'OCCUPANCY_TYPE_U',
'IO_FLAG_Y',
'DOCUMENTATION_TYPE_2',
'LIFETIME_RATE_FLOOR_NAN',
'RATE_RESET_FREQUENCY_NAN',
'PERIODIC_RATE_CAP_NAN',
'PROPERTY_TYPE_2',
'OCCUPANCY_TYPE_3',
'PAY_RESET_FREQUENCY_NAN',
'PREPAY_PENALTY_FLAG_N',
'FIRST_RATE_RESET_PERIOD_NAN',
'CHANNEL_1',
'PROPERTY_TYPE_U',
'ORIGINATION_YEAR_2007',
'CURRENT_INVESTOR_CODE_240',
'CHANNEL_3',
'DOCUMENTATION_TYPE_1',
'ORIGINATION_YEAR_B1995',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD',
'ORIGINATION_YEAR_2008',
'PRODUCT_TYPE_80',
'CURRENT_INVESTOR_CODE_250',
'MARGIN_NAN',
'ORIGINATION_YEAR_2006',
'PERIODIC_RATE_CAP',
'ORIGINATION_YEAR_2005',
'SCHEDULED_MONTHLY_PANDI_NAN',
'ORIGINATION_YEAR_2003',
'ORIGINATION_YEAR_2000',
'ORIGINATION_YEAR_2004',
'PROPERTY_TYPE_1',
'LOAN_TYPE_2',
'SCHEDULED_PRINCIPAL_NAN',
'BUYDOWN_FLAG_Y',
'CONVERTIBLE_FLAG_Y',
'STATE_CA',
'PERIODIC_RATE_FLOOR',
'AGI_NAN',
'OCCUPANCY_TYPE_1',
'PRODUCT_TYPE_82',
'LIFETIME_RATE_FLOOR',
'MARGIN']
return cols
def filtering_allfeatures(cols):
allcols = cols + ['DELINQUENCY_STATUS_NEXT_0', 'DELINQUENCY_STATUS_NEXT_3',
'DELINQUENCY_STATUS_NEXT_6', 'DELINQUENCY_STATUS_NEXT_9',
'DELINQUENCY_STATUS_NEXT_C', 'DELINQUENCY_STATUS_NEXT_F',
'DELINQUENCY_STATUS_NEXT_R']
return allcols
def allclass_Ncomp_26numfeat():
# 26 selected features from numerical_cols(size=50) using the whole dataset with n_components=None:
cols = ['LOANAGE',
'COUNT_INT_RATE_LESS',
'MORTGAGE_RATE',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD',
'LLMA2_HIST_LAST_12_MONTHS_MIS',
'ORIGINAL_LTV',
'ORIGINAL_BALANCE',
'UR',
'INITIAL_INTEREST_RATE',
'CURRENT_BALANCE',
'ORIGINAL_TERM',
'LLMA2_PRIME',
'MARGIN',
'LLMA2_90_IN_LAST_12_MONTHS',
'LLMA2_ORIG_RATE_SPREAD',
'LLMA2_30_IN_LAST_12_MONTHS',
'LLMA2_SUBPRIME',
'NUM_PRIME_ZIP',
'LLMA2_FC_IN_LAST_12_MONTHS',
'LLMA2_CURRENT_INTEREST_SPREAD',
'AGI',
'MBA_DAYS_DELINQUENT',
'LLMA2_C_IN_LAST_12_MONTHS',
'CURRENT_INTEREST_RATE',
'LIFETIME_RATE_FLOOR',
'LLMA2_60_IN_LAST_12_MONTHS']
return cols
def perclass_Ncomp_26numfeat():
# 26 selected features from numerical_cols(size=50) using a per-class dataset with n_components=None:
cols = ['LOANAGE',
'MARGIN',
'MORTGAGE_RATE',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD',
'LLMA2_HIST_LAST_12_MONTHS_MIS',
'COUNT_INT_RATE_LESS',
'LIFETIME_RATE_FLOOR',
'INITIAL_INTEREST_RATE',
'LIFETIME_RATE_CAP',
'LLMA2_PRIME',
'LLMA2_ORIG_RATE_SPREAD',
'ORIGINAL_BALANCE',
'CURRENT_BALANCE',
'UR',
'LLMA2_SUBPRIME',
'MOD_RATE',
'LLMA2_CURRENT_INTEREST_SPREAD',
'RATE_RESET_FREQUENCY',
'CURRENT_INTEREST_RATE',
'PAY_RESET_FREQUENCY',
'DIF_RATE',
'NUM_MODIF',
'AGI',
'PERIODIC_RATE_FLOOR',
'LLMA2_30_IN_LAST_12_MONTHS',
'LLMA2_C_IN_LAST_12_MONTHS']
return cols
def filtering_num_features(ncols):
all_nan_cols = ['MBA_DAYS_DELINQUENT_NAN',
'CURRENT_INTEREST_RATE_NAN',
'LOANAGE_NAN',
'CURRENT_BALANCE_NAN',
'SCHEDULED_PRINCIPAL_NAN',
'SCHEDULED_MONTHLY_PANDI_NAN',
'LLMA2_CURRENT_INTEREST_SPREAD_NAN',
'NUM_MODIF_NAN',
'P_RATE_TO_MOD_NAN',
'MOD_RATE_NAN',
'DIF_RATE_NAN',
'P_MONTHLY_PAY_NAN',
'MOD_MONTHLY_PAY_NAN',
'DIF_MONTHLY_PAY_NAN',
'CAPITALIZATION_AMT_NAN',
'MORTGAGE_RATE_NAN',
'BACKEND_RATIO_NAN',
'ORIGINAL_TERM_NAN',
'SALE_PRICE_NAN',
'PREPAY_PENALTY_TERM_NAN',
'NUMBER_OF_UNITS_NAN',
'MARGIN_NAN',
'PERIODIC_RATE_CAP_NAN',
'PERIODIC_RATE_FLOOR_NAN',
'LIFETIME_RATE_CAP_NAN',
'LIFETIME_RATE_FLOOR_NAN',
'RATE_RESET_FREQUENCY_NAN',
'PAY_RESET_FREQUENCY_NAN',
'FIRST_RATE_RESET_PERIOD_NAN',
'LLMA2_ORIG_RATE_SPREAD_NAN',
'AGI_NAN',
'UR_NAN',
'LLMA2_ORIG_RATE_ORIG_MR_SPREAD_NAN',
'NUM_PRIME_ZIP_NAN']
sel_nan_cols = [x for x in all_nan_cols for y in ncols if x.find(y)==0]
cat_cols = ['MBA_DELINQUENCY_STATUS_0', 'MBA_DELINQUENCY_STATUS_3',
'MBA_DELINQUENCY_STATUS_6', 'MBA_DELINQUENCY_STATUS_9',
'MBA_DELINQUENCY_STATUS_C', 'MBA_DELINQUENCY_STATUS_F', 'MBA_DELINQUENCY_STATUS_R'] + \
['BUYDOWN_FLAG_N', 'BUYDOWN_FLAG_U', 'BUYDOWN_FLAG_Y'] + \
['NEGATIVE_AMORTIZATION_FLAG_N', 'NEGATIVE_AMORTIZATION_FLAG_U', 'NEGATIVE_AMORTIZATION_FLAG_Y'] +\
['PREPAY_PENALTY_FLAG_N', 'PREPAY_PENALTY_FLAG_U', 'PREPAY_PENALTY_FLAG_Y'] +\
['OCCUPANCY_TYPE_1', 'OCCUPANCY_TYPE_2', 'OCCUPANCY_TYPE_3', 'OCCUPANCY_TYPE_U'] +\
['PRODUCT_TYPE_10', 'PRODUCT_TYPE_20', 'PRODUCT_TYPE_30', 'PRODUCT_TYPE_40',
'PRODUCT_TYPE_50', 'PRODUCT_TYPE_51', 'PRODUCT_TYPE_52', 'PRODUCT_TYPE_53',
'PRODUCT_TYPE_54', 'PRODUCT_TYPE_5A', 'PRODUCT_TYPE_5Z', 'PRODUCT_TYPE_60',
'PRODUCT_TYPE_61', 'PRODUCT_TYPE_62', 'PRODUCT_TYPE_63', 'PRODUCT_TYPE_6Z',
'PRODUCT_TYPE_70', 'PRODUCT_TYPE_80', 'PRODUCT_TYPE_81', 'PRODUCT_TYPE_82',
'PRODUCT_TYPE_83', 'PRODUCT_TYPE_84', 'PRODUCT_TYPE_8Z', 'PRODUCT_TYPE_U'] +\
['PROPERTY_TYPE_1', 'PROPERTY_TYPE_2', 'PROPERTY_TYPE_3', 'PROPERTY_TYPE_4',
'PROPERTY_TYPE_5', 'PROPERTY_TYPE_6', 'PROPERTY_TYPE_7', 'PROPERTY_TYPE_8',
'PROPERTY_TYPE_9', 'PROPERTY_TYPE_M', 'PROPERTY_TYPE_U', 'PROPERTY_TYPE_Z'] +\
['LOAN_PURPOSE_CATEGORY_P', 'LOAN_PURPOSE_CATEGORY_R', 'LOAN_PURPOSE_CATEGORY_U'] +\
['DOCUMENTATION_TYPE_1', 'DOCUMENTATION_TYPE_2', 'DOCUMENTATION_TYPE_3', 'DOCUMENTATION_TYPE_U'] +\
['CHANNEL_1', 'CHANNEL_2', 'CHANNEL_3', 'CHANNEL_4', 'CHANNEL_5', 'CHANNEL_6',
'CHANNEL_7', 'CHANNEL_8', 'CHANNEL_9', 'CHANNEL_A', 'CHANNEL_B', 'CHANNEL_C',
'CHANNEL_D', 'CHANNEL_U'] +\
['LOAN_TYPE_1', 'LOAN_TYPE_2', 'LOAN_TYPE_3', 'LOAN_TYPE_4', 'LOAN_TYPE_5', 'LOAN_TYPE_6', 'LOAN_TYPE_U'] +\
['IO_FLAG_N', 'IO_FLAG_U', 'IO_FLAG_Y'] +\
['CONVERTIBLE_FLAG_N', 'CONVERTIBLE_FLAG_U', 'CONVERTIBLE_FLAG_Y'] +\
['POOL_INSURANCE_FLAG_N', 'POOL_INSURANCE_FLAG_U', 'POOL_INSURANCE_FLAG_Y'] +\
['STATE_AK', 'STATE_AL', 'STATE_AR', 'STATE_AZ', 'STATE_CA', 'STATE_CO',
'STATE_CT', 'STATE_DC', 'STATE_DE', 'STATE_FL', 'STATE_GA', 'STATE_HI',
'STATE_IA', 'STATE_ID', 'STATE_IL', 'STATE_IN', 'STATE_KS', 'STATE_KY',
'STATE_LA', 'STATE_MA', 'STATE_MD', 'STATE_ME', 'STATE_MI', 'STATE_MN',
'STATE_MO', 'STATE_MS', 'STATE_MT', 'STATE_NC', 'STATE_ND', 'STATE_NE',
'STATE_NH', 'STATE_NJ', 'STATE_NM', 'STATE_NV', 'STATE_NY', 'STATE_OH',
'STATE_OK', 'STATE_OR', 'STATE_PA', 'STATE_PR', 'STATE_RI', 'STATE_SC',
'STATE_SD', 'STATE_TN', 'STATE_TX', 'STATE_UT', 'STATE_VA', 'STATE_VT',
'STATE_WA', 'STATE_WI', 'STATE_WV', 'STATE_WY'] +\
['CURRENT_INVESTOR_CODE_240', 'CURRENT_INVESTOR_CODE_250', 'CURRENT_INVESTOR_CODE_253', 'CURRENT_INVESTOR_CODE_U'] +\
['ORIGINATION_YEAR_B1995', 'ORIGINATION_YEAR_1995', 'ORIGINATION_YEAR_1996',
'ORIGINATION_YEAR_1997', 'ORIGINATION_YEAR_1998', 'ORIGINATION_YEAR_1999',
'ORIGINATION_YEAR_2000', 'ORIGINATION_YEAR_2001', 'ORIGINATION_YEAR_2002',
'ORIGINATION_YEAR_2003', 'ORIGINATION_YEAR_2004', 'ORIGINATION_YEAR_2005',
'ORIGINATION_YEAR_2006', 'ORIGINATION_YEAR_2007', 'ORIGINATION_YEAR_2008',
'ORIGINATION_YEAR_2009', 'ORIGINATION_YEAR_2010', 'ORIGINATION_YEAR_2011',
'ORIGINATION_YEAR_2012', 'ORIGINATION_YEAR_2013', 'ORIGINATION_YEAR_2014',
'ORIGINATION_YEAR_2015', 'ORIGINATION_YEAR_2016', 'ORIGINATION_YEAR_2017',
'ORIGINATION_YEAR_2018']
lab_cols = ['DELINQUENCY_STATUS_NEXT_0', 'DELINQUENCY_STATUS_NEXT_3',
'DELINQUENCY_STATUS_NEXT_6', 'DELINQUENCY_STATUS_NEXT_9',
'DELINQUENCY_STATUS_NEXT_C', 'DELINQUENCY_STATUS_NEXT_F',
'DELINQUENCY_STATUS_NEXT_R']
allcols = ncols + sel_nan_cols + cat_cols + lab_cols
return allcols
startTime = datetime.now()
if not os.path.exists(os.path.join(PRO_DIR, FLAGS.prepro_dir)): #os.path.exists
os.makedirs(os.path.join(PRO_DIR, FLAGS.prepro_dir))
#filtering_num_features(allclass_Ncomp_26numfeat())
allcols = None #filtering_num_features(allclass_Ncomp_26numfeat()) # filtering_allfeatures(allclasses_Ncomp_71feat()) # filtering_allfeatures(perclass_Ncomp_71feat()), filtering_num_features(perclass_Ncomp_26numfeat())
allfeatures_preprocessing(RAW_DIR, PRO_DIR, FLAGS.prepro_dir, FLAGS.train_period, FLAGS.valid_period, FLAGS.test_period, dividing='percentage',
chunksize=FLAGS.prepro_chunksize, refNorm=FLAGS.ref_norm, with_index=FLAGS.prepro_with_index, output_hdf=True, filtering_cols=allcols)
print('Preprocessing - Time: ', datetime.now() - startTime)
###Output
total_cols size: 107
['MBA_DAYS_DELINQUENT', 'CURRENT_INTEREST_RATE', 'LOANAGE', 'CURRENT_BALANCE', 'SCHEDULED_PRINCIPAL', 'SCHEDULED_MONTHLY_PANDI', 'LLMA2_CURRENT_INTEREST_SPREAD', 'LLMA2_C_IN_LAST_12_MONTHS', 'LLMA2_30_IN_LAST_12_MONTHS', 'LLMA2_60_IN_LAST_12_MONTHS', 'LLMA2_90_IN_LAST_12_MONTHS', 'LLMA2_FC_IN_LAST_12_MONTHS', 'LLMA2_REO_IN_LAST_12_MONTHS', 'LLMA2_0_IN_LAST_12_MONTHS', 'NUM_MODIF', 'P_RATE_TO_MOD', 'MOD_RATE', 'DIF_RATE', 'P_MONTHLY_PAY', 'MOD_MONTHLY_PAY', 'DIF_MONTHLY_PAY', 'CAPITALIZATION_AMT', 'MORTGAGE_RATE', 'FICO_SCORE_ORIGINATION', 'INITIAL_INTEREST_RATE', 'ORIGINAL_LTV', 'ORIGINAL_BALANCE', 'BACKEND_RATIO', 'ORIGINAL_TERM', 'SALE_PRICE', 'PREPAY_PENALTY_TERM', 'NUMBER_OF_UNITS', 'MARGIN', 'PERIODIC_RATE_CAP', 'PERIODIC_RATE_FLOOR', 'LIFETIME_RATE_CAP', 'LIFETIME_RATE_FLOOR', 'RATE_RESET_FREQUENCY', 'PAY_RESET_FREQUENCY', 'FIRST_RATE_RESET_PERIOD', 'LLMA2_ORIG_RATE_SPREAD', 'AGI', 'UR', 'LLMA2_ORIG_RATE_ORIG_MR_SPREAD', 'NUM_PRIME_ZIP']
Preprocessing File: /home/ubuntu/MLMortgage/data/raw/chuncks_random_c1mill/temporalloandynmodifmrstaticitur_CTrans_CLab_100th.txt
generating: /home/ubuntu/MLMortgage/data/processed/chuncks_random_c1mill/temporalloandynmodifmrstaticitur_CTrans_CLab_100th-pp.h5
chunk: 1 chunk size: 100000
|
predictor.ipynb | ###Markdown
Predictor
###Code
# user input
user_input = "text, Relaxed, Violet, Aroused, Creative, Happy, Energetic, Flowery, Diesel"
#predict function w/ user input
def predict_effects(user_input):
import basilica
import numpy as np
import pandas as pd
from scipy import spatial
# get data
!wget
# turn data into dataframe
df = pd.read_csv('med1.csv')
# get pickled trained embeddings for med cultivars
!wget https://github.com/MedCab-1/Data-Science/blob/master/medembedv2.pkl
#unpickling file of embedded cultivar descriptions
unpickled_df_test = pd.read_pickle("./medembedv2.pkl")
# Part 1
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
        # wrap the user text in a single-element list for the embedding call
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
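    # Cosine-similarity sketch: scipy's spatial.distance.cosine(a, b) returns 1 - cos(theta),
    # so `1 - spatial.distance.cosine(...)` above recovers cos(theta). For illustrative vectors
    # a = [1, 0] and b = [1, 1] (not real embeddings) the score would be 1/sqrt(2), about 0.707.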
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
    # Part 4 - keep the five highest-scoring cultivars
    df_big_json = df.sort_values(by='score', ascending=False)
    df_big_json = df_big_json[:5]
df_big_json = df_big_json.to_json(orient='columns')
# Part 5: outputs as JSON object
return df_big_json
predict_effects(user_input)
'''
For Flask App:
def input2output(q, model):
probs = model.predict_proba([q])[0]
matches = []
for i in range(len(probs)):
if probs[i] > 0.0:
matches.append((i, probs[i]))
matches.sort(key=lambda x:x[1], reverse=True)
idxs = [x[0] for x in matches]
return idxs
'''
###Output
_____no_output_____
###Markdown
Create the Dataset class The data is pickled, which means the objects are converted into a byte stream. We will unpickle the object to get back the original data (https://www.cs.toronto.edu/~kriz/cifar.html). Below is the Dataset class, which can be used with torch.utils.data.DataLoader
###Code
# Imports and device setup used by the cells below
import pickle
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class cifarDataset(Dataset):
def __init__(self, filePath, transform=None):
self.images, self.labels = self.__loadImages__(filePath)
self.transform = transform
def __loadImages__(self, filePath):
object = self.__unpickle__(filePath) #Extract our dataset
X = object[b'data']
X = X.reshape(len(object[b'data']),3,32,32) #Reshape to Color and the corresponding XY coordinates
l = object[b'labels']
return(X,l)
def __len__(self):
return len(self.images)
def __getitem__(self, idx):
image = self.images[idx]
#print("Before permute", image.shape)
image = np.transpose(image, (1,2,0)) #Permute because transforms.ToTensor converts HWC to CHW
#print("After permute", image.shape )
image = transforms.ToTensor()(image)
#print("ToTensor", image.shape)
#print("Before", image)
image = transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))(image) #Normalize our image
#print("After", image)
sample = {'image':image, 'label':self.labels[idx]}
return(sample)
def __unpickle__(self, file):
with open(file, 'rb') as fo:
dict = pickle.load(fo, encoding='bytes')
return dict
def showImage(img, label='Not labeled'):
img = img / 2 + 0.5 # unnormalize
img = img.permute(1,2,0)
plt.imshow(img)
plt.xlabel(label)
plt.show()
def getLabel(number):
names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
return(names[number])
###Output
_____no_output_____
###Markdown
Load the dataset
###Code
batch1 = cifarDataset(filePath='data/cifar-10-batches-py/data_batch_1')
batch2 = cifarDataset(filePath='data/cifar-10-batches-py/data_batch_2')
batch3 = cifarDataset(filePath='data/cifar-10-batches-py/data_batch_3')
batch4 = cifarDataset(filePath='data/cifar-10-batches-py/data_batch_4')
batch5 = cifarDataset(filePath='data/cifar-10-batches-py/data_batch_5')
#Concatenate our training dataset
batches = torch.utils.data.ConcatDataset([batch1,batch2, batch3, batch4, batch5])
#Use the dataLoader to extract images from our dataset
trainloader = DataLoader(batches, batch_size=5, shuffle=True, num_workers=4)
testBatch = cifarDataset(filePath='data/cifar-10-batches-py/test_batch')
#Create the dataLoader for our test set
testloader = DataLoader(testBatch, batch_size=1, shuffle=True, num_workers=4)
#Try using the dataloader to print one image
for i_batch, sample_batched in enumerate(trainloader):
print("Batch information: ", i_batch, sample_batched['image'].size(), sample_batched['label'])
showImage(sample_batched['image'][0],getLabel(sample_batched['label'][0]))
break
###Output
Batch information: 0 torch.Size([5, 3, 32, 32]) tensor([ 1, 3, 4, 6, 3])
###Markdown
Create a CNN model
###Code
#Define the neural net.
class CNN(nn.Module):
def __init__(self):
#Define the network
super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 8, 5) #We have 3 channels. Output 8 feature maps with a 5x5 kernel
self.pool = nn.MaxPool2d(2,2)
self.conv2 = nn.Conv2d(8, 20, 5)
self.fc1 = nn.Linear(20 * 5 * 5, 150)
self.fc2 = nn.Linear(150, 50)
self.fc3 = nn.Linear(50, 10)
# Adding a layer for LogSoftmax to obtain log probabilities
# As recommended in documentation for Negative log likelihood loss https://pytorch.org/docs/stable/nn.html#nllloss
self.logsoftmax = nn.LogSoftmax(dim=1)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 20 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.logsoftmax(self.fc3(x))
return(x)
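# Shape trace for a 3x32x32 input: conv1 (5x5) -> 8x28x28, pool -> 8x14x14,
# conv2 (5x5) -> 20x10x10, pool -> 20x5x5, which is why x is flattened to 20 * 5 * 5 above.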
###Output
_____no_output_____
###Markdown
Train the network
###Code
#Initialize the neural net
classifier = CNN()
classifier.to(device)
print(classifier)
#Create the optim
optimizer = optim.SGD(classifier.parameters(), lr=0.005, momentum=0.8)
LossCount = []
for epoch in range(10):
#Load batches from our trainLoader
LossAggregate = 0
for i_batch, sample_batched in enumerate(trainloader):
#Get our data
image = sample_batched['image']
label = sample_batched['label']
image, label = image.to(device), label.to(device)
# zero the gradient of optimizer
optimizer.zero_grad()
# forward pass
output = classifier(image)
# Use Negative Log Likelihood Loss (pairs with the LogSoftmax output layer)
loss = nn.NLLLoss()(output, label)
#Record stats for every 100. Print average loss.
LossAggregate += loss.item()
if i_batch % 5000 == 4999: # print every 5000 batches (25000 images)
print('Epoch: %d. Minibatch %d loss %.3f' % (epoch + 1, i_batch+1, LossAggregate / 5000))
LossCount.append(LossAggregate/5000)
LossAggregate = 0.0
#Propagate our losses
loss.backward()
optimizer.step()
#Plot the loss curve
plt.plot(LossCount)
plt.title('Loss over full set')
plt.ylabel('Loss')
plt.xlabel('Logging step (every 5000 minibatches)')
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the accuracy
###Code
correct = 0
total = 0
with torch.no_grad():
for i_batch, sample_batched in enumerate(testloader):
#Get our data
image = sample_batched['image']
label = sample_batched['label']
image, label = image.to(device), label.to(device)
#Forward pass
output = classifier(image)
value = torch.max(output.data,1)[1]
total += 1
if value == label:
correct += 1
print('Accuracy on our 10000 test set is %d percent' % (100 * correct/total))
###Output
Accuracy on our 10000 test set is 62 percent
###Markdown
Importing Modules
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reading the data file
###Code
data = pd.read_csv("diabetes.csv")
###Output
_____no_output_____
###Markdown
Making a heatmap for better analysis of the conditions in diabetic patients
###Code
import seaborn as sns
import matplotlib.pyplot as plt
corrmat = data.corr()
top_corr_features = corrmat.index
plt.figure(figsize=(20,20))
#plot heat map
g=sns.heatmap(data[top_corr_features].corr(),annot=True,cmap="RdYlGn")
###Output
_____no_output_____
###Markdown
Defining the feature and target columns
###Code
from sklearn.model_selection import train_test_split
feature = ['Pregnancies', 'Glucose', 'BloodPressure','SkinThickness','Insulin','BMI','DiabetesPedigreeFunction','Age']
predicted = ['Outcome']
###Output
_____no_output_____
###Markdown
Splitting the data into features and target
###Code
X = data[feature].values
y = data[predicted].values
###Output
_____no_output_____
###Markdown
Creating a sample input for testing the RandomForest classifier.
###Code
X_pred = [[1,103,30,38,83,43.3,0.183,33]]
X_pred = pd.DataFrame(X_pred, columns=['Pregnancies', 'Glucose', 'BloodPressure','SkinThickness','Insulin','BMI','DiabetesPedigreeFunction','Age'])
###Output
_____no_output_____
###Markdown
Importing RandomForest
###Code
import sklearn
model=RandomForestClassifier(n_estimators=100, n_jobs=-1)
model.fit(X,y)
prediction = model.predict(X_pred)
#X_pred has no ground-truth label, so score the model on the training data instead
acc = metrics.accuracy_score(y, model.predict(X))
###Output
C:\Users\user\AppData\Local\Temp/ipykernel_21212/2030810289.py:3: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples,), for example using ravel().
model.fit(X,y)
C:\Users\user\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\sklearn\base.py:443: UserWarning: X has feature names, but RandomForestClassifier was fitted without feature names
warnings.warn(
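###Markdown
 The accuracy computed above is taken on the training data, which is optimistic. The cell below is a sketch (using the already imported `train_test_split`) of a more informative hold-out evaluation; the split ratio and random state are arbitrary choices.
###Code
#Hold-out evaluation sketch: train on 80% of the data and score on the remaining 20%
X_train, X_test, y_train, y_test = train_test_split(X, y.ravel(), test_size=0.2, random_state=42)
holdout_model = RandomForestClassifier(n_estimators=100, n_jobs=-1)
holdout_model.fit(X_train, y_train)
print("Hold-out accuracy:", metrics.accuracy_score(y_test, holdout_model.predict(X_test)))
###Output
_____no_output_____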
###Markdown
Saving the model using pickle
###Code
import pickle
#setting savename
savename = "model.sav"
#dumping model into the file
pickle.dump(model, open(savename, "wb"))
###Output
_____no_output_____
###Markdown
Testing if the model is loading properly
###Code
load_model = pickle.load(open(savename, "rb"))
single = load_model.predict(X_pred)[0]
probability = load_model.predict_proba(X_pred)[:,1][0]*100
if single==1:
output = "The patient is diagnosed with Diabetes"
output1 = "Confidence: {}".format(probability)
else:
output = "The patient is not diagnosed with Diabetes"
output1 = ""
print(output)
print(output1)
###Output
The patient is not diagnosed with Diabetes
###Markdown
Oscillator Drift Prediction over Time
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import ipywidgets as widgets
from IPython.display import display
###Output
_____no_output_____
###Markdown
Path to current folder
###Code
path = "C:\\Users\\Ryan\\code\\freq-vs-age-prediction\\images"
###Output
_____no_output_____
###Markdown
Variable Declarations
###Code
f = 0 # Crystal oscillator frequency
t = 0 # Time
t1 = 0 # Cooking/pre-aging period
t2 = 0 # Operating period
f1 = 0 # Corresponding frequency
K = 0 # Aging slope
###Output
_____no_output_____
###Markdown
Import and Preview Drift Data
###Code
df = pd.read_csv('XTALTQ_BT0f03_Aging_Data.csv')
df = df.set_index('Day')
plt.figure(figsize=(9, 5))
plt.plot(df)
plt.title("Tolerance vs Time")
plt.ylabel("Tolerance (ppm)")
plt.xlabel("Time (days)")
plt.grid(True)
plt.legend(df.columns)
# plt.savefig(f"{path}\\Tolerance-vs-Vc.png")
plt.show()
df.describe()
###Output
_____no_output_____
###Markdown
Aging Prediction Calculations. References: [Correlation of predicted and real aging behaviour of crystal oscillators using different fitting algorithms](https://www.qsl.net/dk1ag/aging_e.pdf) and [Oscillator Aging by Isotemp](https://www.isotemp.com/wp-content/uploads/2011/04/Crystal-Oscillator-Aging.pdf). $K = \frac{f(t_2) - f(t_1)}{\ln(t_2 / t_1)}$ and $f(t) = K \ln\left(\frac{t}{t_1}\right) + f_1$
###Code
t1 = 15
t2 = 400
t = np.arange(1,500)
f_prediction = pd.DataFrame({})
choice_parts = [2,3,4]
K = ( df.loc[t2] - df.loc[t1] ) / np.log( t2 / t1 )
# print(K)
# print(K[choice_parts[0]])
# print(K[choice_parts[1]])
# print(K[choice_parts[2]])
f_prediction['Unit#1 Prediction'] = K[choice_parts[0]] * np.log( (t / t1) + 1 )
f_prediction['Unit#2 Prediction'] = K[choice_parts[1]] * np.log( (t / t1) + 1 )
f_prediction['Unit#3 Prediction'] = K[choice_parts[2]] * np.log( (t / t1) + 1 )
df_normal = df - df.loc[1]
df_normal.iloc[:, choice_parts].plot(figsize=(9, 5))
plt.gca().set_prop_cycle(None)
plt.plot(f_prediction, '--')
plt.title("Tolerance vs Time")
plt.ylabel("Tolerance (ppm)")
plt.xlabel("Time (days)")
plt.grid(True)
plt.xscale('log')
# plt.legend(['Unit#1', 'Unit#2', 'Unit#3', 'Unit#1 Prediction', 'Unit#2 Prediction', 'Unit#3 Prediction'])
# plt.savefig(f"{path}\\{t.min()}_{t.max()}.png")
plt.show()
###Output
_____no_output_____
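###Markdown
 As a toy illustration of the aging-slope formula (the numbers below are made up for the example, not taken from the dataset): if the tolerance were 0.10 ppm at day 15 and 0.43 ppm at day 400, the slope K would be computed as follows.
###Code
#Toy numeric check of K = (f(t2) - f(t1)) / ln(t2 / t1), with made-up tolerances
K_example = (0.43 - 0.10) / np.log(400 / 15)
print(round(K_example, 4)) #aging slope in ppm per unit of log-time
###Output
_____no_output_____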
###Markdown
Aging Prediction Over time intervals
###Code
import os.path
for i in range(1,10):
interval_start = 1+50*i
interval_end = 50+50*i
plt.figure(figsize=(9, 5))
plt.plot(df_normal[['Unit#1', 'Unit#2', 'Unit#3']].iloc[interval_start:interval_end])
plt.gca().set_prop_cycle(None)
plt.plot(f_prediction.iloc[interval_start:interval_end], '--')
plt.title("Tolerance vs Time")
plt.ylabel("Tolerance (ppm)")
plt.xlabel("Time (days)")
plt.ylim((df_normal['Unit#1'][interval_start:interval_end].min()-.05, df_normal['Unit#1'][interval_start:interval_end].max()+0.05))
plt.grid(True)
# plt.xscale('log')
plt.legend(['Unit#1', 'Unit#2', 'Unit#3', 'Unit#1 Prediction', 'Unit#2 Prediction', 'Unit#3 Prediction'])
# plt.savefig(f"{path}\\{interval_start}_{interval_end}.png")
plt.show()
df.describe()
###Output
_____no_output_____
###Markdown
Stock Price Prediction using Linear Regression. The dataset can be downloaded from https://www.kaggle.com/borismarjanovic/price-volume-data-for-all-us-stocks-etfs. I am going to analyse the effect on prediction quality of various feature combinations for different labels.
###Code
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
from matplotlib import dates
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
#To supress the FutureWarning
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
import h5py
warnings.resetwarnings()
###Output
_____no_output_____
###Markdown
The stock I am picking for this experiment is Apple Inc. (NASDAQ: AAPL). Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services.
###Code
stock = pd.read_csv('Stocks/aapl.us.txt', sep=",")
stock
###Output
_____no_output_____
###Markdown
The dataset contains 8364 rows and 7 columns
###Code
# Stock Price Graph
def stocks_data(symbols, dates):
df = pd.DataFrame(index=dates)
for symbol in symbols:
df_temp = pd.read_csv("Stocks/{}.us.txt".format(symbol), index_col='Date',
parse_dates=True, usecols=['Date', 'Close'], na_values=['nan'])
df_temp = df_temp.rename(columns={'Close': symbol})
df = df.join(df_temp)
return df
dates = pd.date_range('2016-01-02','2016-12-31',freq='B')
symbols = ['aapl']
df = stocks_data(symbols, dates)
df.fillna(method='pad')
df.interpolate().plot()
plt.show()
###Output
_____no_output_____
###Markdown
Experiment 1. Feature: Open. Label: High
###Code
stock = stock.reindex(np.random.permutation(stock.index))
stock
stock.describe()
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(learning_rate, steps, batch_size, input_feature="Open"):
"""Trains a linear regression model of one feature.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from the `stock` dataframe
to use as input feature.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = stock[[my_feature]]
my_label = "High"
targets = stock[my_label]
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create input functions.
training_input_fn = lambda:my_input_fn(my_feature_data, targets, batch_size=batch_size)
prediction_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = stock.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=prediction_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, root_mean_squared_error))
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Output a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
return calibration_data
calibration_data = train_model(
learning_rate=0.01,
steps=100,
batch_size=5
)
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
plt.subplot(1, 2, 2)
_ = stock["Open"].hist()
###Output
_____no_output_____
###Markdown
Experiment 2. Label: High. Feature: Volume
###Code
calibration_data = train_model(
learning_rate=0.01,
steps=100,
batch_size=5,
input_feature="Volume"
)
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
plt.subplot(1, 2, 2)
_ = stock["Volume"].hist()
###Output
_____no_output_____ |
week4/week4-seq2seq.ipynb | ###Markdown
Learn to calculate with a seq2seq model. In this assignment, you will learn how to use neural networks to solve sequence-to-sequence prediction tasks. Seq2Seq models are very popular these days because they achieve great results in Machine Translation, Text Summarization, Conversational Modeling and more. Using sequence-to-sequence modeling you are going to build a calculator for evaluating arithmetic expressions, by taking an equation as an input to the neural network and producing an answer as its output. The resulting solution for this problem will be based on state-of-the-art approaches for sequence-to-sequence learning and you should be able to easily adapt it to solve other tasks. However, if you want to train your own machine translation system or an intellectual chat bot, it would be useful to have access to compute resources like a GPU, and be patient, because training of such systems is usually time consuming. Libraries. For this task you will need the following libraries: - [TensorFlow](https://www.tensorflow.org) — an open-source software library for Machine Intelligence. In this assignment, we use Tensorflow 1.15.0. You can install it with pip: !pip install tensorflow==1.15.0 - [scikit-learn](http://scikit-learn.org/stable/index.html) — a tool for data mining and data analysis. If you have never worked with TensorFlow, you will probably want to read some tutorials during your work on this assignment, e.g. the [Neural Machine Translation](https://www.tensorflow.org/tutorials/seq2seq) tutorial deals with a very similar task and can explain some concepts to you.
###Code
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
! wget https://raw.githubusercontent.com/hse-aml/natural-language-processing/master/setup_google_colab.py -O setup_google_colab.py
import setup_google_colab
setup_google_colab.setup_week4()
###Output
_____no_output_____
###Markdown
DataOne benefit of this task is that you don't need to download any data — you will generate it on your own! We will use two operators (addition and subtraction) and work with positive integer numbers in some range. Here are examples of correct inputs and outputs: Input: '1+2' Output: '3' Input: '0-99' Output: '-99'*Note, that there are no spaces between operators and operands.*Now you need to implement the function *generate_equations*, which will be used to generate the data.
###Code
import random
def generate_equations(allowed_operators, dataset_size, min_value, max_value):
"""Generates pairs of equations and solutions to them.
Each equation has a form of two integers with an operator in between.
Each solution is an integer with the result of the operation.
allowed_operators: list of strings, allowed operators.
dataset_size: an integer, number of equations to be generated.
min_value: an integer, min value of each operand.
max_value: an integer, max value of each operand.
result: a list of tuples of strings (equation, solution).
"""
sample = []
for _ in range(dataset_size):
######################################
######### YOUR CODE HERE #############
######################################
return sample
###Output
_____no_output_____
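###Markdown
 For reference, the sketch below shows one possible way to generate the pairs. It is kept under a different name so it does not overwrite the skeleton above; the function you submit should still be your own implementation.
###Code
#One possible implementation sketch: pick two random operands and a random operator,
#then evaluate the expression to obtain the solution string.
def generate_equations_example(allowed_operators, dataset_size, min_value, max_value):
    sample = []
    for _ in range(dataset_size):
        left = random.randint(min_value, max_value)
        right = random.randint(min_value, max_value)
        operator = random.choice(allowed_operators)
        equation = '{}{}{}'.format(left, operator, right)
        sample.append((equation, str(eval(equation))))
    return sample
print(generate_equations_example(['+', '-'], 3, 0, 100))
###Output
_____no_output_____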
###Markdown
To check the correctness of your implementation, use *test_generate_equations* function:
###Code
def test_generate_equations():
allowed_operators = ['+', '-']
dataset_size = 10
for (input_, output_) in generate_equations(allowed_operators, dataset_size, 0, 100):
if not (type(input_) is str and type(output_) is str):
return "Both parts should be strings."
if eval(input_) != int(output_):
return "The (equation: {!r}, solution: {!r}) pair is incorrect.".format(input_, output_)
return "Tests passed."
print(test_generate_equations())
###Output
_____no_output_____
###Markdown
Finally, we are ready to generate the train and test data for the neural network:
###Code
from sklearn.model_selection import train_test_split
allowed_operators = ['+', '-']
dataset_size = 100000
data = generate_equations(allowed_operators, dataset_size, min_value=0, max_value=9999)
train_set, test_set = train_test_split(data, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
Prepare data for the neural network. The next stage of data preparation is creating mappings of the characters to their indices in some vocabulary. Since in our task we already know which symbols will appear in the inputs and outputs, generating the vocabulary is a simple step. How to create dictionaries for other tasks: first of all, you need to understand what the basic unit of the sequence in your task is. In our case, we operate on symbols and the basic unit is a symbol. The number of symbols is small, so we don't need to think about filtering/normalization steps. However, in other tasks, the basic unit is often a word, and in this case the mapping would be *word $\to$ integer*. The number of words might be huge, so it would be reasonable to filter them, for example, by frequency and leave only the frequent ones. Other strategies that you should consider are: data normalization (lowercasing, tokenization, how to consider punctuation marks), a separate vocabulary for input and for output (e.g. for machine translation), and other specifics of the task.
###Code
word2id = {symbol:i for i, symbol in enumerate('#^$+-1234567890')}
id2word = {i:symbol for symbol, i in word2id.items()}
###Output
_____no_output_____
###Markdown
Special symbols
###Code
start_symbol = '^'
end_symbol = '$'
padding_symbol = '#'
###Output
_____no_output_____
###Markdown
You could notice that we have added 3 special symbols: '^', '\$' and '#':- '^' symbol will be passed to the network to indicate the beginning of the decoding procedure. We will discuss this one later in more detail.- '\$' symbol will be used to indicate the *end of a string*, both for input and output sequences. - '#' symbol will be used as a *padding* character to make lengths of all strings equal within one training batch.People have slightly different habits when it comes to special symbols in encoder-decoder networks, so don't get too confused if you come across other variants in tutorials you read. Padding When vocabularies are ready, we need to be able to convert a sentence to a list of vocabulary word indices and back. At the same time, let's take care of padding. We are going to preprocess each sequence from the input (and output ground truth) in such a way that:- it has a predefined length *padded_len*- it is cut off or padded with the *padding symbol* '#' as needed- it *always* ends with the *end symbol* '$'We will treat the original characters of the sequence **and the end symbol** as the valid part of the input. We will store *the actual length* of the sequence, which includes the end symbol, but does not include the padding symbols. Now you need to implement the function *sentence_to_ids* that does the described job.
###Code
def sentence_to_ids(sentence, word2id, padded_len):
""" Converts a sequence of symbols to a padded sequence of their ids.
sentence: a string, input/output sequence of symbols.
word2id: a dict, a mapping from original symbols to ids.
padded_len: an integer, a desirable length of the sequence.
result: a tuple of (a list of ids, an actual length of sentence).
"""
sent_ids = ######### YOUR CODE HERE #############
sent_len = ######### YOUR CODE HERE #############
return sent_ids, sent_len
###Output
_____no_output_____
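###Markdown
 For reference, here is one way the conversion could look (again under a separate name, so the skeleton above remains yours to fill in): the sentence is truncated to leave room for the end symbol, the end symbol is appended, and the remainder is padded.
###Code
#Sketch of the conversion: truncate, append the end symbol, then pad with the padding symbol
def sentence_to_ids_example(sentence, word2id, padded_len):
    sent_ids = [word2id[symbol] for symbol in sentence[:padded_len - 1]] + [word2id[end_symbol]]
    sent_len = len(sent_ids)
    sent_ids = sent_ids + [word2id[padding_symbol]] * (padded_len - sent_len)
    return sent_ids, sent_len
print(sentence_to_ids_example('123+123', word2id, 10))
###Output
_____no_output_____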
###Markdown
Check that your implementation is correct:
###Code
def test_sentence_to_ids():
sentences = [("123+123", 7), ("123+123", 8), ("123+123", 10)]
expected_output = [([5, 6, 7, 3, 5, 6, 2], 7),
([5, 6, 7, 3, 5, 6, 7, 2], 8),
([5, 6, 7, 3, 5, 6, 7, 2, 0, 0], 8)]
for (sentence, padded_len), (sentence_ids, expected_length) in zip(sentences, expected_output):
output, length = sentence_to_ids(sentence, word2id, padded_len)
if output != sentence_ids:
return("Convertion of '{}' for padded_len={} to {} is incorrect.".format(
sentence, padded_len, output))
if length != expected_length:
return("Convertion of '{}' for padded_len={} has incorrect actual length {}.".format(
sentence, padded_len, length))
return("Tests passed.")
print(test_sentence_to_ids())
###Output
_____no_output_____
###Markdown
We also need to be able to get back from indices to symbols:
###Code
def ids_to_sentence(ids, id2word):
""" Converts a sequence of ids to a sequence of symbols.
ids: a list, indices for the padded sequence.
id2word: a dict, a mapping from ids to original symbols.
result: a list of symbols.
"""
return [id2word[i] for i in ids]
###Output
_____no_output_____
###Markdown
Generating batches The final step of data preparation is a function that transforms a batch of sentences to a list of lists of indices.
###Code
def batch_to_ids(sentences, word2id, max_len):
"""Prepares batches of indices.
Sequences are padded to match the longest sequence in the batch,
if it's longer than max_len, then max_len is used instead.
sentences: a list of strings, original sequences.
word2id: a dict, a mapping from original symbols to ids.
max_len: an integer, max len of sequences allowed.
result: a list of lists of ids, a list of actual lengths.
"""
max_len_in_batch = min(max(len(s) for s in sentences) + 1, max_len)
batch_ids, batch_ids_len = [], []
for sentence in sentences:
ids, ids_len = sentence_to_ids(sentence, word2id, max_len_in_batch)
batch_ids.append(ids)
batch_ids_len.append(ids_len)
return batch_ids, batch_ids_len
###Output
_____no_output_____
###Markdown
The function *generate_batches* will help to generate batches with defined size from given samples.
###Code
def generate_batches(samples, batch_size=64):
X, Y = [], []
for i, (x, y) in enumerate(samples, 1):
X.append(x)
Y.append(y)
if i % batch_size == 0:
yield X, Y
X, Y = [], []
if X and Y:
yield X, Y
###Output
_____no_output_____
###Markdown
To illustrate the result of the implemented functions, run the following cell:
###Code
sentences = train_set[0]
ids, sent_lens = batch_to_ids(sentences, word2id, max_len=10)
print('Input:', sentences)
print('Ids: {}\nSentences lengths: {}'.format(ids, sent_lens))
###Output
_____no_output_____
###Markdown
Encoder-Decoder architectureEncoder-Decoder is a successful architecture for Seq2Seq tasks with different lengths of input and output sequences. The main idea is to use two recurrent neural networks, where the first neural network *encodes* the input sequence into a real-valued vector and then the second neural network *decodes* this vector into the output sequence. While building the neural network, we will specify some particular characteristics of this architecture.
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Let us use TensorFlow building blocks to specify the network architecture.
###Code
class Seq2SeqModel(object):
pass
###Output
_____no_output_____
###Markdown
First, we need to create [placeholders](https://www.tensorflow.org/api_guides/python/io_opsPlaceholders) to specify what data we are going to feed into the network during the execution time. For this task we will need: - *input_batch* — sequences of sentences (the shape equals [batch_size, max_sequence_len_in_batch]); - *input_batch_lengths* — lengths of the non-padded sequences (the shape equals [batch_size]); - *ground_truth* — sequences of ground truth (the shape equals [batch_size, max_sequence_len_in_batch]); - *ground_truth_lengths* — lengths of the non-padded ground truth sequences (the shape equals [batch_size]); - *dropout_ph* — dropout keep probability; this placeholder has a predefined value 1; - *learning_rate_ph* — learning rate.
###Code
def declare_placeholders(self):
"""Specifies placeholders for the model."""
# Placeholders for input and its actual lengths.
self.input_batch = tf.placeholder(shape=(None, None), dtype=tf.int32, name='input_batch')
self.input_batch_lengths = tf.placeholder(shape=(None, ), dtype=tf.int32, name='input_batch_lengths')
# Placeholders for groundtruth and its actual lengths.
self.ground_truth = ######### YOUR CODE HERE #############
self.ground_truth_lengths = ######### YOUR CODE HERE #############
self.dropout_ph = tf.placeholder_with_default(tf.cast(1.0, tf.float32), shape=[])
self.learning_rate_ph = ######### YOUR CODE HERE #############
Seq2SeqModel.__declare_placeholders = classmethod(declare_placeholders)
###Output
_____no_output_____
###Markdown
Now, let us specify the layers of the neural network. First, we need to prepare an embedding matrix. Since we use the same vocabulary for input and output, we need only one such matrix. For tasks with different vocabularies there would be multiple embedding layers.- Create embeddings matrix with [tf.Variable](https://www.tensorflow.org/api_docs/python/tf/Variable). Specify its name, type (tf.float32), and initialize with random values.- Perform [embeddings lookup](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) for a given input batch.
###Code
def create_embeddings(self, vocab_size, embeddings_size):
"""Specifies embeddings layer and embeds an input batch."""
random_initializer = tf.random_uniform((vocab_size, embeddings_size), -1.0, 1.0)
self.embeddings = ######### YOUR CODE HERE #############
# Perform embeddings lookup for self.input_batch.
self.input_batch_embedded = ######### YOUR CODE HERE #############
Seq2SeqModel.__create_embeddings = classmethod(create_embeddings)
###Output
_____no_output_____
###Markdown
EncoderThe first RNN of the current architecture is called an *encoder* and serves for encoding an input sequence to a real-valued vector. Input of this RNN is an embedded input batch. Since sentences in the same batch could have different actual lengths, we also provide input lengths to avoid unnecessary computations. The final encoder state will be passed to the second RNN (decoder), which we will create soon. - TensorFlow provides a number of [RNN cells](https://www.tensorflow.org/api_guides/python/contrib.rnnCore_RNN_Cells_for_use_with_TensorFlow_s_core_RNN_methods) ready for use. We suggest that you use [GRU cell](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/GRUCell), but you can also experiment with other types. - Wrap your cells with [DropoutWrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper). Dropout is an important regularization technique for neural networks. Specify input keep probability using the dropout placeholder that we created before.- Combine the defined encoder cells with [Dynamic RNN](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn). Use the embedded input batches and their lengths here.- Use *dtype=tf.float32* everywhere.
###Code
def build_encoder(self, hidden_size):
"""Specifies encoder architecture and computes its output."""
# Create GRUCell with dropout.
encoder_cell = ######### YOUR CODE HERE #############
# Create RNN with the predefined cell.
_, self.final_encoder_state = ######### YOUR CODE HERE #############
Seq2SeqModel.__build_encoder = classmethod(build_encoder)
###Output
_____no_output_____
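###Markdown
 One way the encoder could look is sketched below, under the assumptions made in the text above (GRU cell, input dropout, dynamic RNN over the embedded batch). Treat it as a reference rather than the answer you submit.
###Code
#Encoder sketch: GRU cell wrapped with input dropout, unrolled with dynamic_rnn
def build_encoder_example(self, hidden_size):
    encoder_cell = tf.contrib.rnn.DropoutWrapper(
        tf.contrib.rnn.GRUCell(hidden_size),
        input_keep_prob=self.dropout_ph)
    #Only the final state is kept; it will initialize the decoder
    _, self.final_encoder_state = tf.nn.dynamic_rnn(
        encoder_cell,
        self.input_batch_embedded,
        sequence_length=self.input_batch_lengths,
        dtype=tf.float32)
###Output
_____no_output_____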
###Markdown
Decoder. The second RNN is called a *decoder* and serves for generating the output sequence. In the simple seq2seq architecture, the input sequence is provided to the decoder only as the final state of the encoder. Obviously, it is a bottleneck and [Attention techniques](https://www.tensorflow.org/tutorials/seq2seqbackground_on_the_attention_mechanism) can help to overcome it. So far, we do not need them to make our calculator work, but this would be a necessary ingredient for more advanced tasks. During training, the decoder also uses information about the true output. It is fed in as input symbol by symbol. However, during the prediction stage (which is called *inference* in this architecture), the decoder can only use its own generated output from the previous step to feed it in at the next step. Because of this difference (*training* vs *inference*), we will create two distinct instances, which will serve for the described scenarios. The picture below illustrates the point. It also shows our work with the special characters, e.g. look how the start symbol `^` is used. The transparent parts are ignored. In the decoder, they are masked out in the loss computation. In the encoder, the green state is considered as final and passed to the decoder. Now, it's time to implement the decoder: - First, we should create two [helpers](https://www.tensorflow.org/api_guides/python/contrib.seq2seqDynamic_Decoding). These classes help to determine the behaviour of the decoder. During the training time, we will use [TrainingHelper](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper). For inference, we recommend using [GreedyEmbeddingHelper](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper). - To share all parameters during training and inference, we use one scope and set the flag 'reuse' to True at inference time. You might be interested to know more about how [variable scopes](https://www.tensorflow.org/programmers_guide/variables) work in TF. - To create the decoder itself, we will use the [BasicDecoder](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder) class. As previously, you should choose some RNN cell, e.g. a GRU cell. To turn hidden states into logits, we will need a projection layer. One of the simple solutions is using [OutputProjectionWrapper](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/OutputProjectionWrapper). - For getting the predictions, it will be convenient to use [dynamic_decode](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode). This function uses the provided decoder to perform decoding.
###Code
def build_decoder(self, hidden_size, vocab_size, max_iter, start_symbol_id, end_symbol_id):
"""Specifies decoder architecture and computes the output.
Uses different helpers:
- for train: feeding ground truth
- for inference: feeding generated output
As a result, self.train_outputs and self.infer_outputs are created.
Each of them contains two fields:
rnn_output (predicted logits)
sample_id (predictions).
"""
# Use start symbols as the decoder inputs at the first time step.
batch_size = tf.shape(self.input_batch)[0]
start_tokens = tf.fill([batch_size], start_symbol_id)
ground_truth_as_input = tf.concat([tf.expand_dims(start_tokens, 1), self.ground_truth], 1)
# Use the embedding layer defined before to lookup embedings for ground_truth_as_input.
self.ground_truth_embedded = ######### YOUR CODE HERE #############
# Create TrainingHelper for the train stage.
train_helper = tf.contrib.seq2seq.TrainingHelper(self.ground_truth_embedded,
self.ground_truth_lengths)
# Create GreedyEmbeddingHelper for the inference stage.
# You should provide the embedding layer, start_tokens and index of the end symbol.
infer_helper = ######### YOUR CODE HERE #############
def decode(helper, scope, reuse=None):
"""Creates decoder and return the results of the decoding with a given helper."""
with tf.variable_scope(scope, reuse=reuse):
# Create GRUCell with dropout. Do not forget to set the reuse flag properly.
decoder_cell = ######### YOUR CODE HERE #############
# Create a projection wrapper.
decoder_cell = tf.contrib.rnn.OutputProjectionWrapper(decoder_cell, vocab_size, reuse=reuse)
# Create BasicDecoder, pass the defined cell, a helper, and initial state.
# The initial state should be equal to the final state of the encoder!
decoder = ######### YOUR CODE HERE #############
# The first returning argument of dynamic_decode contains two fields:
# rnn_output (predicted logits)
# sample_id (predictions)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, maximum_iterations=max_iter,
output_time_major=False, impute_finished=True)
return outputs
self.train_outputs = decode(train_helper, 'decode')
self.infer_outputs = decode(infer_helper, 'decode', reuse=True)
Seq2SeqModel.__build_decoder = classmethod(build_decoder)
###Output
_____no_output_____
###Markdown
In this task we will use [sequence_loss](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/sequence_loss), which is a weighted cross-entropy loss for a sequence of logits. Take a moment to understand what your train logits and targets are. Also note that we do not want to take into account loss terms coming from padding symbols, so we will mask them out using weights.
###Code
def compute_loss(self):
"""Computes sequence loss (masked cross-entopy loss with logits)."""
weights = tf.cast(tf.sequence_mask(self.ground_truth_lengths), dtype=tf.float32)
self.loss = ######### YOUR CODE HERE #############
Seq2SeqModel.__compute_loss = classmethod(compute_loss)
###Output
_____no_output_____
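###Markdown
 A sketch of how the masked loss could be wired up, assuming the training logits live in `self.train_outputs.rnn_output`:
###Code
#Loss sketch: weighted cross-entropy over the decoder logits, with padding masked out
def compute_loss_example(self):
    weights = tf.cast(tf.sequence_mask(self.ground_truth_lengths), dtype=tf.float32)
    self.loss = tf.contrib.seq2seq.sequence_loss(
        logits=self.train_outputs.rnn_output,
        targets=self.ground_truth,
        weights=weights)
###Output
_____no_output_____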
###Markdown
The last thing to specify is the optimization of the defined loss. We suggest that you use [optimize_loss](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/optimize_loss) with Adam optimizer and a learning rate from the corresponding placeholder. You might also need to pass global step (e.g. as tf.train.get_global_step()) and clip gradients by 1.0.
###Code
def perform_optimization(self):
"""Specifies train_op that optimizes self.loss."""
self.train_op = ######### YOUR CODE HERE #############
Seq2SeqModel.__perform_optimization = classmethod(perform_optimization)
###Output
_____no_output_____
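###Markdown
 One possible optimization setup is sketched below, following the suggestions above (Adam optimizer, global step, gradients clipped by 1.0).
###Code
#Optimization sketch: Adam with gradient clipping via tf.contrib.layers.optimize_loss
def perform_optimization_example(self):
    self.train_op = tf.contrib.layers.optimize_loss(
        self.loss,
        global_step=tf.train.get_global_step(),
        learning_rate=self.learning_rate_ph,
        optimizer='Adam',
        clip_gradients=1.0)
###Output
_____no_output_____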
###Markdown
Congratulations! You have specified all the parts of your network. You may have noticed that we didn't deal with any real data yet, so what you have written is just a recipe for how the network should function. Now we will put these pieces into the constructor of our Seq2SeqModel class to use it in the next section.
###Code
def init_model(self, vocab_size, embeddings_size, hidden_size,
max_iter, start_symbol_id, end_symbol_id, padding_symbol_id):
self.__declare_placeholders()
self.__create_embeddings(vocab_size, embeddings_size)
self.__build_encoder(hidden_size)
self.__build_decoder(hidden_size, vocab_size, max_iter, start_symbol_id, end_symbol_id)
# Compute loss and back-propagate.
self.__compute_loss()
self.__perform_optimization()
# Get predictions for evaluation.
self.train_predictions = self.train_outputs.sample_id
self.infer_predictions = self.infer_outputs.sample_id
Seq2SeqModel.__init__ = classmethod(init_model)
###Output
_____no_output_____
###Markdown
Train the network and predict output[Session.run](https://www.tensorflow.org/api_docs/python/tf/Sessionrun) is a point which initiates computations in the graph that we have defined. To train the network, we need to compute *self.train_op*. To predict output, we just need to compute *self.infer_predictions*. In any case, we need to feed actual data through the placeholders that we defined above.
###Code
def train_on_batch(self, session, X, X_seq_len, Y, Y_seq_len, learning_rate, dropout_keep_probability):
feed_dict = {
self.input_batch: X,
self.input_batch_lengths: X_seq_len,
self.ground_truth: Y,
self.ground_truth_lengths: Y_seq_len,
self.learning_rate_ph: learning_rate,
self.dropout_ph: dropout_keep_probability
}
pred, loss, _ = session.run([
self.train_predictions,
self.loss,
self.train_op], feed_dict=feed_dict)
return pred, loss
Seq2SeqModel.train_on_batch = classmethod(train_on_batch)
###Output
_____no_output_____
###Markdown
We implemented two prediction functions: *predict_for_batch* and *predict_for_batch_with_loss*. The first one only predicts the output for some input sequence, while the second one can also compute the loss because we provide the ground truth values as well. Both functions are useful: the first one for pure prediction, and the second one for validating results on non-training data during training.
###Code
def predict_for_batch(self, session, X, X_seq_len):
feed_dict = ######### YOUR CODE HERE #############
pred = session.run([
self.infer_predictions
], feed_dict=feed_dict)[0]
return pred
def predict_for_batch_with_loss(self, session, X, X_seq_len, Y, Y_seq_len):
feed_dict = ######### YOUR CODE HERE #############
pred, loss = session.run([
self.infer_predictions,
self.loss,
], feed_dict=feed_dict)
return pred, loss
Seq2SeqModel.predict_for_batch = classmethod(predict_for_batch)
Seq2SeqModel.predict_for_batch_with_loss = classmethod(predict_for_batch_with_loss)
###Output
_____no_output_____
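###Markdown
 For reference, the inference-time feed dict only needs the inputs and their lengths, while the loss-computing variant also needs the ground truth. A sketch of both (kept under different names so the skeleton above stays yours):
###Code
#Sketch of the two feed dicts used at prediction time
def predict_for_batch_example(self, session, X, X_seq_len):
    feed_dict = {self.input_batch: X,
                 self.input_batch_lengths: X_seq_len}
    return session.run([self.infer_predictions], feed_dict=feed_dict)[0]
def predict_for_batch_with_loss_example(self, session, X, X_seq_len, Y, Y_seq_len):
    feed_dict = {self.input_batch: X,
                 self.input_batch_lengths: X_seq_len,
                 self.ground_truth: Y,
                 self.ground_truth_lengths: Y_seq_len}
    return session.run([self.infer_predictions, self.loss], feed_dict=feed_dict)
###Output
_____no_output_____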
###Markdown
Run your experiment. Create a *Seq2SeqModel* model with the following parameters: - *vocab_size* — number of tokens; - *embeddings_size* — dimension of embeddings, recommended value: 20; - *max_iter* — maximum number of steps in decoder, recommended value: 7; - *hidden_size* — size of hidden layers for RNN, recommended value: 512; - *start_symbol_id* — an index of the start token (`^`). - *end_symbol_id* — an index of the end token (`$`). - *padding_symbol_id* — an index of the padding token (`#`). Set hyperparameters. You might want to start with the following values and see how it works:- *batch_size*: 128;- at least 10 epochs;- value of *learning_rate*: 0.001- *dropout_keep_probability* equals 0.5 for training (typical values for the keep probability range from 0.1 to 1.0; larger values correspond to a smaller number of dropped units);- *max_len*: 20.
###Code
tf.reset_default_graph()
model = ######### YOUR CODE HERE #############
batch_size = ######### YOUR CODE HERE #############
n_epochs = ######### YOUR CODE HERE #############
learning_rate = ######### YOUR CODE HERE #############
dropout_keep_probability = ######### YOUR CODE HERE #############
max_len = ######### YOUR CODE HERE #############
n_step = int(len(train_set) / batch_size)
###Output
_____no_output_____
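###Markdown
 One configuration that follows the recommendations above, gathered into a dictionary for reference (a sketch; it does not overwrite the variables you are asked to set yourself):
###Code
#Suggested starting values from the text above, collected for reference only
example_hyperparameters = dict(
    vocab_size=len(word2id), embeddings_size=20, hidden_size=512, max_iter=7,
    start_symbol_id=word2id[start_symbol], end_symbol_id=word2id[end_symbol],
    padding_symbol_id=word2id[padding_symbol],
    batch_size=128, n_epochs=10, learning_rate=0.001,
    dropout_keep_probability=0.5, max_len=20)
print(example_hyperparameters)
###Output
_____no_output_____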
###Markdown
Finally, we are ready to run the training! A good indicator that everything works fine is a decreasing loss during the training. You should expect a loss value of approximately 2.7 at the beginning of the training and near 1 after the 10th epoch.
###Code
session = tf.Session()
session.run(tf.global_variables_initializer())
invalid_number_prediction_counts = []
all_model_predictions = []
all_ground_truth = []
print('Start training... \n')
for epoch in range(n_epochs):
random.shuffle(train_set)
random.shuffle(test_set)
print('Train: epoch', epoch + 1)
for n_iter, (X_batch, Y_batch) in enumerate(generate_batches(train_set, batch_size=batch_size)):
######################################
######### YOUR CODE HERE #############
######################################
# prepare the data (X_batch and Y_batch) for training
# using function batch_to_ids
predictions, loss = ######### YOUR CODE HERE #############
if n_iter % 200 == 0:
print("Epoch: [%d/%d], step: [%d/%d], loss: %f" % (epoch + 1, n_epochs, n_iter + 1, n_step, loss))
X_sent, Y_sent = next(generate_batches(test_set, batch_size=batch_size))
######################################
######### YOUR CODE HERE #############
######################################
# prepare test data (X_sent and Y_sent) for predicting
# quality and computing value of the loss function
# using function batch_to_ids
predictions, loss = ######### YOUR CODE HERE #############
print('Test: epoch', epoch + 1, 'loss:', loss,)
for x, y, p in list(zip(X, Y, predictions))[:3]:
print('X:',''.join(ids_to_sentence(x, id2word)))
print('Y:',''.join(ids_to_sentence(y, id2word)))
print('O:',''.join(ids_to_sentence(p, id2word)))
print('')
model_predictions = []
ground_truth = []
invalid_number_prediction_count = 0
# For the whole test set calculate ground-truth values (as integer numbers)
# and prediction values (also as integers) to calculate metrics.
# If generated by model number is not correct (e.g. '1-1'),
# increase invalid_number_prediction_count and don't append this and corresponding
# ground-truth value to the arrays.
for X_batch, Y_batch in generate_batches(test_set, batch_size=batch_size):
######################################
######### YOUR CODE HERE #############
######################################
all_model_predictions.append(model_predictions)
all_ground_truth.append(ground_truth)
invalid_number_prediction_counts.append(invalid_number_prediction_count)
print('\n...training finished.')
###Output
_____no_output_____
###Markdown
Evaluate results. Because our task is simple and the output is straightforward, we will use the [MAE](https://en.wikipedia.org/wiki/Mean_absolute_error) metric to evaluate the trained model across the epochs. Compute the value of the metric for the output from each epoch.
###Code
from sklearn.metrics import mean_absolute_error
for i, (gts, predictions, invalid_number_prediction_count) in enumerate(zip(all_ground_truth,
all_model_predictions,
invalid_number_prediction_counts), 1):
mae = ######### YOUR CODE HERE #############
print("Epoch: %i, MAE: %f, Invalid numbers: %i" % (i, mae, invalid_number_prediction_count))
###Output
_____no_output_____ |
prescribing_exercises.ipynb | ###Markdown
Data Carpentry Inspired Workshop. This workshop is inspired by the Data Carpentry python lesson for ecology: https://datacarpentry.org/python-ecology-lesson/. You can use this lesson as a reference and come back to it after the workshop (it is open source and freely available). The main difference is that we are using UK antibiotics prescribing data for the exercises in this workshop. Motivation: screening two short videos from the "New Amsterdam" TV show. Video 1, 06:50: this is when Dr. Max Goodwin fires the cardiologists. "How can we help?" Video 2, 12:55: this is when Dr. Floyd Reynolds is hired. "Because there are other ways of helping people other than cutting them open." Plot: show the plot that we will be looking at at the end of the day. Data: Sample: https://www.dropbox.com/s/u75uezh2pbuk70d/antibiotics-sample.csv?dl=0; Full: https://www.dropbox.com/s/r9ain5cmuh6ztk2/antibiotics.csv?dl=0. Aims: we would like to answer the following questions at the end of the day: 1. What is the most prescribed drug during November 2018? 2. What does the distribution of the number of prescriptions vs the number of practices in November 2018 look like? 3. How has the number of antibiotics prescriptions changed between August and November 2018? 4. Which practice has been treating patients for tuberculosis? What is Python? Python is a general-purpose programming language that supports rapid development of data analytics applications. The word "Python" is used to refer to both the programming language and the tool that executes the scripts written in the Python language. Jupyter Notebook: The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain cells with live code, equations, visualizations and narrative text. You can type Python code into a code cell and then execute the code by pressing `Shift`+`Return`. Output will be printed directly under the input cell. You can recognise a code cell by the `In[ ]:` at the beginning of the cell and output by `Out[ ]:`. Pressing the `+` button in the menu bar will add a new cell. All your commands as well as any output will be saved with the notebook. You can also easily share a notebook with your colleagues, along with the data that the notebook code is processing. Introduction to Python. Arithmetic operations. **Exercise**: Do arithmetic operations using Python.
###Code
2 + 2
###Output
_____no_output_____
###Markdown
Variables **Exercise**Create a variable that stores an integer.
###Code
number_of_chromosomes = 23
###Output
_____no_output_____
###Markdown
**Exercise**Create a variable that stores some text.
###Code
university_name = "University of Manchester"
###Output
_____no_output_____
###Markdown
Functions You can use the function `print` to show the value of variables.
###Code
print(number_of_chromosomes)
print(university_name)
###Output
University of Manchester
###Markdown
Getting HelpYou can use `help` to access the documentation of the functions. Try `help(print)`. **Exercise**How many characters are there in "University of Manchester"?
###Code
len(university_name)
###Output
_____no_output_____
###Markdown
Creating Your Functions. You can create your own functions in Python, for example: ```def fahr_to_celsius(temp): return ((temp - 32) * (5/9))```
###Code
def ounces_to_grams(ounces):
return ounces * 28.350
def pounds_to_ounces(pounds):
return 16 * ounces
def pounds_to_grams(pounds):
return ounces_to_grams(pounds_to_ounces(pounds))
###Output
_____no_output_____
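###Markdown
 A quick check of the conversion chain: one pound is 16 ounces, which at 28.350 grams per ounce comes to 453.6 grams.
###Code
#Sanity check: 1 pound = 16 ounces = 16 * 28.350 grams
print(pounds_to_grams(1))
###Output
_____no_output_____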
###Markdown
Python built-in data types**Exercise**Create a list containing the individual words in the string "University of Manchester".
###Code
university_name.split()
###Output
_____no_output_____
###Markdown
Lists are a common data structure to hold an ordered sequence of elements. Each element can be accessed by an index. Note that Python indexes start with 0 instead of 1.
###Code
university_name_parts = university_name.split()
university_name_parts[0]
###Output
_____no_output_____
###Markdown
LibrariesOne of the best options for working with tabular data in Python is to use the Python Data Analysis Library (a.k.a. Pandas). The Pandas library provides data structures, produces high quality plots with matplotlib and integrates nicely with other libraries that use NumPy (which is another Python library) arrays.Python doesn’t load all of the libraries available to it by default. We have to add an import statement to our code in order to use library functions. To import a library, we use the syntax `import libraryName`. If we want to give the library a nickname to shorten the command, we can add `as nickName`. An example of importing the pandas library using the common nickname `pd` is below.
###Code
import pandas as pd
pd.read_csv("data/antibiotics-sample.csv")
###Output
_____no_output_____
###Markdown
Navigating files and directories
###Code
import os
os.getcwd()
os.listdir()
os.chdir("data")
pd.read_csv("data/antibiotics-sample.csv")
os.getcwd()
pd.read_csv("antibiotics-sample.csv")
###Output
_____no_output_____
###Markdown
Libraries can have pre-defined variables. These variables are different from functions within libraries because we invoke them without parentheses "()" at the end.
###Code
os.curdir # This is '.' for Windows and POSIX.
os.pardir # This is '..' for Windows and POSIX.
os.sep # This is '/' for POSIX and '\\' for Windows.
###Output
_____no_output_____
###Markdown
Usually, libraries' variables are immutable (i.e. they act as constants that cannot be changed).
###Code
os.chdir("..") # This is a function call, not a variable!
sample = pd.read_csv("data/antibiotics-sample.csv")
###Output
_____no_output_____
###Markdown
Pandas' DataFrame data type
###Code
sample
sample.shape
data = pd.read_csv("data/antibiotics.csv")
data.shape
data.dtypes
###Output
_____no_output_____
###Markdown
All the values in a column have the same type. For example, months have type int64, which is a kind of integer. Cells in the month column cannot have fractional values, but the weight and hindfoot_length columns can, because they have type float64. The object type doesn’t have a very helpful name, but in this case it represents strings (such as ‘M’ and ‘F’ in the case of sex).
###Code
data.columns
###Output
_____no_output_____
###Markdown
Accessing Columns Accessing one column`frame[colname]` will return the Series corresponding to the column called `colname`.It is also possible to access the column of a DataFrame called `colname` using `frame.colname`. Accessing two or more columnsYou can pass a list of columns to [] to select columns in that order. For example, `frame[[colname1, colname2]]`. **Exercise**What values do we have in the column `PERIOD`?
###Code
sample["PERIOD"]
sample["PRACTICE"]
sample["PERIOD"].unique()
pd.crosstab(sample["BNF NAME"], sample["PERIOD"])
crosstab_prescription = pd.crosstab(sample["BNF NAME"], sample["PERIOD"])
crosstab_prescription
crosstab_prescription.plot()
%matplotlib inline
crosstab_prescription.plot()
crosstab_prescription.plot(kind="bar")
###Output
_____no_output_____
###Markdown
**Exercise**Have a look at the documentation of `pd.crosstab`. What are the other arguments that it accepts? Use extra arguments to answer (1) which antibiotic is responsible for most of the budget and (2) which antibiotic is most dispensed.
###Code
pd.crosstab(
sample["BNF NAME"],
sample["PERIOD"],
values=sample["ACT COST"],
aggfunc=sum
).plot(kind="bar")
pd.crosstab(
sample["BNF NAME"],
sample["PERIOD"],
values=sample["ITEMS"],
aggfunc=sum
).plot(kind="bar")
pd.crosstab(
data["BNF NAME"],
data["PERIOD"],
values=data["ACT COST"],
aggfunc=sum
).plot(kind="bar")
###Output
_____no_output_____
###Markdown
Slicing data. Accessing one row: `frame.loc[row_index, :]` will return the row with index `row_index` (all columns). Boolean indexing: another common operation is the use of boolean operators to filter the data. The operators are: `|` for or, `&` for and, and `~` for not. These must be grouped using parentheses, since by default Python will evaluate an expression such as `df.A > 2 & df.B < 3` as `df.A > (2 & df.B) < 3`, while the desired evaluation order is `(df.A > 2) & (df.B < 3)`. **Exercise**What are the values on the fifth row? **Note that the index of the first row is 0.**
###Code
data.loc[4, :]
data.loc[[4, 7], :]
###Output
_____no_output_____
###Markdown
**Exercise**Which practices are on rows with index 0, 133 and 671.
###Code
data.loc[[0, 133, 671], "PRACTICE"]
data.loc[[0, 133, 671], ["PRACTICE"]]
###Output
_____no_output_____
###Markdown
Note the difference between the results of the previous two examples: the first returns a Series, while the second returns a DataFrame.
###Code
small_sample = sample.head(10)
small_sample
###Output
_____no_output_____
###Markdown
**Exercise**What are all the prescriptions coming from practice Y04664 in `small_sample`?
###Code
small_sample
small_sample.loc[[
False, # 0
False, # 1
False, # 2
False, # 3
True, # 4
True, # 5
True, # 6
False, # 7
False, # 8
False, # 9
], :]
small_sample["PRACTICE"] == "Y04664"
small_sample.loc[small_sample["PRACTICE"] == "Y04664", :]
small_sample.loc[
(small_sample["PRACTICE"] == "Y04664") | (small_sample["PRACTICE"] == "N85638")
, :]
###Output
_____no_output_____
###Markdown
**Exercise**Which practices prescribe "Fluclox Sod_Cap 250mg"?
###Code
data.loc[
data["BNF NAME"] == "Fluclox Sod_Cap 250mg",
"PRACTICE"
].unique()
data["BNF NAME"].unique()
data.loc[
data["BNF NAME"] == "Fluclox Sod_Cap 250mg ",
"PRACTICE"
].unique()
data.loc[
data["BNF NAME"].str.contains("Fluclox Sod_Cap 250mg"),
"PRACTICE"
].unique()
###Output
_____no_output_____
###Markdown
**Exercise**What is the most commonly prescribed antibiotic?**Note**: You can write your own for-loop over `data.iterrows()`to answer this question but Pandas has some computations and descriptive statistics functions built-in.```most_common_count = 0for code in unique_bnf_codes: counts = len(data.loc[data['BNF CODE'] == code, 'BNF CODE']) if counts > most_common_count: most_common_count = counts most_common_code = codeprint('Most common BNF code:', most_common_code)print('Frequency of most common drug:', most_common_count)```
###Code
# DataFrame.max([axis, skipna, level, …]) Return the maximum of the values for the requested axis.
data["QUANTITY"].max()
# Top 5 most commonly prescribed antibiotics
data.groupby(['BNF CODE', 'BNF NAME'])['BNF NAME'].count().sort_values(ascending=False).head(5)
# DataFrame.min([axis, skipna, level, …]) Return the minimum of the values for the requested axis.
data["QUANTITY"].min()
# DataFrame.mean([axis, skipna, level, …]) Return the mean of the values for the requested axis.
data["QUANTITY"].mean()
# DataFrame.median([axis, skipna, level, …]) Return the median of the values for the requested axis.
data["QUANTITY"].median()
# DataFrame.describe([percentiles, include, …]) Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.
data["QUANTITY"].describe()
# DataFrame.count([axis, level, numeric_only]) Count non-NA cells for each column or row.
data["QUANTITY"].count()
sample.shape
sample["PRACTICE"].count() # Return 12
data['BNF NAME'].value_counts().head(1)
###Output
_____no_output_____
###Markdown
**Exercise**What is the least prescribed antibiotic?
###Code
data['BNF NAME'].value_counts().tail(1)
###Output
_____no_output_____
###Markdown
Be careful with assumptions here: more than one antibiotic may share the lowest count.
###Code
data['BNF NAME'].value_counts().tail(2)
bnf_name_counts = data['BNF NAME'].value_counts()
bnf_name_counts[bnf_name_counts == 1]
###Output
_____no_output_____
###Markdown
Extracting data from existing values and creating new columns**Exercise**Create a new column only containing the commonly used antibiotic name (and not the full BNF NAME) and a new column containing only the year it was prescribed (and not the PERIOD containing the month as well).
###Code
sample
sample["YEAR"] = pd.Series([
2018, # 0
2018, # 1
2018, # 2
2018, # 3
2018, # 4
2018, # 5
2018, # 6
2018, # 7
2018, # 8
2018, # 9
2018, # 10
2018, # 11
2018, # 12
])
sample
sample['BNF NAME'].str.lower()
###Output
_____no_output_____
###Markdown
We are going to write a little function of our own here, to help with extracting the antibiotic name from the BNF NAME.
###Code
def extract_drug_name(bnf_name):
"""Extract drug name"""
return bnf_name.lower().split()[0].split("_")[0]
extract_drug_name("Phenoxymethylpenicillin_Soln 125mg/5ml")
extract_drug_name("Fluclox Sod_Cap 250mg")
extract_drug_name("Amoxicillin_Oral Susp 125mg/5ml")
###Output
_____no_output_____
###Markdown
Pandas defines a useful function `apply` on DataFrames which enables us to apply a function on every row or column of a DataFrame.
###Code
sample['BNF NAME'].apply(extract_drug_name)
sample["DRUG NAME"] = sample['BNF NAME'].apply(extract_drug_name)
sample
###Output
_____no_output_____
###Markdown
Split-Apply-Combine. We are referring to a process involving one or more of the following steps:- Splitting the data into groups based on some criteria.- Applying a function to each group independently.- Combining the results into a data structure.Out of these, the split step is the most straightforward. In fact, in many situations we may wish to split the data set into groups and do something with those groups. In the apply step, we might wish to do one of the following:- Aggregation: compute a summary statistic (or statistics) for each group.- Transformation: perform some group-specific computations and return a like-indexed object.- Filtration: discard some groups, according to a group-wise computation that evaluates True or False.**Aim**Which GP surgery has prescribed the most and least antibiotics?
###Code
# Splitting
grouped = data.groupby("PRACTICE")
# Apply
grouped.size()
grouped.size().sort_values() # You can use ascending=False
grouped.size().sort_values(ascending=False).head(1)
###Output
_____no_output_____
###Markdown
PlottingThe plot method on a Series and DataFrame is just a simple wrapper around matplotlib.**Exercise**What does the distribution of antibiotics prescribed by GP practices look like?
###Code
# How many prescriptions from each practice?
prescriptions_per_practice = data["PRACTICE"].value_counts()
prescriptions_per_practice.head()
type(prescriptions_per_practice)
prescriptions_per_practice.index
prescriptions_per_practice.plot(kind='bar', legend=True, title ="Number of prescriptions per practice")
grouped.size().value_counts()
###Output
_____no_output_____
###Markdown
Note that the Series is sorted by its values. For the histogram we need to sort it by the index.
###Code
distribution_data = grouped.size().value_counts().sort_index()
distribution_data
distribution_data.plot()
%matplotlib inline
distribution_data.plot()
import matplotlib.pyplot as plt
# More at https://matplotlib.org/api/_as_gen/matplotlib.pyplot.html#functions
plt.title("Histogram")
plt.xlabel("Number of prescriptions")
plt.ylabel("Number of practices prescribing")
distribution_data.plot()
###Output
_____no_output_____ |
ML-Base-MOOC/chapt-6 Polynomial-Regression/03-Overfit and underfit.ipynb | ###Markdown
Overfitting and Underfitting
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0,1, size=100)
plt.scatter(x, y)
###Output
_____no_output_____
###Markdown
1. Using linear regression
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.score(X, y)
y_predict = lin_reg.predict(X)
plt.scatter(x, y)
plt.plot(np.sort(x), y_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
**Use mean squared error to describe the goodness of fit**
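For reference (a standard definition added here as a reminder, not taken from the original notebook), the quantity returned by `mean_squared_error` below is $$\mathrm{MSE} = \frac{1}{m} \sum_{i=1}^{m} \left( y_i - \hat{y}_i \right)^2$$ where $m$ is the number of samples, $y_i$ the observed value and $\hat{y}_i$ the prediction.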
###Code
from sklearn.metrics import mean_squared_error
y_predict = lin_reg.predict(X)
mean_squared_error(y, y_predict)
###Output
_____no_output_____
###Markdown
2. Using polynomial regression
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialRegression(degree):
return Pipeline([
("poly", PolynomialFeatures(degree=degree)),
("std_scaler", StandardScaler()),
("lin_reg", LinearRegression())
])
poly2_reg = PolynomialRegression(degree=2)
poly2_reg.fit(X, y)
y2_predict = poly2_reg.predict(X)
mean_squared_error(y, y2_predict)
###Output
_____no_output_____
###Markdown
- **Clearly a better fit than linear regression**
###Code
plt.scatter(x, y)
plt.plot(np.sort(x), y2_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
- degree = 10
###Code
poly10_reg = PolynomialRegression(degree=10)
poly10_reg.fit(X, y)
y10_predict = poly10_reg.predict(X)
mean_squared_error(y, y10_predict)
plt.scatter(x, y)
plt.plot(np.sort(x), y10_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
- degree = 100
###Code
poly100_reg = PolynomialRegression(degree=100)
poly100_reg.fit(X, y)
y100_predict = poly100_reg.predict(X)
mean_squared_error(y, y100_predict)
plt.scatter(x, y)
plt.plot(np.sort(x), y100_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
- We can see that the larger the degree, the better the fit to the training data- But at this point the model can no longer predict new data well, which is called overfitting 3. The purpose of train-test-split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=333)
###Output
_____no_output_____
###Markdown
Linear regression
###Code
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
y_predict = lin_reg.predict(X_test)
mean_squared_error(y_test, y_predict)
###Output
_____no_output_____
###Markdown
Polynomial regression
###Code
poly2_reg = PolynomialRegression(degree=2)
poly2_reg.fit(X_train, y_train)
y2_predict = poly2_reg.predict(X_test)
mean_squared_error(y_test, y2_predict)
###Output
_____no_output_____
###Markdown
- Clearly the model with degree=2 generalizes better than linear regression
###Code
poly10_reg = PolynomialRegression(degree=10)
poly10_reg.fit(X_train, y_train)
y10_predict = poly10_reg.predict(X_test)
mean_squared_error(y_test, y10_predict)
poly100_reg = PolynomialRegression(degree=100)
poly100_reg.fit(X_train, y_train)
y100_predict = poly100_reg.predict(X_test)
mean_squared_error(y_test, y100_predict)
###Output
_____no_output_____
###Markdown
- Putting the above together, the higher the degree, the better the fit on the training data but the worse the predictions on the test set- In other words, the model's generalization ability gets worse[](https://postimg.cc/nXfCndn4) 4. Learning curves- Curves showing how the fit on the training and test data changes as more and more training data is used
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=10)
X_train.shape
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
train_score = []
test_score = []
for i in range(1, 76):
lin_reg = LinearRegression()
lin_reg.fit(X_train[:i], y_train[:i])
y_train_predict = lin_reg.predict(X_train[:i])
train_score.append(mean_squared_error(y_train[:i], y_train_predict))
y_test_predict = lin_reg.predict(X_test)
test_score.append(mean_squared_error(y_test, y_test_predict))
plt.plot([i for i in range(1, 76)], np.sqrt(train_score), label="train")
plt.plot([i for i in range(1, 76)], np.sqrt(test_score), label="test")
plt.legend()
# Wrap this into a reusable function
def plot_learning_curve(algorithm, X_train, X_test, y_train, y_test):
train_score = []
test_score = []
for i in range(1, len(X_train)+1):
algorithm.fit(X_train[:i], y_train[:i])
y_train_predict = algorithm.predict(X_train[:i])
train_score.append(mean_squared_error(y_train[:i], y_train_predict))
y_test_predict = algorithm.predict(X_test)
test_score.append(mean_squared_error(y_test, y_test_predict))
plt.plot([i for i in range(1, len(X_train)+1)], np.sqrt(train_score), label="train")
plt.plot([i for i in range(1, len(X_train)+1)], np.sqrt(test_score), label="test")
plt.axis([0, len(X_train)+1, 0, 4])
plt.legend()
plot_learning_curve(LinearRegression(), X_train, X_test, y_train, y_test)
# Polynomial regression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialRegression(degree):
return Pipeline([
("poly", PolynomialFeatures(degree=degree)),
("std_scaler", StandardScaler()),
("lin_reg", LinearRegression())
])
poly2_reg = PolynomialRegression(degree=2)
# Plot the learning curve
plot_learning_curve(poly2_reg, X_train, X_test, y_train, y_test)
poly2_reg = PolynomialRegression(degree=8)
# Plot the learning curve
plot_learning_curve(poly2_reg, X_train, X_test, y_train, y_test)
###Output
_____no_output_____
###Markdown
Overfitting and Underfitting
###Code
import numpy as np
import matplotlib.pyplot as plt
x = np.random.uniform(-3, 3, size=100)
X = x.reshape(-1, 1)
y = 0.5 * x**2 + x + 2 + np.random.normal(0,1, size=100)
plt.scatter(x, y)
###Output
_____no_output_____
###Markdown
1. Using linear regression
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.score(X, y)
y_predict = lin_reg.predict(X)
plt.scatter(x, y)
plt.plot(np.sort(x), y_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
**Use mean squared error to describe the goodness of fit**
###Code
from sklearn.metrics import mean_squared_error
y_predict = lin_reg.predict(X)
mean_squared_error(y, y_predict)
###Output
_____no_output_____
###Markdown
2. Using polynomial regression
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialRegression(degree):
return Pipeline([
("poly", PolynomialFeatures(degree=degree)),
("std_scaler", StandardScaler()),
("lin_reg", LinearRegression())
])
###Output
_____no_output_____
###Markdown
**When degree = 2**
###Code
poly2_reg = PolynomialRegression(degree=2)
poly2_reg.fit(X, y)
y2_predict = poly2_reg.predict(X)
mean_squared_error(y, y2_predict)
###Output
_____no_output_____
###Markdown
**Clearly a better fit than linear regression here (the mean squared error is smaller)**
###Code
plt.scatter(x, y)
plt.plot(np.sort(x), y2_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
**When degree = 10**
###Code
poly10_reg = PolynomialRegression(degree=10)
poly10_reg.fit(X, y)
y10_predict = poly10_reg.predict(X)
mean_squared_error(y, y10_predict)
plt.scatter(x, y)
plt.plot(np.sort(x), y10_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
**When degree = 100**
###Code
poly100_reg = PolynomialRegression(degree=100)
poly100_reg.fit(X, y)
y100_predict = poly100_reg.predict(X)
mean_squared_error(y, y100_predict)
plt.scatter(x, y)
plt.plot(np.sort(x), y100_predict[np.argsort(x)], color='r')
###Output
_____no_output_____
###Markdown
- We can see that the larger the degree, the **higher and higher** the degree of fit- But at this point the model can no longer predict new data well, which is called **overfitting** 3. The purpose of train-test-split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=333)
###Output
_____no_output_____
###Markdown
Linear regression
###Code
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
y_predict = lin_reg.predict(X_test)
mean_squared = mean_squared_error(y_test, y_predict)
score = lin_reg.score(X_test, y_test)
print("mean_squared: ", mean_squared)
print("score: ", score)
###Output
mean_squared: 3.0138859332499557
score: 0.526634907968638
###Markdown
Polynomial regression
###Code
poly2_reg = PolynomialRegression(degree=2)
poly2_reg.fit(X_train, y_train)
y2_predict = poly2_reg.predict(X_test)
mean_squared = mean_squared_error(y_test, y2_predict)
score = poly2_reg.score(X_test, y_test)
print("mean_squared: ", mean_squared)
###Output
mean_squared: 1.4330745991544904
###Markdown
Clearly the model with degree=2 generalizes better than linear regression **(it predicts the data better)** **When degree = 10**
###Code
poly10_reg = PolynomialRegression(degree=10)
poly10_reg.fit(X_train, y_train)
y10_predict = poly10_reg.predict(X_test)
mean_squared = mean_squared_error(y_test, y10_predict)
score = poly10_reg.score(X_test, y_test)
print("mean_squared: ", mean_squared)
poly100_reg = PolynomialRegression(degree=100)
poly100_reg.fit(X_train, y_train)
y100_predict = poly100_reg.predict(X_test)
mean_squared = mean_squared_error(y_test, y100_predict)
poly100_reg.score(X_test, y_test)
print("mean_squared: ", mean_squared)
###Output
mean_squared: 228258223189753.47
###Markdown
- Putting the above together, the higher the degree, the better the fit on the training data but the worse the predictions on the test set (the mean squared error keeps growing)- In other words, the model's generalization ability gets worse[](https://imgchr.com/i/8lmR2j) 4. Learning curves- Curves showing how the fit on the training and test data changes as more and more training data is used
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=10)
X_train.shape
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
train_score = []
test_score = []
for i in range(1, 76):
lin_reg = LinearRegression()
lin_reg.fit(X_train[:i], y_train[:i])
y_train_predict = lin_reg.predict(X_train[:i])
train_score.append(mean_squared_error(y_train[:i], y_train_predict))
y_test_predict = lin_reg.predict(X_test)
test_score.append(mean_squared_error(y_test, y_test_predict))
plt.plot([i for i in range(1, 76)], np.sqrt(train_score), label="train")
plt.plot([i for i in range(1, 76)], np.sqrt(test_score), label="test")
plt.legend()
# Wrap this into a reusable function
def plot_learning_curve(algorithm, X_train, X_test, y_train, y_test):
train_score = []
test_score = []
for i in range(1, len(X_train)+1):
algorithm.fit(X_train[:i], y_train[:i])
y_train_predict = algorithm.predict(X_train[:i])
train_score.append(mean_squared_error(y_train[:i], y_train_predict))
y_test_predict = algorithm.predict(X_test)
test_score.append(mean_squared_error(y_test, y_test_predict))
plt.plot([i for i in range(1, len(X_train)+1)], np.sqrt(train_score), label="train")
plt.plot([i for i in range(1, len(X_train)+1)], np.sqrt(test_score), label="test")
plt.axis([0, len(X_train)+1, 0, 4])
plt.legend()
plot_learning_curve(LinearRegression(), X_train, X_test, y_train, y_test)
# Polynomial regression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import StandardScaler
def PolynomialRegression(degree):
return Pipeline([
("poly", PolynomialFeatures(degree=degree)),
("std_scaler", StandardScaler()),
("lin_reg", LinearRegression())
])
poly2_reg = PolynomialRegression(degree=2)
# Plot the learning curve
plot_learning_curve(poly2_reg, X_train, X_test, y_train, y_test)
poly2_reg = PolynomialRegression(degree=8)
# Plot the learning curve
plot_learning_curve(poly2_reg, X_train, X_test, y_train, y_test)
###Output
_____no_output_____ |
WHI/indicators/NASA_Artic_sea_ice.ipynb | ###Markdown
indicators - NASA Artic sea ice AVERAGE SEPTEMBER MINIMUM EXTENTData source: Satellite observations. Credit: NSIDC/NASA**What is Arctic sea ice extent?**Sea ice extent is a measure of the surface area of the ocean covered by sea ice. Increases in air and ocean temperatures decrease sea ice extent; in turn, the resulting darker ocean surface absorbs more solar radiation and increases Arctic warming.Date Range: 1979 - 2020. Get data from websitehttps://climate.nasa.gov/ => click on Artic Sea Ice
###Code
import pandas
df = pandas.read_csv("https://climate.nasa.gov/system/internal_resources/details/original/2264_N_09_extent_v3.0.csv")
df.head(5)#read the first 5 lines
###Output
_____no_output_____
###Markdown
Create simple graph
###Code
import plotly.express as px
fig = px.line(df, x="year", y=" extent")
fig.show()
###Output
_____no_output_____
###Markdown
World Health Indicator (WHI)Using a scale of 0 - 10 (where 0 is the worst and 10 is the best) $$\begin{equation*}WHI = 10 \times (\frac{Current}{8})\end{equation*}$$The highest recorded value of the Arctic sea ice extent was 7.67 million square km in 1980. This value has been decreasing since. That is why our best-case scenario is when the ice level is at its highest (8) and our worst case is when it is lowest (0).
###Code
current = df[" extent"].iloc[-1]
WHI = (10*(current/8))
print(f"World Health Indicator (Raw values): {round(WHI, 2)}")
WHI_data = pandas.DataFrame.from_dict({"DATE_PROCESSED": pandas.to_datetime("today").date(), "INDICATOR": "Arctic Sea Ice level (million square km)", "VALUE": [round(WHI, 2)]})
WHI_data
import naas
path = '../output/Arctic_Sea_Ice_whi.csv'
WHI_data.to_csv(path)
naas.asset.add(path)
###Output
👌 Well done! Your Assets has been sent to production.
###Markdown
indicators - NASA Artic sea ice **Tags:** indicators opendata worldsituationroom AVERAGE SEPTEMBER MINIMUM EXTENTData source: Satellite observations. Credit: NSIDC/NASA**What is Arctic sea ice extent?**Sea ice extent is a measure of the surface area of the ocean covered by sea ice. Increases in air and ocean temperatures decrease sea ice extent; in turn, the resulting darker ocean surface absorbs more solar radiation and increases Arctic warming.Date Range: 1979 - 2020. Input Import libraries
###Code
import pandas
import plotly.express as px
import naas
###Output
_____no_output_____
###Markdown
Model Get data from websitehttps://climate.nasa.gov/ => click on Artic Sea Ice
###Code
df = pandas.read_csv("https://climate.nasa.gov/system/internal_resources/details/original/2264_N_09_extent_v3.0.csv")
df.head(5)#read the first 5 lines
###Output
_____no_output_____
###Markdown
Create simple graph
###Code
fig = px.line(df, x="year", y=" extent")
fig.show()
###Output
_____no_output_____
###Markdown
World Health Indicator (WHI)Using a scale of 0 - 10 (where 0 is the worst and 10 is the best) $$\begin{equation*}WHI = 10 \times (\frac{Current}{8})\end{equation*}$$The highest recorded value of the Arctic sea ice extent was 7.67 million square km in 1980. This value has been decreasing since. That is why our best-case scenario is when the ice level is at its highest (8) and our worst case is when it is lowest (0).
###Code
current = df[" extent"].iloc[-1]
WHI = (10*(current/8))
print(f"World Health Indicator (Raw values): {round(WHI, 2)}")
WHI_data = pandas.DataFrame.from_dict({"DATE_PROCESSED": pandas.to_datetime("today").date(), "INDICATOR": "Arctic Sea Ice level (million square km)", "VALUE": [round(WHI, 2)]})
WHI_data
path = '../output/Arctic_Sea_Ice_whi.csv'
WHI_data.to_csv(path)
###Output
_____no_output_____
###Markdown
Output Add the asset
###Code
naas.asset.add(path)
###Output
_____no_output_____
###Markdown
indicators - NASA Artic sea ice **Tags:** indicators opendata worldsituationroom AVERAGE SEPTEMBER MINIMUM EXTENTData source: Satellite observations. Credit: NSIDC/NASA**What is Arctic sea ice extent?**Sea ice extent is a measure of the surface area of the ocean covered by sea ice. Increases in air and ocean temperatures decrease sea ice extent; in turn, the resulting darker ocean surface absorbs more solar radiation and increases Arctic warming.Date Range: 1979 - 2020. Input Import libraries
###Code
import pandas
import plotly.express as px
import naas
###Output
_____no_output_____
###Markdown
Model Get data from websitehttps://climate.nasa.gov/ => click on Artic Sea Ice
###Code
df = pandas.read_csv("https://climate.nasa.gov/system/internal_resources/details/original/2264_N_09_extent_v3.0.csv")
df.head(5)#read the first 5 lines
###Output
_____no_output_____
###Markdown
Create simple graph
###Code
fig = px.line(df, x="year", y=" extent")
fig.show()
###Output
_____no_output_____
###Markdown
World Health Indicator (WHI)Using a scale of 0 - 10 (where 0 is the worst and 10 is the best) $$\begin{equation*}WHI = 10 \times (\frac{Current}{8})\end{equation*}$$The highest recorded value of the Arctic sea ice extent was 7.67 million square km in 1980. This value has been decreasing since. That is why our best-case scenario is when the ice level is at its highest (8) and our worst case is when it is lowest (0).
###Code
current = df[" extent"].iloc[-1]
WHI = (10*(current/8))
print(f"World Health Indicator (Raw values): {round(WHI, 2)}")
WHI_data = pandas.DataFrame.from_dict({"DATE_PROCESSED": pandas.to_datetime("today").date(), "INDICATOR": "Arctic Sea Ice level (million square km)", "VALUE": [round(WHI, 2)]})
WHI_data
path = '../output/Arctic_Sea_Ice_whi.csv'
WHI_data.to_csv(path)
###Output
_____no_output_____
###Markdown
Output Add the asset
###Code
naas.asset.add(path)
###Output
_____no_output_____
###Markdown
indicators - NASA Artic sea ice AVERAGE SEPTEMBER MINIMUM EXTENTData source: Satellite observations. Credit: NSIDC/NASA**What is Arctic sea ice extent?**Sea ice extent is a measure of the surface area of the ocean covered by sea ice. Increases in air and ocean temperatures decrease sea ice extent; in turn, the resulting darker ocean surface absorbs more solar radiation and increases Arctic warming.Date Range: 1979 - 2020. Get data from websitehttps://climate.nasa.gov/ => click on Artic Sea Ice
###Code
import pandas
df = pandas.read_csv("https://climate.nasa.gov/system/internal_resources/details/original/2264_N_09_extent_v3.0.csv")
df.head(5)#read the first 5 lines
###Output
_____no_output_____
###Markdown
Create simple graph
###Code
import plotly.express as px
fig = px.line(df, x="year", y=" extent")
fig.show()
###Output
_____no_output_____
###Markdown
World Health Indicator (WHI)Using a scale of 0 - 10 (where 0 is the worst and 10 is the best) $$\begin{equation*}WHI = 10 \times (\frac{Current}{8})\end{equation*}$$The highest recorded value of the Arctic sea ice extent was 7.67 million square km in 1980. This value has been decreasing since. That is why our best-case scenario is when the ice level is at its highest (8) and our worst case is when it is lowest (0).
###Code
current = df[" extent"].iloc[-1]
WHI = (10*(current/8))
print(f"World Health Indicator (Raw values): {round(WHI, 2)}")
WHI_data = pandas.DataFrame.from_dict({"DATE_PROCESSED": pandas.to_datetime("today").date(), "INDICATOR": "Arctic Sea Ice level (million square km)", "VALUE": [round(WHI, 2)]})
WHI_data
import naas
path = '../output/Arctic_Sea_Ice_whi.csv'
WHI_data.to_csv(path)
naas.asset.add(path)
###Output
_____no_output_____ |
notebooks/Exploring Magic Functions.ipynb | ###Markdown
Author: Blesson John Replica of Abhishek's notebook from pluralsight course Magic Function: matplotlib
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(range(100))
###Output
_____no_output_____
###Markdown
Time magic Function
###Code
%time x = range(10000)
%%timeit x = range(10000)
max(x)
%%writefile test.txt
This is the content that is written into this file from Jupyter notebook
notebook
%ls
%%html
<i>image in jupyter notebook</i>
<img src="http://imgs.xkcd.com/comics/correlation.png"></img>
###Output
_____no_output_____
###Markdown
Latex function
###Code
%%latex
\begin{align}
Gradient: \nabla J = -2H^T (Y-HW)
\end{align}
###Output
_____no_output_____
###Markdown
load_ext
###Code
!pip install ipython-sql
%load_ext sql
%sql sqlite://
%%sql
create table forum(name,forum_name,profession);
insert into forum values('Blesson','Data Science','CSA');
insert into forum values('Joshua','kids zone','student');
%sql select * from forum;
###Output
* sqlite://
Done.
###Markdown
magic function: lsmagic
###Code
%lsmagic
###Output
_____no_output_____ |
useful_code/Tests.ipynb | ###Markdown
Hop-pub test _24th August 2021_ Sebastian Lara-Torres, Melih Kara **To do:** - Implement Heartbeat messages in a separate, continuously running script? - Should Alert messages be published this way?- Accept a Datetime object and convert it into str- Accept other input types?- We do not have different topics for different tiers; the content should be modified depending on the tier, right? It might make sense to refactor according to this.
###Code
import hop_pub_v02 as hop_pub
# by default, it still publishes something
# can be randomized
publisher = hop_pub.Publish_Observation(welcome=True)
###Output
### Publish SNEWS Observation Messages ###
Your Python version:3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0]
Current hop-client version:0.4.0
snews version:0.0.1
Publishing to kafka.scimma.org
Observation Topic: kafka://kafka.scimma.org/snews.alert-test
Submitting messages to the following Tiers;
Significance_Tier & Coincidence_Tier & Timing_Tier
###Markdown
The publishing options can be changed.
###Code
publisher.publish_to['Timing_Tier'] = False
print(publisher.publish_to)
# default dictionary
publisher.msg_dict
###Output
_____no_output_____
###Markdown
Message as a dictionary
###Code
message = {'machine_time':'24/08/2021 15:49:55',
           'status': 'ON'} # To Do: accept a datetime object and convert it into str
publisher = hop_pub.Publish_Observation(msg=message)
# Only the value that is changed is overwritten
publisher.msg_dict
publisher.publish_to_tiers()
###Output
Publishing OBS message to Significance_Tier:
detector_id :0
machine_time :24/08/2021 15:49:55
neutrino_time :01/01/01 01:01:01
status :ON
p_value :0
Publishing OBS message to Coincidence_Tier:
detector_id :0
machine_time :24/08/2021 15:49:55
neutrino_time :01/01/01 01:01:01
status :ON
|
Problem-1/Checkpoint-1.ipynb | ###Markdown
Checkpoint 1Checkpoint1: Use Pandas to view the dataset. a. Display first 10 records and last 10 records b. Compute the data distribution across each of these attributes and show them with a bar graph c. Report: Is the data distribution balanced or skewed? If skewed, where do you see the data imbalance? Can you use data augmentation to offset the imbalance if any?
###Code
import pandas as pd
from os.path import join
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data_path = join("..", "..", "Dataset-1", "selfie_dataset.txt")
headers = [
"image_name", "score", "partial_faces" ,"is_female" ,"baby" ,"child" ,"teenager" ,"youth" ,"middle_age" ,"senior" ,"white" ,"black" ,"asian" ,"oval_face" ,"round_face" ,"heart_face" ,"smiling" ,"mouth_open" ,"frowning" ,"wearing_glasses" ,"wearing_sunglasses" ,"wearing_lipstick" ,"tongue_out" ,"duck_face" ,"black_hair" ,"blond_hair" ,"brown_hair" ,"red_hair" ,"curly_hair" ,"straight_hair" ,"braid_hair" ,"showing_cellphone" ,"using_earphone" ,"using_mirror", "braces" ,"wearing_hat" ,"harsh_lighting", "dim_lighting"
]
len(headers)
df_image_details = pd.read_csv(data_path, names=headers, delimiter=" ")
print("Len of dataset :", len(df_image_details))
df_image_details.head(10)
df_image_details.tail(10)
for col in df_image_details.columns[3:]:
plt.bar(sorted(df_image_details[col].unique()), df_image_details[col].value_counts().values)
plt.title('Column : {}'.format(col))
plt.show()
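# Part (c): one way to quantify the imbalance (a sketch, not part of the original
# checkpoint code). Assuming the attribute columns mark presence with positive
# values, the fraction of positive rows per attribute shows how skewed each one is:
positive_rate = (df_image_details[df_image_details.columns[3:]] > 0).mean()
print(positive_rate.sort_values())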
###Output
_____no_output_____ |
src/11_drl_sarsa/11_2_temp_diff_frozen_lake.ipynb | ###Markdown
Part 1: TD Control: Sarsa (update_Q_sarsa)In this section, you will write your own implementation of the Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the **estimated action value** corresponding to state `s` and action `a`.Please complete the function in the code cell below.
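For reference, the update implemented by `update_Q_sarsa` below is the standard Sarsa rule (stated here as a reminder, not quoted from the original text): $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \right)$$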
###Code
def update_Q_sarsa(alpha, gamma, Q, state, action, reward, next_state=None, next_action=None, plot=None):
"""Returns updated Q-value for the most recent experience."""
current = Q[state][action] # estimate in Q-table (for current state, action pair)
# get value of state, action pair at next time step
Qsa_next = Q[next_state][next_action] if next_state is not None else 0
target = reward + (gamma * Qsa_next) # construct TD target, gamma=discount
new_value = current + (alpha * (target - current)) # get updated value, alpha analog=lr
if plot:
print("current:", current, "Qsa_next:", Qsa_next, "target:", target, "new_value:", new_value)
return new_value
def epsilon_greedy(Q, state, nA, eps):
"""Selects epsilon-greedy action for supplied state.
Params
======
Q (dictionary): action-value function
state (int): current state
nA (int): number actions in the environment
eps (float): epsilon
"""
if random.random() > eps: # select greedy action with probability epsilon
return np.argmax(Q[state])
else: # otherwise, select an action randomly
return random.choice(np.arange(env.action_space.n))
def sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=1000):
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
eps_decay = .99999
eps = 1.
eps_min = .05
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
for i_episode in range(1, num_episodes+1):
# monitor progress
plot = False
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
print()
plot = True
score = 0 # initialize score
state = env.reset() # start episode
# eps = 1.0 / i_episode
eps = max(eps*eps_decay, eps_min) # set the value of epsilon
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
while True:
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
if not done:
next_action = epsilon_greedy(Q, next_state, nA, eps) # epsilon-greedy action
Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \
state, action, reward, next_state, next_action, plot)
state = next_state # S <- S'
action = next_action # A <- A'
if done:
Q[state][action] = update_Q_sarsa(alpha, gamma, Q, \
state, action, reward)
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
###Output
_____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly!- Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function.- However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)
helper.print_field_positions()
print()
helper.print_Q(Q_sarsa)
print()
# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(16)]).reshape(4,4)
#check_test.run_check('td_control_check', policy_sarsa)
# -1 marks holes or the goal: the agent never acts from there, so there is no state/action pair for it
print("\nEstimated Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3, N/A (Final state) = -1):")
print(policy_sarsa)
# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(16)])
plot_values(V_sarsa)
helper.print_actions()
print()
print("Policy:")
policy_sarsa
###Output
Actions:
[0] Left
[1] Down
[2] Right
[3] Up
Policy:
###Markdown
Part 2: TD Control: Q-learning (update_Q_sarsamax)In this section, you will write your own implementation of the Q-learning control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._)
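For reference, the rule that `update_Q_sarsamax` below implements is the standard Q-learning (Sarsamax) update (a reminder, not quoted from the original text): $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \right)$$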
###Code
def update_Q_sarsamax(alpha, gamma, Q, state, action, reward, next_state=None, plot=None):
"""Returns updated Q-value for the most recent experience."""
current = Q[state][action] # estimate in Q-table (for current state, action pair)
Qsa_next = np.max(Q[next_state]) if next_state is not None else 0 # value of next state
target = reward + (gamma * Qsa_next) # construct TD target
new_value = current + (alpha * (target - current)) # get updated value
if plot:
print("current:", current, "Qsa_next:", Qsa_next, "target:", target, "new_value:", new_value)
return new_value
def q_learning(env, num_episodes, alpha, gamma=0.9999, plot_every=1000):
"""Q-Learning - TD Control
Params
======
num_episodes (int): number of episodes to run the algorithm
alpha (float): learning rate
gamma (float): discount factor
plot_every (int): number of episodes to use when calculating average score
"""
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
eps_decay = .99999
eps = 1.
eps_min = .05
for i_episode in range(1, num_episodes+1):
# monitor progress
plot = False
if i_episode % plot_every == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
print()
plot = True
score = 0 # initialize score
state = env.reset() # start episode
        # eps = 1.0 / i_episode # epsilon decays too quickly here!
eps = max(eps*eps_decay, eps_min) # set the value of epsilon
while True:
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
Q[state][action] = update_Q_sarsamax(alpha, gamma, Q, \
state, action, reward, next_state, plot=False)
state = next_state # S <- S'
if done:
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
###Output
_____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
###Code
Q_sarsamax = q_learning(env, 50000, .01)
# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(16)]).reshape((4,4))
# check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A (Final state) = -1):")
print(policy_sarsamax)
# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(16)])
helper.print_actions()
print()
print("Policy:")
policy_sarsamax
###Output
Actions:
[0] Left
[1] Down
[2] Right
[3] Up
Policy:
###Markdown
Part 3: TD Control: Expected Sarsa (update_Q_expsarsa)In this section, you will write your own implementation of the Expected Sarsa control algorithm.Your algorithm has four arguments:- `env`: This is an instance of an OpenAI Gym environment.- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.- `alpha`: This is the step-size parameter for the update step.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as output:- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.Please complete the function in the code cell below.(_Feel free to define additional functions to help you to organize your code._)
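For reference, the rule that `update_Q_expsarsa` below implements is the standard Expected Sarsa update (a reminder, not quoted from the original text): $$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1})\, Q(S_{t+1}, a) - Q(S_t, A_t) \right)$$ where $\pi$ is the (here $\varepsilon$-greedy) policy evaluated at the next state.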
###Code
def update_Q_expsarsa(alpha, gamma, nA, eps, Q, state, action, reward, next_state=None):
"""Returns updated Q-value for the most recent experience."""
current = Q[state][action] # estimate in Q-table (for current state, action pair)
policy_s = np.ones(nA) * eps / nA # current policy (for next state S')
policy_s[np.argmax(Q[next_state])] = 1 - eps + (eps / nA) # greedy action
Qsa_next = np.dot(Q[next_state], policy_s) # get value of state at next time step
target = reward + (gamma * Qsa_next) # construct target
new_value = current + (alpha * (target - current)) # get updated value
return new_value
def expected_sarsa(env, num_episodes, alpha, gamma=1.0, plot_every=100):
"""Expected SARSA - TD Control
Params
======
num_episodes (int): number of episodes to run the algorithm
alpha (float): step-size parameters for the update step
gamma (float): discount factor
plot_every (int): number of episodes to use when calculating average score
"""
nA = env.action_space.n # number of actions
Q = defaultdict(lambda: np.zeros(nA)) # initialize empty dictionary of arrays
# monitor performance
tmp_scores = deque(maxlen=plot_every) # deque for keeping track of scores
avg_scores = deque(maxlen=num_episodes) # average scores over every plot_every episodes
eps_decay = .99999
eps = 1.
eps_min = .05
for i_episode in range(1, num_episodes+1):
# monitor progress
if i_episode % 100 == 0:
print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
sys.stdout.flush()
score = 0 # initialize score
state = env.reset() # start episode
eps = max(eps*eps_decay, eps_min) # set the value of epsilon
while True:
action = epsilon_greedy(Q, state, nA, eps) # epsilon-greedy action selection
next_state, reward, done, info = env.step(action) # take action A, observe R, S'
score += reward # add reward to agent's score
# update Q
Q[state][action] = update_Q_expsarsa(alpha, gamma, nA, eps, Q, \
state, action, reward, next_state)
state = next_state # S <- S'
if done:
tmp_scores.append(score) # append score
break
if (i_episode % plot_every == 0):
avg_scores.append(np.mean(tmp_scores))
# plot performance
plt.plot(np.linspace(0,num_episodes,len(avg_scores),endpoint=False), np.asarray(avg_scores))
plt.xlabel('Episode Number')
plt.ylabel('Average Reward (Over Next %d Episodes)' % plot_every)
plt.show()
# print best 100-episode performance
print(('Best Average Reward over %d Episodes: ' % plot_every), np.max(avg_scores))
return Q
###Output
_____no_output_____
###Markdown
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function.
###Code
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 50000, 1)
# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(16)]).reshape(4,4)
# check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A (Terminal State)= -1):")
print(policy_expsarsa)
# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(16)])
helper.print_actions()
print()
print("Policy:")
policy_expsarsa
###Output
Actions:
[0] Left
[1] Down
[2] Right
[3] Up
Policy:
###Markdown
Assignment 11 - Temporal-Difference Methods with Frozen Lake18.01.2022, Thomas ItenIn this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.**Content**0. Explore Frozen Lake1. TD Control: Sarsa (update_Q_sarsa)2. TD Control: Q-learning (update_Q_sarsamax, i.e. Q-Learning)3. TD Control: Expected Sarsa (update_Q_expsarsa)**References**- https://colab.research.google.com/drive/1dloqQlR77yAIXEWgRGWSoSJXflMqCUuN?usp=sharing Part 0: Explore Frozen Lake Imports and Plot Helpers
###Code
import sys
import gym
import numpy as np
import random
import math
from collections import defaultdict, deque
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("white")
# from plot_utils import plot_values
def plot_values(V):
# reshape the state-value function
V = np.reshape(V, (4,4))
# plot the state-value function
fig = plt.figure(figsize=(15,5))
ax = fig.add_subplot(111)
im = ax.imshow(V, cmap='cool')
for (j,i),label in np.ndenumerate(V):
ax.text(i, j, np.round(label,3), ha='center', va='center', fontsize=14)
plt.tick_params(bottom='off', left='off', labelbottom='off', labelleft='off')
plt.title('State-Value Function')
plt.show()
###Output
_____no_output_____
###Markdown
Frozen Lake Helper
###Code
class FrozenLakeHelper():
"""Some helper methods used throughout this notebook."""
def render(self, env, display_mode="brackets", print_result=True, legend=False):
"""
IntelliJ notebooks to not render the color of the current position correct.
Details see: https://youtrack.jetbrains.com/issue/PY-32191
Therfore we use this customized render methode with two simple display modes.
:param env: The current environment to render it's fields.
:param display_mode: display current position with "brackets" or in "lowercase"
:param print_result: print the last action and result
:param legend: print the legend
:return: lastaction as text and fields with marked current position
"""
# init data
row, col = env.s // env.ncol, env.s % env.ncol
desc = env.desc.tolist()
desc = [[c.decode("utf-8") for c in line] for line in desc]
actions = ["Left", "Down", "Right", "Up"]
action = "Init" if env.lastaction is None else actions[env.lastaction]
# format display mode
indicator = None
if display_mode == "brackets":
desc[row][col] = "[{}]".format(desc[row][col])
desc = [[ (" {} ".format(c) if len(c) == 1 else c) for c in line ] for line in desc]
indicator = "[]"
elif display_mode == "lowercase":
desc[row][col] = (desc[row][col]).lower()
indicator = "lowercase"
# print result
if print_result:
if legend:
print("Last action:", action)
else:
print(action + ":")
for line in desc:
for pos in line:
print(pos, end="")
print("")
if legend:
print("Legend: S=Start, F=Frozen (safe), H=Hole, G=Goal, " + indicator + "=Current Position")
print("")
# return result
return action, desc
def print_field_positions(self):
print("Field positons:")
print("[ 0] [ 1] [ 2] [ 3]")
print("[ 4] [ 5] [ 6] [ 7]")
print("[ 8] [09] [10] [11]")
print("[12] [13] [14] [15]")
def print_actions(self):
print("Actions:")
print("[0] Left")
print("[1] Down")
print("[2] Right")
print("[3] Up")
def print_Q(self, Q):
print("Field: Left Down Right Up")
for field in Q:
print(f"{field : >5}", end="")
print(":", Q[field])
# Create helper instance
helper = FrozenLakeHelper()
###Output
_____no_output_____
###Markdown
Frozen Lake environment
###Code
env = gym.make('FrozenLake-v1', is_slippery=False)
print("Action space:")
print(env.action_space)
print("")
helper.print_actions()
print("")
print("Observation space:")
print(env.observation_space)
print("")
helper.print_field_positions()
###Output
Action space:
Discrete(4)
Actions:
[0] Left
[1] Down
[2] Right
[3] Up
Observation space:
Discrete(16)
Field positons:
[ 0] [ 1] [ 2] [ 3]
[ 4] [ 5] [ 6] [ 7]
[ 8] [09] [10] [11]
[12] [13] [14] [15]
###Markdown
Reset and initial state
###Code
env.reset() # reset the environment the set agent to start state
helper.render(env, legend=True)
print()
###Output
Last action: Init
[S] F F F
F H F H
F F F H
H F F G
Legend: S=Start, F=Frozen (safe), H=Hole, G=Goal, []=Current Position
|
Metdat-science/Pertemuan 6 - 23 Februari 2022/Tugas_672019321.ipynb | ###Markdown
**Elsha Yuandini Dewasasmita - 672019321** **Question No 1.**Answer no 1: I chose a Line Graph / Line Chart because case 1 describes bitcoin prices from 2018 to 2019 recorded every **WEEK**, so there is a lot of data in question 1. From the shape of the problem it is already clear that there are 2 different conditions: the bitcoin price in 2018 and the bitcoin price in 2019. Question 1 then asks **which year gave better returns for bitcoin holders?**. A Line Graph suits this kind of case because of those 2 different conditions, so I split the single dataset named **prices** with 104 values into 2 lists named **prices** and **prices_2019**, each containing 52 values (since 1 year = 52 weeks), which I then plotted with 2 calls: **plt.plot(minggu, prices, marker='o')** to draw the 2018 bitcoin price as a green line and ***plt.plot(minggu, prices_2019, linestyle='--', marker='o')*** as an orange line for the 2019 bitcoin price. The conclusion is that **2019 is the year that gave better returns for bitcoin holders, because the orange line mostly rises, which means the bitcoin price in 2019 was predominantly increasing**
###Code
import matplotlib.pyplot as plt
import numpy as np
prices = [14292.2, 12858.9, 11467.5, 9241.1, 8559.6, 11073.5, 9704.3, 11402.3, 8762.0, 7874.9, 8547.4,
6938.2, 6905.7, 8004.4, 8923.1, 9352.4, 9853.5, 8459.5, 8245.1, 7361.3, 7646.6,
7515.8, 6505.8, 6167.3, 6398.9, 6765.5, 6254.8, 7408.7, 8234.1, 7014.3, 6231.6,
6379.1, 6734.8, 7189.6, 6184.3, 6519.0, 6729.6, 6603.9, 6596.3, 6321.7, 6572.2,
6494.2, 6386.2, 6427.1, 5621.8, 3920.4, 4196.2, 3430.4, 3228.7, 3964.4, 3706.8, 3785.4]
prices_2019 = [3597.2, 3677.8, 3570.9, 3502.5, 3661.4, 3616.8, 4120.4, 3823.1,
3944.3, 4006.4, 4002.5, 4111.8, 5046.2, 5051.8,
5290.2, 5265.9, 5830.9, 7190.3, 7262.6, 8027.4,
8545.7, 7901.4, 8812.5, 10721.7, 11906.5, 11268.0,
11364.9, 10826.7, 9492.1, 10815.7, 11314.5, 10218.1,
10131.0, 9594.4, 10461.1, 10337.3, 9993.0, 8208.5,
8127.3, 8304.4, 7957.3, 9230.6, 9300.6, 8804.5,
8497.3, 7324.1, 7546.6, 7510.9, 7080.8, 7156.2,
7321.5, 7376.8]
minggu = list(np.arange(1,53)) # list of week numbers 1 - 52 (53-1)
fig = plt.figure(figsize=(18,9))
ax= fig.add_subplot()
plt.plot(minggu, prices, marker='o')
plt.plot(minggu, prices_2019, linestyle='--', marker='o')
plt.title ('Harga Bitcoin dari tahun 2018 dan 2019')
plt.ylabel('Harga Bitcoin')
plt.xlabel('Minggu')
ax.plot(minggu,prices)
plt.show
###Output
_____no_output_____
###Markdown
**Question No 2.**Answer no 2: I chose a Pie Chart because the case as written asks what the percentage chance is of picking a particular sweet in one random draw. A Pie Chart is the better fit for visualising this case because it only involves a small amount of data. **The number of Kopiko sweets is 39 out of a total of 260 sweets, so the chance of drawing a Kopiko is 15%; in other words, a Kopiko can be drawn in a single attempt with probability 3/20 (15/100), or 0.15 (39/260)**
###Code
import matplotlib.pyplot as plt
nama_permen = ['Mentos', 'Kopiko', 'Golia', 'Yupie', 'Fisherman']
Jumlah_permen = [52, 39, 78, 13, 78]
warna = ('#D2691E', '#FFB6C1', '#00FFFF', '#FFFF00', '#ADFF2F')
highlight = (0,0.1,0,0,0)
plt.title ('Peluang ambil permen dalam sekali coba')
plt.pie(Jumlah_permen, labels = nama_permen, autopct = '%1.2f%%', colors = warna, explode = highlight, shadow = True ) # show 2 digits after the decimal point
plt.show
###Output
_____no_output_____
###Markdown
**Question No 3.**Answer no 3: I used a **Bar Chart** because case 3 describes a list of *dessert* menu items whose weekly sales frequency is recorded. The problem also states that Kafe Biru wants to *remove* the 3 least popular items from the menu, so a Bar Chart is better suited to visualising this case, since it works well for cases with on the order of tens of data points (10-20). **Name the three desserts that should be removed.**The bar chart shows that the 3 most popular items are **ice cream, chocolate-cheese cake, and doughnuts**; in other words, students prefer to buy these three. The 3 least popular desserts are **vanilla pudding, pastel, and kue wajik**, so the owner of Kafe Biru should remove these 3 items from the menu.
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
datapenjualan_makananpenutup = ('Donat', 'Pastel', 'Kue Coklat', 'Ice Cream', 'Puding Vanila',
'Brownies', 'Puding Strawberry', 'Puding Coklat','Ice Cream Nutela',
'Kue Coklat-Keju', 'Kue Wajik', 'Kue Sus', 'Mochi')
terjual = (14, 5, 12, 19, 6, 8, 12, 9, 10, 17, 2, 9, 13 )
x_koordinat = np.arange(len(datapenjualan_makananpenutup))
df = pd.DataFrame({'Data' : datapenjualan_makananpenutup, 'Sold' : terjual})
df.sort_values(by='Sold', inplace = True, ascending = False)
warna = ['#0000FF' for _ in range(len(df))]
warna [10] = '#FF0000'
warna [11] = '#FF0000'
warna [12] = '#FF0000'
plt.title ('Daftar makanan terpopuler Kafe Biru')
plt.bar(x_koordinat, df['Sold'], tick_label=df['Data'], color=warna)
plt.xticks(rotation=90)
plt.ylabel('Terjual')
plt.show()
###Output
_____no_output_____
###Markdown
**Question No 4.**Answer no 4: I used a **Heatmap** because case 4 describes average CPU usage per hour over a week. In practice, a CPU that is used for hours on end gets hotter, which is why a Heatmap fits this case: **a Heatmap is a data visualisation that uses different colour intensities**, so here it can visualise the CPU load over the week. Usually, the higher the value, the darker the colour. **At what time do the workers usually have lunch?**The Heatmap shows that the CPU is idle or "cool" (blue) at 13.00, in other words the workers have lunch and take a break at 13.00. **Do the workers work on the weekend?**The Heatmap shows no significant CPU activity on the weekend (Saturday and Sunday). So the workers are off on Saturday, since the CPU stays blue (no activity) for that whole day. On Sunday evening, however, they start working again, as indicated by CPU activity between 18.00 and 20.00. **On which day do the workers start working on their computers in the evening?**On Sunday between 18.00 and 20.00 there is CPU activity, so the workers start working in the evening on Sunday.
###Code
import seaborn as sbr
hari = ['Senin', 'Selasa', 'Rabu', 'Kamis', 'Jumat', 'Sabtu', 'Minggu']
jam = list(np.arange(0,24)) # hour values 0 - 23 (24-1)
datapenggunaan_cpu = [[2, 2, 4, 2, 4, 1, 1, 4, 4, 12, 22, 23, 45, 9, 33, 56, 23, 40, 21, 6, 6, 2, 2, 3], # Senin
[1, 2, 3, 2, 3, 2, 3, 2, 7, 22, 45, 44, 33, 9, 23, 19, 33, 56, 12, 2, 3, 1, 2, 2], # Selasa
[2, 3, 1, 2, 4, 4, 2, 2, 1, 2, 5, 31, 54, 7, 6, 34, 68, 34, 49, 6, 6, 2, 2, 3], # Rabu
[1, 2, 3, 2, 4, 1, 2, 4, 1, 17, 24, 18, 41, 3, 44, 42, 12, 36, 41, 2, 2, 4, 2, 4], # Kamis
[4, 1, 2, 2, 3, 2, 5, 1, 2, 12, 33, 27, 43, 8, 38, 53, 29, 45, 39, 3, 1, 1, 3, 4], # Jumat
[2, 3, 1, 2, 2, 5, 2, 8, 4, 2, 3, 1, 5, 1, 2, 3, 2, 6, 1, 2, 2, 1, 4, 3], # Sabtu
[1, 2, 3, 1, 1, 3, 4, 2, 3, 1, 2, 2, 5, 3, 2, 1, 4, 2, 45, 26, 33, 2, 2, 1], # Minggu
]
sbr.heatmap(datapenggunaan_cpu, yticklabels=hari, xticklabels=jam, cmap ='coolwarm')
###Output
_____no_output_____
###Markdown
**Question no 5.**Answer no 5: I chose a **Scatter Plot** because the problem describes mushroom growth that is spread out. A Scatter Plot fits this case well because we can treat the plot as a map of where the mushrooms grow. **Roughly where is the centre of the mushroom growth / the centre coordinate (x, y)?**The centre of the growth can be found using the mode from statistics. Using that statistic we obtain the mode of x (7.82) and the mode of y (-3.41), so we can conclude that the centre of the mushroom growth is at the coordinate (7.82, -3.41).
###Code
import matplotlib.pyplot as plt
import statistics as sts
x = [4.61, 5.08, 5.18, 7.82, 10.46, 7.66, 7.6, 9.32, 14.04, 9.95, 4.95,
7.23, 5.21, 8.64, 10.08, 8.32, 12.83, 7.51, 7.82, 6.29, 0.04, 6.62,
13.16, 6.34, 0.09, 10.04, 13.06, 9.54, 11.32, 7.12, -0.67, 10.5, 8.37,
7.24, 9.18, 10.12, 12.29, 8.53, 11.11, 9.65, 9.42, 8.61, -0.67, 5.94,
6.49, 7.57, 3.11, 8.7, 5.28, 8.28, 9.55, 8.33, 13.7, 6.65, 2.4, 3.54,
9.19, 7.51, -0.68, 8.47, 14.82, 5.31, 14.01, 8.75, -0.57, 5.35, 10.51,
3.11, -0.26 , 5.74, 8.33, 6.5, 13.85, 9.78, 4.91, 4.19, 14.8, 10.04,
13.47, 3.28]
y = [-2.36, -3.41, 13.01, -2.91, -2.28, 12.83, 13.13, 11.94, 0.93,
-2.76, 13.31, -3.57, -2.33, 12.43, -1.83, 12.32, -0.42, -3.08, -2.98,
12.46, 8.34, -3.19, -0.47, 12.78, 2.12, -2.72, 10.64, 11.98, 12.21,
12.52, 5.53, 11.72, 12.91, 12.56, -2.49, 12.08, -1.09, -2.89, -1.78,
-2.47, 12.77, 12.41, 5.33, -3.23, 13.45, -3.41, 12.46, 12.1, -2.56,
12.51, -2.37, 12.76, 9.69, 12.59, -1.12, -2.8, 12.94, -3.55, 7.33,
12.59, 2.92, 12.7, 0.5, 12.57, 6.39, 12.84, -1.95, 11.76, 6.82, 12.44,
13.28, -3.46, 0.7, -2.55, -2.37, 12.48, 7.26, -2.45, 0.31, -2.51]
plt.figure(figsize=(20,10))
plt.scatter(sts.mode(x), sts.mode(y), color ='#FF0000')
plt.scatter(x,y)
print("Pusat pertumbuhan jamur pada koordinat ", "{",sts.mode(x),"}", ",", "{", sts.mode(y),"}", "dengan titik warna merah")
plt.show()
###Output
Pusat pertumbuhan jamur pada koordinat { 7.82 } , { -3.41 } dengan titik warna merah
|
jupyter_notebooks/0018_filter_features.ipynb | ###Markdown
Filter features
###Code
import pandas as pd
from pandas_profiling import ProfileReport
df = pd.read_pickle('features_20201124.pkl')
del df['language']
del df['smog_score']
del df['ari_score']
del df['coleman_liau_score']
del df['new_dale_chall_score']
del df['flesch_score']
del df['flesch_kincaid_score']
del df['lix_score']
del df['asl_flesch']
del df['asw_flesch']
del df['asl_fog']
del df['new_dale_chall_class']
del df['pmw']
del df['acw']
del df['asw']
del df['words']
del df['characters']
del df['syllables']
del df['strain_score']
del df['acs']
del df['ass']
del df['ppw_fog']
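# Note (sketch only): the same filtering could be written as a single call, e.g.
# df = df.drop(columns=['language', 'smog_score', ..., 'ppw_fog'])
# The `del` statements above have already removed these columns, so the one-liner
# is shown purely as a more compact alternative.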
###Output
_____no_output_____
###Markdown
Save to pickle file
###Code
df.to_pickle('filtered_20201125.pkl')
###Output
_____no_output_____
###Markdown
Pandas profiling report
###Code
profile = ProfileReport(df, title='Pandas profiling report')
profile.to_file('filter_20210106.html')
###Output
_____no_output_____ |
superviselySDK/help/jupyterlab_scripts/src/tutorials/02_data_management/data_management.ipynb | ###Markdown
Supervisely Tutorial 2 Online API basics: organize and explore workspaces, projects and neural networks In this tutorial we will cover the basics of how to script your interactions with the Supervisely web instance using our online API.You will learn how to query the web instance for existing projects and datasets, get and update their metadata and download images and their labeling data locally for further processing with our Python SDK. You will also see how to add an existing neural network from our public repository, read off its metainformation and download the weights and inference confi locally.In the follow up tutorials (4 and 5) you will learn how to request neural net inference from the web instance and how to automate complex data processing pipelines using Supervisely workflows. Necessary imports
###Code
import supervisely_lib as sly
# PyPlot only for rendering images inside Jupyter.
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Initialize API access with your credentialsBefore starting to interact with a Supervisely web instance using our API, you need to supply your use credentials: your unique access token that you can find under your profile details:
###Code
import os
# Jupyter notebooks hosted on Supervisely can get their user's
# credentials from the environment varibales.
# If you are running the notebook outside of Supervisely, plug
# the server address and your API token here.
# You can find your API token in the account settings:
# -> click your name in the top-right corner
# -> select "account settings"
# -> select "API token" tab on top.
address = os.environ['SERVER_ADDRESS']
token = os.environ['API_TOKEN']
print("Server address: ", address)
print("Your API token: ", token)
# Initialize the API access object.
api = sly.Api(address, token)
###Output
Server address: http://192.168.1.69:5555
Your API token: OfaV5z24gEQ7ikv2DiVdYu1CXZhMavU7POtJw2iDtQtvGUux31DUyWTXW6mZ0wd3IRuXTNtMFS9pCggewQWRcqSTUi4EJXzly8kH7MJL1hm3uZeM2MCn5HaoEYwXejKT
###Markdown
Workspace managementIn Supervisely, workspaces are the top level groups of your work items. Each workspace contains plugins, such as neural network implementations and projects with datasets.Let us start with listing all the existing workspaces:
###Code
# In Supervisely, a user can belong to multiple teams.
# Everyone has a default team with just their user in it.
# We will work in the context of that default team.
team = api.team.get_list()[0]
# Query for all the workspaces in the selected team
workspaces = api.workspace.get_list(team.id)
print("Team {!r} contains {} workspaces:".format(team.name, len(workspaces)))
for workspace in workspaces:
print("{:<8}{:<15s}".format(workspace.id, workspace.name))
###Output
Team 'max' contains 22 workspaces:
9 my_super_workspace_002
10 region_pipeline
34 script1
35 dtl_bug
39 script2
40 train_test
41 ws7
44 dfgd
45 test_dtl_segmentation
55 my_super_workspace
56 test_workspace_001
57 test_workspace_002
58 test_api
60 test_api2
67 my_super_workspace_001
69 test_workspace
82 tutorial_04
83 tutorial_05_backup
84 tutorial_05
90 my_super_workspace_003
92 test_new
111 test_fast_agent
###Markdown
We can quickly read off more details on the workspace, like the description, creation and last modification times:
###Code
print(workspaces[0])
###Output
WorkspaceInfo(id=9, name='my_super_workspace_002', description='super workspace description', team_id=9, created_at='2019-01-20T13:25:19.142Z', updated_at='2019-01-20T13:25:19.142Z')
###Markdown
For this tutorial, we will create a new workspace to avoid interfering with any existing work.
###Code
workspace_name = 'tutorial_workspace'
# Just in case there is already a workspace with this name,
# we can ask the web instance for a new unique name to use.
if api.workspace.exists(team.id, workspace_name):
workspace_name = api.workspace.get_free_name(team.id, workspace_name)
# Create the workspace and print out its metadata.
workspace = api.workspace.create(team.id, workspace_name, 'tutorial workspace description')
print(workspace)
###Output
WorkspaceInfo(id=114, name='tutorial_workspace', description='tutorial workspace description', team_id=9, created_at='2019-04-07T15:59:02.645Z', updated_at='2019-04-07T15:59:02.645Z')
###Markdown
We can query for workspace metadata both by workspace name and by numeric ID:
###Code
workspace_by_name = api.workspace.get_info_by_name(team.id, workspace_name)
print(workspace_by_name)
print()
workspace_by_id = api.workspace.get_info_by_id(workspace.id)
print(workspace_by_id)
###Output
WorkspaceInfo(id=114, name='tutorial_workspace', description='tutorial workspace description', team_id=9, created_at='2019-04-07T15:59:02.645Z', updated_at='2019-04-07T15:59:02.645Z')
WorkspaceInfo(id=114, name='tutorial_workspace', description='tutorial workspace description', team_id=9, created_at='2019-04-07T15:59:02.645Z', updated_at='2019-04-07T15:59:02.645Z')
###Markdown
Both workspace name and description can be changed later:
###Code
# update workspace name, description, or both
new_name = 'my_super_workspace'
new_description = 'super workspace description'
if api.workspace.exists(team.id, new_name):
new_name = api.workspace.get_free_name(team.id, new_name)
print("Before update: {}\n".format(workspace))
workspace = api.workspace.update(workspace.id, new_name, new_description)
print("After update: {}".format(workspace))
###Output
Before update: WorkspaceInfo(id=114, name='tutorial_workspace', description='tutorial workspace description', team_id=9, created_at='2019-04-07T15:59:02.645Z', updated_at='2019-04-07T15:59:02.645Z')
After update: WorkspaceInfo(id=114, name='my_super_workspace_004', description='super workspace description', team_id=9, created_at='2019-04-07T15:59:02.645Z', updated_at='2019-04-07T15:59:02.645Z')
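###Markdown
The exists / get_free_name / create pattern above comes up often, so it can be convenient to wrap it into a small helper. The sketch below is such a wrapper built only from the calls already shown; it is not part of the SDK and returns the existing workspace instead of creating a uniquely named copy.
###Code
def get_or_create_workspace(api, team_id, name, description=''):
    """Return the workspace with the given name, creating it if it does not exist.

    A convenience wrapper (not part of the SDK) around the calls shown above.
    """
    if api.workspace.exists(team_id, name):
        return api.workspace.get_info_by_name(team_id, name)
    return api.workspace.create(team_id, name, description)

# The workspace already exists at this point, so the helper simply returns it.
print(get_or_create_workspace(api, team.id, workspace.name))
###Output
_____no_output_____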
###Markdown
Project managementA project is a group of datasets with common labeling metadata (the set of available classes and tags). For example, one can have a project of labeled road scenes (so the taxonomy of the classes will relate to vehicles, pedestrians and road signs), and inside the project have a separate dataset for every day on which the data was collected. We will start populating our new workspace by cloning one of the projects publicly available in Supervisely into it.
###Code
# 'lemons_annotated' is one of our out of the box demo projects, so
# we will make a copy with the appropriate name.
project_name = 'lemons_annotated_clone'
if api.project.exists(workspace.id, project_name):
project_name = api.project.get_free_name(workspace.id, project_name)
task_id = api.project.clone_from_explore('Supervisely/Demo/lemons_annotated', workspace.id, project_name)
# The clone call returns immediately, so the code does not
# have to block on waiting for the task to complete.
# Since we do not have much to do in the meantime, just wait for the task.
api.task.wait(task_id, api.task.Status.FINISHED)
# Now that the task has finished we can query for the project metadata.
project = api.project.get_info_by_name(workspace.id, project_name)
print("Project {!r} has been sucessfully cloned from explore: ".format(project.name))
print(project)
###Output
Project 'lemons_annotated_clone' has been successfully cloned from explore: 
ProjectInfo(id=1276, name='lemons_annotated_clone', description='', size='861069', readme='', workspace_id=114, created_at='2019-04-07T15:59:08.975Z', updated_at='2019-04-07T15:59:08.975Z')
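###Markdown
The clone / wait / query sequence is worth wrapping into a reusable helper if you plan to copy several public projects. The sketch below defines such a wrapper (it is not part of the SDK) without calling it, so the workspace keeps exactly one project for the next step.
###Code
def clone_demo_project(api, src_path, dst_workspace_id):
    """Clone a public project into the given workspace and wait for completion.

    A convenience wrapper (not part of the SDK) around the calls shown above.
    """
    dst_name = src_path.split('/')[-1] + '_clone'
    if api.project.exists(dst_workspace_id, dst_name):
        dst_name = api.project.get_free_name(dst_workspace_id, dst_name)
    clone_task_id = api.project.clone_from_explore(src_path, dst_workspace_id, dst_name)
    api.task.wait(clone_task_id, api.task.Status.FINISHED)
    return api.project.get_info_by_name(dst_workspace_id, dst_name)
###Output
_____no_output_____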
###Markdown
Now that we have a project in the new workspace, let us make sure it is the only one. Query and print out the projects in the workspace:
###Code
projects = api.project.get_list(workspace.id)
print("Workspace {!r} contains {} projects:".format(workspace.name, len(projects)))
for project in projects:
print("{:<5}{:<15s}".format(project.id, project.name))
###Output
Workspace 'my_super_workspace_004' contains 1 projects:
1276 lemons_annotated_clone
###Markdown
We can query project metadata both by project name and by numeric id:
###Code
# Get project info by name
project = api.project.get_info_by_name(workspace.id, project_name)
if project is None:
print("Workspace {!r} not found".format(project_name))
else:
print(project)
print()
# Get project info by id.
project_id = project.id
project = api.project.get_info_by_id(project_id)
if project is None:
    print("Project with id={!r} not found".format(project_id))
else:
print(project)
###Output
ProjectInfo(id=1276, name='lemons_annotated_clone', description='', size='861069', readme='', workspace_id=114, created_at='2019-04-07T15:59:08.975Z', updated_at='2019-04-07T15:59:08.975Z')
ProjectInfo(id=1276, name='lemons_annotated_clone', description='', size='861069', readme='', workspace_id=114, created_at='2019-04-07T15:59:08.975Z', updated_at='2019-04-07T15:59:08.975Z')
###Markdown
Separately, we can query for the number of datasets and the number of images in a project:
###Code
# get number of datasets and images in project
datasets_count = api.project.get_datasets_count(project.id)
images_count = api.project.get_images_count(project.id)
print("Project {!r} contains:\n {} datasets \n {} images\n".format(project.name, datasets_count, images_count))
###Output
Project 'lemons_annotated_clone' contains:
1 datasets
6 images
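###Markdown
The same counters can be printed for every project in the workspace to get a quick overview of its contents; the sketch below simply combines the calls already demonstrated.
###Code
# Sketch: print dataset and image counts for every project in the workspace.
for prj in api.project.get_list(workspace.id):
    print("{:<30s} datasets: {:<3} images: {:<5}".format(
        prj.name,
        api.project.get_datasets_count(prj.id),
        api.project.get_images_count(prj.id)))
###Output
_____no_output_____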
###Markdown
Get the labeling meta information for the project - the set of available object classes and tags. We get back a serialized project meta, which can be conveniently parsed into a `ProjectMeta` object from our Python SDK. See our Tutorial 1 for a detailed guide on how to work with project metadata using the SDK.
###Code
meta_json = api.project.get_meta(project.id)
meta = sly.ProjectMeta.from_json(meta_json)
print(meta)
###Output
ProjectMeta:
Object Classes
+-------+--------+----------------+
| Name | Shape | Color |
+-------+--------+----------------+
| kiwi | Bitmap | [255, 0, 0] |
| lemon | Bitmap | [81, 198, 170] |
+-------+--------+----------------+
Image Tags
+------+------------+-----------------+
| Name | Value type | Possible values |
+------+------------+-----------------+
+------+------------+-----------------+
Object Tags
+------+------------+-----------------+
| Name | Value type | Possible values |
+------+------------+-----------------+
+------+------------+-----------------+
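###Markdown
The parsed meta can also be inspected programmatically. The sketch below assumes that `ProjectMeta` exposes an `obj_classes` collection whose items have `name`, `geometry_type` and `color` attributes, as in the SDK version used in this tutorial; see Tutorial 1 for the full metadata API.
###Code
# Sketch: iterate over the object classes defined in the project meta.
# Assumes meta.obj_classes yields ObjClass objects with .name,
# .geometry_type and .color attributes.
for obj_class in meta.obj_classes:
    print("class: {:<8s} geometry: {:<8s} color: {}".format(
        obj_class.name, obj_class.geometry_type.__name__, obj_class.color))
###Output
_____no_output_____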
###Markdown
List the datasets from the given project:
###Code
datasets = api.dataset.get_list(project.id)
print("Project {!r} contains {} datasets:".format(project.name, len(datasets)))
for dataset in datasets:
print("Id: {:<5} Name: {:<15s} images count: {:<5}".format(dataset.id, dataset.name, dataset.images_count))
###Output
Project 'lemons_annotated_clone' contains 1 datasets:
Id: 1717 Name: ds1 images count: 6
###Markdown
List all the images for a given dataset, their sizes, dimensions and the number of labeled objects:
###Code
dataset = datasets[0]
images = api.image.get_list(dataset.id)
print("Dataset {!r} contains {} images:".format(dataset.name, len(images)))
for image in images:
print("Id: {:<5} Name: {:<15s} labels count: {:<5} size(bytes): {:<10} width: {:<5} height: {:<5}"
.format(image.id, image.name, image.labels_count, image.size, image.width, image.height))
###Output
Dataset 'ds1' contains 6 images:
Id: 146018 Name: IMG_0748.jpeg labels count: 3 size(bytes): 155790 width: 1067 height: 800
Id: 146019 Name: IMG_1836.jpeg labels count: 3 size(bytes): 140222 width: 1067 height: 800
Id: 146020 Name: IMG_3861.jpeg labels count: 4 size(bytes): 148388 width: 1067 height: 800
Id: 146021 Name: IMG_4451.jpeg labels count: 5 size(bytes): 135689 width: 1067 height: 800
Id: 146022 Name: IMG_2084.jpeg labels count: 7 size(bytes): 142097 width: 1067 height: 800
Id: 146023 Name: IMG_8144.jpeg labels count: 4 size(bytes): 138883 width: 1067 height: 800
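###Markdown
Since the per-image fields are plain values, simple dataset-level statistics can be computed locally; the sketch below just aggregates the fields printed above.
###Code
# Sketch: aggregate simple statistics over the image list from above.
total_labels = sum(image.labels_count for image in images)
total_bytes = sum(int(image.size) for image in images)
print("Dataset {!r}: {} images, {} labeled objects, {:.2f} MB of image data".format(
    dataset.name, len(images), total_labels, total_bytes / (1024 * 1024)))
###Output
_____no_output_____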
###Markdown
Download an image along with its annotation (all the labeling information for that image):
###Code
# Download and display the image.
image = images[0]
img = api.image.download_np(image.id)
print("Image Shape: {}".format(img.shape))
imgplot = plt.imshow(img)
# Download the serialized JSON annotation for the image.
ann_info = api.annotation.download(image.id)
# Parse the annotation using the Supervisely Python SDK
# and instantiate convenience wrappers for the objects in the annotation.
ann = sly.Annotation.from_json(ann_info.annotation, meta)
# Render the object labels on top of the original image.
img_with_ann = img.copy()
ann.draw(img_with_ann)
imgplot = plt.imshow(img_with_ann)
###Output
_____no_output_____
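###Markdown
The same download-and-parse loop can be applied to every image in the dataset, for example to count how many objects of each class it contains. The sketch below assumes that `Annotation.labels` yields labels exposing `obj_class.name`, as in the SDK version used here.
###Code
from collections import Counter

# Sketch: count labeled objects per class across the whole dataset.
# Assumes ann.labels items expose .obj_class.name (see Tutorial 1).
class_counter = Counter()
for image in images:
    ann = sly.Annotation.from_json(api.annotation.download(image.id).annotation, meta)
    for label in ann.labels:
        class_counter[label.obj_class.name] += 1
print("Objects per class in dataset {!r}:".format(dataset.name))
for class_name, count in class_counter.most_common():
    print("  {:<8s}{}".format(class_name, count))
###Output
_____no_output_____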
###Markdown
Neural network managementHere we will only cover working with neural network metadata. There is a separate tutorial (Supervisely Tutorial 4) on running neural network training and inference. First, we will clone one of the models publicly available in Supervisely into our workspace:
###Code
# Set the destination model name within our workspace
model_name = 'yolo_coco'
# Grab a unique name in case the one we chose initially is busy.
if api.model.exists(workspace.id, model_name):
model_name = api.model.get_free_name(workspace.id, model_name)
# Request the model to be copied from our public repository.
# This kicks off an asynchronous task.
task_id = api.model.clone_from_explore('Supervisely/Model Zoo/YOLO v3 (COCO)', workspace.id, model_name)
# Wait for the copying to complete.
api.task.wait(task_id, api.task.Status.FINISHED)
# Query the metadata for the copied model.
model = api.model.get_info_by_name(workspace.id, model_name)
print("Model {!r} has been sucessfully cloned from explore: ".format(model.name))
print(model)
###Output
Model 'yolo_coco' has been successfully cloned from explore: 
ModelInfo(id=360, name='yolo_coco', description='Trained on COCO. Can be used for both training and inference', config=None, hash='0/o/I7/TaFtVZ8Yk5JXHkBaI9HRTbfqQglvvC7rW8yDqRcFmictKTNsu5oGDxfkVgkVHZ34rFn4dZgVEEexjEjrRcR1pIl2voLTgzKTf5nDRCHEMJLAWleyzFZVJrUEMg3R.tar', only_train=False, plugin_id=6, plugin_version='latest', size='248027648', weights_location='uploaded', readme='', task_id=None, user_id=9, team_id=9, workspace_id=114, created_at='2019-04-07T15:59:28.334Z', updated_at='2019-04-07T15:59:28.334Z')
###Markdown
We can also download the model weights and config (which describes the set of classes the model can predict) locally as a .tar file:
###Code
api.model.download_to_tar(workspace.id, model.name, './model.tar')
###Output
_____no_output_____
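###Markdown
The downloaded archive is a regular tar file, so its contents (the weights and the model config) can be listed with the standard library. The sketch below only inspects the archive and does not extract it.
###Code
import tarfile

# Sketch: peek inside the downloaded model archive without extracting it.
with tarfile.open('./model.tar') as archive:
    for member in archive.getmembers():
        print("{:<12}{}".format(member.size, member.name))
###Output
_____no_output_____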